Prosecution Insights
Last updated: April 19, 2026
Application No. 18/647,953

GENERATING TEXT USING MACHINE-LEARNED LARGE LANGUAGE MODELS AND PRESENTING TEXT ON USER INTERFACE

Non-Final Office Action (§103, §112)

Filed: Apr 26, 2024
Examiner: FIBBI, CHRISTOPHER J
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Textio Inc.
OA Round: 1 (Non-Final)

Grant probability: 53% (Moderate)
Predicted OA rounds: 1-2
Predicted time to grant: 4y 3m
Grant probability with interview: 90%
Examiner Intelligence

Career allow rate: 53% (199 granted / 376 resolved; -2.1% vs Tech Center average)
Interview lift: strong, +37.6% allowance among resolved cases with an interview
Typical timeline: 4y 3m average prosecution; 40 applications currently pending
Career history: 416 total applications across all art units
Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 62.9% (+22.9% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 376 resolved cases.
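As a sanity check, the headline figures above follow from the raw career counts. A quick sketch (the with-interview figure is the page's own estimate, and the addition below is only an approximation of how the lift relates to it):

```python
# Reproduce the headline examiner statistics from the raw career counts.
granted = 199    # applications allowed
resolved = 376   # applications resolved (allowed or abandoned)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # -> Career allow rate: 52.9%

# The page reports a +37.6-point allowance lift for cases with an examiner
# interview; adding it to the base rate approximates the 90% figure shown.
print(f"With interview: {allow_rate + 0.376:.1%}")   # -> With interview: 90.5%
```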

Office Action (§103, §112)
DETAILED ACTION

This action is in response to the original filing dated 26 April 2024. Claims 1-20 are pending and have been considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 4, 11 and 18 are objected to because of the following informalities: claims 4, 11 and 18 recite “receiving an indication the user modified…”. Examiner suggests “receiving an indication that the user modified…”. Appropriate correction is required.

Claims 6, 13 and 20 are objected to because of the following informalities: claims 6, 13 and 20 use the acronyms API, RPC and gRPC without first defining the terms. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 8 and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 1, 8 and 15 recite the limitation “the candidate starting text”. There is insufficient antecedent basis for this limitation in the claim. Examiner notes that this term appears to be established as the candidate “starter” text.

Claims 2-7, 9-14 and 16-20 are rejected for incorporating the deficiencies of their base independent claims.
Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Eisenstadt et al. (US 2025/0165721 A1) in view of Gray et al. (US 2025/0191481 A1).

As for independent claim 1, Eisenstadt teaches a method comprising: displaying a user interface configured with an editor to allow a user to enter and edit an electronic document [(e.g. see Eisenstadt paragraphs 0026, 0032, 0041 and Figs. 4A-5) ”user 102 will be editing output text 400 prior to sending output text 400 as an email … Actual edits (e.g., edit 421) are received to output text 400 through UI 300 in operation 728 … a reply email message (i.e., a reply to electronic document 200)”].
responsive to receiving an indication from a user to generate starting text, presenting one or more topics and one or more keywords related to the topics for selection [(e.g. see Eisenstadt 0028, 0030, 0031 and Fig. 3 numerals 301-304, 141-143) ”When user 102 clicks generate email button 305 … UI 300 offers several classes of automatic document generation, indicated by a reply to inquiry selection 301, an offer discount selection 302, a make proposal selection 303, and an address a concern selection 304. Each selection produces a different class of language model prompt 162. The available selections are tied to various training-based capabilities of language model prompt generator 160 … passage 141 as “May 2” … passage 142 as “5% since available after requested date” … passage 143 as “Thursday morning 10-11 am””]. Examiner notes that Fig. 3 numerals 301-304 are topics and Fig. 3 numerals 141-143 are keywords for the selected topic.

generating a prompt to a machine-learned language model, the prompt specifying at least the selected topic, the selected keywords, and a request to generate a set of candidate starter texts incorporating the selected topic and the selected keywords of the user [(e.g. see Eisenstadt paragraphs 0028, 0030, 0031) ”When user 102 clicks generate email button 305, orchestrator 176 tasks language model prompt generator 160 to generate language model prompt 162 and then tasks language model 164 to generate output text 400 … Each selection produces a different class of language model prompt 162. The available selections are tied to various training-based capabilities of language model prompt generator 160 … This information is available from user 102, but may not be easily-located by an automated system and/or may not be available in the other data sources used by language model prompt generator 160 (e.g., CRM data 150 and enterprise suite data 154). Without the information provided by user 102 in response to plurality of topic-specific prompts 130, there is a risk that language model 164 will hallucinate and that language model prompt generator 160 will miss an issue raised in electronic document 200”].

receiving, from the machine-learned language model, a response generated by executing the machine-learned language model on the prompt [(e.g. see Eisenstadt paragraphs 0026, 0031 and Fig. 4A) ”orchestrator 176 tasks language model prompt generator 160 to generate language model prompt 162 and then tasks language model 164 to generate output text 400 … Language model prompt 162 is provided to a language model 164 that generates output text 400. Language model 164 may be an LLM or a multimodal large language model (MMLLM)), and in some examples comprises GPT-4, chatGPT, or an equivalent. Orchestrator 176 routes output text 400 to UI 300, as shown in FIGS. 4A-5”].

for a candidate starter text, detecting issues for mitigation in the candidate starting text to evaluate whether a degree of the detected issues in the candidate starting text is less than a predetermined threshold [(e.g. see Eisenstadt paragraphs 0030, 0033, 0042, 0043) ”This information is available from user 102, but may not be easily-located by an automated system and/or may not be available in the other data sources used by language model prompt generator 160 (e.g., CRM data 150 and enterprise suite data 154). Without the information provided by user 102 in response to plurality of topic-specific prompts 130, there is a risk that language model 164 will hallucinate and that language model prompt generator 160 will miss an issue raised in electronic document 200 … The annotations are available when there is traceability of critical information to specific input sources. The highlighting brings the attention of a human user, such as user 102 information elements for which the user may wish to verify the accuracy and/or source … Decision operation 730 determines whether output text 400 passes RAI criteria 172, prior to transmitting output text 400 across computer network 930. If not, operation 732 prevents transmitting output text 400 until output text 400 is changed to passing RAI criteria 172. Operation 734 displays warning message 174, and flowchart 700 either terminates or returns to operation 724 for user 102 to amend output text 400 … However, if decision operation 730 verifies that output text 400 passes RAI criteria 172, operation 736 transmits output text 400 across computer network 930 to source 122 (e.g., as a an email message to the email address of correspondent 104). In some examples, output text 400 extends email thread 114”].

and an evaluation of the candidate starter texts to the user [(e.g. see Eisenstadt paragraphs 0032, 0033) ”Output text 400 is formatted as a correspondence from user 102 (Roy) to correspondent 104 (Nancy), and has annotations to enable user 102 to verify the source of various information elements. For example, an annotation 401 is shown as highlighting of “May 2nd”, which is information that user 102 may wish to verify … The highlighting brings the attention of a human user, such as user 102 information elements for which the user may wish to verify the accuracy and/or source”].

Eisenstadt does not specifically teach generating a pane element on the user interface to present the candidate starting texts or responsive to receiving a selection of a candidate starter text, inserting the selected candidate starter text as an input document into the editor of the interface. However, in the same field of invention, Gray teaches: generating a pane element on the user interface to present the candidate starting texts [(e.g. see Gray paragraph 0051 and Fig. 3 numeral 352) ”an LLM service may be provided within the context of a collaboration application as an add-in or plug-in. FIG. 3 may illustrate such an example. That is, the panel 352 may provide LLM functionality within or in association with the panel 350 of the collaboration application 221. As a student begins drafting the essay within the panel 350, the student may submit prompts 354 and 356 to the LLM service 105 via the panel 352”].

responsive to receiving a selection of a candidate starter text, inserting the selected candidate starter text as an input document into the editor of the interface [(e.g. see Gray paragraphs 0054, 0059, 0060 and Figs. 3-4) ”as the student drafts the assignment, information from the responses 360 and 362 may be incorporated into the content of the assignment … the guidance function 110 may highlight text within the panel 450 to indicate that it relates to an interaction with the LLM service 105. As shown in FIG. 4, the guidance function 110 generates highlight 464a, highlight 464b, and highlight 464c to visually indicate that these portions of text correspond to interactions with the LLM service 105. For example, the highlight 464a relates to the response 460, and the highlights 464b and 464c relate to the response 462 … the student can see that the portion of text covered by the highlight 464c is a verbatim copy of the response 462”].
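Read together, the limitations mapped above describe a pipeline of roughly the following shape. A minimal, hypothetical sketch in Python (the model call is stubbed and every name is invented; this is not the applicant's or either reference's actual implementation):

```python
# Hypothetical sketch of the claim-1 flow: topic/keyword selection -> prompt
# -> candidate starter texts -> issue screening -> pane -> editor insertion.
from dataclasses import dataclass

ISSUE_THRESHOLD = 0.5  # invented "degree of detected issues" cutoff

def build_prompt(topic: str, keywords: list[str]) -> str:
    # The prompt specifies the selected topic, the selected keywords, and
    # a request for a set of candidate starter texts.
    return (f"Write three candidate starter texts about '{topic}' "
            f"using the keywords: {', '.join(keywords)}.")

def call_language_model(prompt: str) -> list[str]:
    # Stub standing in for the machine-learned language model.
    return [f"[candidate {i}] {prompt}" for i in range(3)]

def issue_score(text: str) -> float:
    # Stub issue detector; a real one would score bias or hallucination risk.
    return 0.1

@dataclass
class Editor:
    document: str = ""

def generate_and_insert(editor: Editor, topic: str,
                        keywords: list[str], selection: int) -> list[str]:
    candidates = call_language_model(build_prompt(topic, keywords))
    # Present only candidates whose detected-issue degree is below threshold.
    pane = [c for c in candidates if issue_score(c) < ISSUE_THRESHOLD]
    # Responsive to the user's selection, insert into the editor.
    editor.document = pane[selection]
    return pane

editor = Editor()
generate_and_insert(editor, "reply to inquiry", ["May 2", "Thursday 10-11 am"], 0)
print(editor.document.startswith("[candidate 0]"))  # -> True
```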
Therefore, considering the teachings of Eisenstadt and Gray, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add generating a pane element on the user interface to present the candidate starting texts and responsive to receiving a selection of a candidate starter text, inserting the selected candidate starter text as an input document into the editor of the interface, as taught by Gray, to the teachings of Eisenstadt because it reduces the time it takes a user to find and access relevant information and improves the basic user experience with respect to LLMs (e.g. see Gray paragraph 0025).

As for independent claim 8, Eisenstadt and Gray teach a non-transitory computer-readable storage medium. Claim 8 discloses substantially the same limitations as claim 1. Therefore, it is rejected with the same rationale as claim 1.

As for independent claim 15, Eisenstadt and Gray teach a system. Claim 15 discloses substantially the same limitations as claim 1. Therefore, it is rejected with the same rationale as claim 1.

Claims 2, 3, 6, 9, 10, 13, 16, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Eisenstadt et al. (US 2025/0165721 A1) in view of Gray et al. (US 2025/0191481 A1), as applied to claim 1 above, and further in view of Alikaniotis et al. (US 11,763,085 B1).

As for dependent claim 2, Eisenstadt and Gray teach the method as described in claim 1, but do not specifically teach wherein detecting issues for mitigation in the candidate starter text further comprises: applying a set of features to the candidate starter text, wherein a feature corresponds to detection of a respective category of bias and wherein applying the feature to the candidate starter text generates an impact score for the category of bias, generating an evaluation score for the candidate starter text by combining impact scores across the set of features, or determining whether the evaluation score is less than the predetermined threshold.

However, in the same field of invention, Alikaniotis teaches: wherein detecting issues for mitigation in the candidate starter text further comprises: applying a set of features to the candidate starter text, wherein a feature corresponds to detection of a respective category of bias and wherein applying the feature to the candidate starter text generates an impact score for the category of bias [(e.g. see Alikaniotis col 7 lines 45-48, col 10 lines 4-6, col 18 lines 37-57 and Fig. 5A) ”tone detection system 160 uses an anti-bias mechanism to determine at least one of the candidate tones that is displayed … For a particular first (input) text sequence, tone detection system 160 produces a tone score for each of the tone labels in a reference set of tone labels. A set of tone labels is associated with the particular text sequence … sentence level tone predictions shown in FIG. 5A may be produced as a result of processor execution of flow 200 on first portion 504. In FIG. 5A, first portion 504 is displayed with highlighting 506 because first portion 504 has been analyzed by an embodiment of tone detection system 160. Examples of highlighting 506 include but are not limited to adding a background color, changing the color of the text, and adding bold, italics, or underlining to the text.
When cursor 508 is positioned over any part of first portion 504, a sub-window overlays text input window 500 and displays sentence level tone labels 510, 512 and corresponding sentence level tone scores 514, 516. Sentence level tone labels 510, 512 and sentence level tone scores 514, 516 have been determined and displayed as a result of an embodiment of tone detection system 160 analyzing first portion 504 including any syntactic structure data associated with first portion 504. As can be seen from FIG. 5A, any portion of a text sequence can give rise to multiple tone labels. All or only a subset of the tone labels and/or tone scores determined by tone detection system 160 may be displayed”].

generating an evaluation score for the candidate starter text by combining impact scores across the set of features [(e.g. see Alikaniotis col 12 lines 20-25, col 14 lines 21-27) ”sentence level tone encoder instructions 208 compute a set of sentence level tone scores for the first portion of the text sequence that precedes the end of sentence signal. In an embodiment, a sentence level tone score is indicative of a presence or absence of a particular tone in the first portion of the text sequence … Sentence level tone encoder instructions 208 repeats the foregoing sentence level tone scoring operations on sentence level syntactic structure data 206 corresponding to other portions of document text 202. In an embodiment, sentence level tone encoding data 216 may be output for display to a user and/or provided as input to a document level tone detection”].

and determining whether the evaluation score is less than the predetermined threshold [(e.g. see Alikaniotis col 15 lines 47-52, col 19 lines 7-12) ”Sentence level tone score 530 includes a negative sign, which indicates that it has a different sentiment or polarity than sentence level tone scores 514, 516. As such, highlighting 524 may be different than highlighting 506. For example, highlighting 506 may include a yellow or green color while highlighting 524 may include a red color … anti-bias selection data include randomly selected document level tone scores that fall below a threshold tone score value or probability value, such as tone scores that are in the bottom k tone scores or are at least less than the lowest tone score in the top k tone scores or have a probability less than a probability threshold value”].

Therefore, considering the teachings of Eisenstadt, Gray and Alikaniotis, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add wherein detecting issues for mitigation in the candidate starter text further comprises: applying a set of features to the candidate starter text, wherein a feature corresponds to detection of a respective category of bias and wherein applying the feature to the candidate starter text generates an impact score for the category of bias, generating an evaluation score for the candidate starter text by combining impact scores across the set of features, and determining whether the evaluation score is less than the predetermined threshold, as taught by Alikaniotis, to the teachings of Eisenstadt and Gray because it improves the detection of the tone of written text and reduces the transmission of text containing errors (e.g. see Alikaniotis col 2 lines 47-61).

As for dependent claim 3, Eisenstadt, Gray and Alikaniotis teach the method as described in claim 2, but Eisenstadt and Gray do not specifically teach the following limitations. However, Alikaniotis teaches: further comprising: identifying one or more phrases in the candidate starter text that are detected to have text with one or more categories of bias; and for each identified phrase, generating indications over the phrases on the user interface associated with the category of bias for the identified phrase [(e.g. see Alikaniotis col 10 lines 4-6, col 18 lines 37-57 and Figs. 5A-B) ”tone detection system 160 uses an anti-bias mechanism to determine at least one of the candidate tones that is displayed … The sentence level tone predictions shown in FIG. 5A may be produced as a result of processor execution of flow 200 on first portion 504. In FIG. 5A, first portion 504 is displayed with highlighting 506 because first portion 504 has been analyzed by an embodiment of tone detection system 160. Examples of highlighting 506 include but are not limited to adding a background color, changing the color of the text, and adding bold, italics, or underlining to the text. When cursor 508 is positioned over any part of first portion 504, a sub-window overlays text input window 500 and displays sentence level tone labels 510, 512 and corresponding sentence level tone scores 514, 516. Sentence level tone labels 510, 512 and sentence level tone scores 514, 516 have been determined and displayed as a result of an embodiment of tone detection system 160 analyzing first portion 504 including any syntactic structure data associated with first portion 504. As can be seen from FIG. 5A, any portion of a text sequence can give rise to multiple tone labels. All or only a subset of the tone labels and/or tone scores determined by tone detection system 160 may be displayed”]. The motivation to combine is the same as that used for claim 2.

As for dependent claim 6, Eisenstadt and Gray teach the method as described in claim 1, but do not specifically teach the following limitation. However, Alikaniotis teaches: further comprising: providing a prompt to the model serving system via an API call to an endpoint of the model serving system, wherein the API call follows one or a combination of a REST API communication protocol, a RPC protocol, or a gRPC protocol [(e.g. see Alikaniotis col 3 lines 40-44, col 19 lines 44-53) ”text communication interface 112 may provide an application program interface (API) that allows executing programs or processes of user system 110 to make text sequences available for processing by GEC system 130 through an API call … GEC model 134 and reference data store 150 may each reside on at least one persistent and/or volatile storage devices that may reside within the same local network as at least one other device of computing system 100 and/or in a network that is remote relative to at least one other device of computing system 100. Thus, although depicted as being included in computing system 100, GEC model 134 and/or reference data store 150 may be part of computing system 100 or accessed by computing system 100 over a network, such as network 120”]. The motivation to combine is the same as that used for claim 1.

As for dependent claim 9, Eisenstadt and Gray teach the medium as described in claim 8; further, claim 9 discloses substantially the same limitations as claim 2. Therefore, it is rejected with the same rationale as claim 2.

As for dependent claim 10, Eisenstadt, Gray and Alikaniotis teach the medium as described in claim 9; further, claim 10 discloses substantially the same limitations as claim 3. Therefore, it is rejected with the same rationale as claim 3.

As for dependent claim 13, Eisenstadt and Gray teach the medium as described in claim 8; further, claim 13 discloses substantially the same limitations as claim 6. Therefore, it is rejected with the same rationale as claim 6.

As for dependent claim 16, Eisenstadt and Gray teach the system as described in claim 15; further, claim 16 discloses substantially the same limitations as claim 2. Therefore, it is rejected with the same rationale as claim 2.

As for dependent claim 17, Eisenstadt, Gray and Alikaniotis teach the system as described in claim 16; further, claim 17 discloses substantially the same limitations as claim 3.
Therefore, it is rejected with the same rationale as claim 3.

As for dependent claim 20, Eisenstadt and Gray teach the system as described in claim 15; further, claim 20 discloses substantially the same limitations as claim 6. Therefore, it is rejected with the same rationale as claim 6.

Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Eisenstadt et al. (US 2025/0165721 A1) in view of Gray et al. (US 2025/0191481 A1), as applied to claim 1 above, and further in view of Liu et al. (US 2019/0370629 A1).

As for dependent claim 4, Eisenstadt and Gray teach the method as described in claim 1, but do not specifically teach further comprising: evaluating each sentence of one or more sentences of the input document and storing the evaluations of the one or more sentences in a cache storage; receiving an indication of the user modified an existing sentence or added a new sentence to the input document; evaluating the modified sentence or the new sentence of the input document, presenting the evaluation of the modified sentence or the new sentence in the editor, or retrieving the evaluations of sentences that are unchanged from the cache storage without reevaluating the unchanged sentences.

However, in the same field of invention or solving similar problems, Liu teaches: further comprising: evaluating each sentence of one or more sentences of the input document and storing the evaluations of the one or more sentences in a cache storage; receiving an indication of the user modified an existing sentence or added a new sentence to the input document; evaluating the modified sentence or the new sentence of the input document [(e.g. see Liu paragraphs 0029, 0075) ”FIG. 5 is an example 500 of a calculating the distance of a new question sentence vector within the process of FIG. 2 … In step 206, chatbot 110 (see FIG. 1) determines whether local memory cache 112 (see FIG. 1) includes an entry that has a sentence vector that matches (i.e., is similar to) the sentence vector calculated in step 204”].

presenting the evaluation of the modified sentence or the new sentence in the editor [(e.g. see Liu paragraphs 0037, 0042) ”chatbot 110 (see FIG. 1) in step 218 generates a response based on the intent classification retrieved in step 224 … After step 218, chatbot 110 (see FIG. 1) presents the response to the user”].

and retrieving the evaluations of sentences that are unchanged from the cache storage without reevaluating the unchanged sentences [(e.g. see Liu paragraph 0041) ”In step 224, chatbot 110 (see FIG. 1) retrieves the intent classification of the question from the entry in LMC 112 (see FIG. 1) that includes the matching sentence vector. Since step 224 uses LMC 112 (see FIG. 1) and is performed directly after step 222 without a need to perform local NLC 114 (see FIG. 1)”].

Therefore, considering the teachings of Eisenstadt, Gray and Liu, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add further comprising: evaluating each sentence of one or more sentences of the input document and storing the evaluations of the one or more sentences in a cache storage; receiving an indication of the user modified an existing sentence or added a new sentence to the input document; evaluating the modified sentence or the new sentence of the input document, presenting the evaluation of the modified sentence or the new sentence in the editor, and retrieving the evaluations of sentences that are unchanged from the cache storage without reevaluating the unchanged sentences, as taught by Liu, to the teachings of Eisenstadt and Gray because caching reduces the latency of a response (e.g. see Liu paragraph 0020).

As for dependent claim 11, Eisenstadt and Gray teach the medium as described in claim 8; further, claim 11 discloses substantially the same limitations as claim 4. Therefore, it is rejected with the same rationale as claim 4.

As for dependent claim 18, Eisenstadt and Gray teach the system as described in claim 15; further, claim 18 discloses substantially the same limitations as claim 4. Therefore, it is rejected with the same rationale as claim 4.

Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Eisenstadt et al. (US 2025/0165721 A1) in view of Gray et al. (US 2025/0191481 A1), as applied to claim 1 above, and further in view of Campbell et al. (US 2012/0167010 A1).

As for dependent claim 5, Eisenstadt and Gray teach the method as described in claim 1, but do not specifically teach presenting the one or more topics and the one or more keywords further comprises: presenting a dropdown element including the one or more topics or responsive to receiving the selected topic, presenting the one or more keywords as selection chips on the interface.

However, in the same field of invention or solving similar problems, Campbell teaches: presenting the one or more topics and the one or more keywords further comprises: presenting a dropdown element including the one or more topics [(e.g. see Campbell paragraphs 0040, 0042, 0046) ”each interface element has an upside down triangle element to indicate, for example, that the interface element is a multi-purpose interface, e.g., one that provides a drop down menu of operations. Such menus may be used, for example, to permit the user to specify a particular or more specific action … A user may select the interests button 416, for example, to gain access to saved interests … interests 416 may be a multi-purpose interface element providing a user with options. One exemplary option may be to select a saved topic of interest to open in a topic interface … First interface element or topic interface 432 includes first content that is based on a "health and fitness"”].
and responsive to receiving the selected topic, presenting the one or more keywords as selection chips on the interface [(e.g. see Campbell paragraphs 0025, 0058, 0064 and Fig. 5) ”An example of this is provided in FIG. 5, which shows that when a user positions cursor 572 over tab 430, or more particularly the topic "health & fitness," menu 530 appears with additional interface elements (reposition icon 574, combine topic icon "+" … users are actively and automatically aided with suggested related keywords … yoga, pilates, running, meditation, diet and weight loss”].

Therefore, considering the teachings of Eisenstadt, Gray and Campbell, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add presenting the one or more topics and the one or more keywords further comprises: presenting a dropdown element including the one or more topics and responsive to receiving the selected topic, presenting the one or more keywords as selection chips on the interface, as taught by Campbell, to the teachings of Eisenstadt and Gray because it allows a user to quickly discover, investigate and refine a topic of interest (e.g. see Campbell paragraph 0009).

As for dependent claim 12, Eisenstadt and Gray teach the medium as described in claim 8; further, claim 12 discloses substantially the same limitations as claim 5. Therefore, it is rejected with the same rationale as claim 5.

As for dependent claim 19, Eisenstadt and Gray teach the system as described in claim 15; further, claim 19 discloses substantially the same limitations as claim 5. Therefore, it is rejected with the same rationale as claim 5.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Eisenstadt et al. (US 2025/0165721 A1) in view of Gray et al. (US 2025/0191481 A1), as applied to claim 1 above, and further in view of Carbune et al. (US 2025/0209261 A1).
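Claims 7 and 14, addressed next, recite a PII placeholder round trip: replace PII entities with placeholder tokens before prompting, then restore them in the response. A minimal illustrative sketch with invented names (dictionary-based substitution; not Carbune's actual mechanism):

```python
# Hypothetical PII placeholder round trip: redact entities before the model
# call, restore them after the response comes back.
def redact(prompt: str, pii: dict[str, str]) -> str:
    # pii maps entity text to its placeholder token, e.g. "Nancy" -> "#NAME".
    for entity, placeholder in pii.items():
        prompt = prompt.replace(entity, placeholder)
    return prompt

def restore(response: str, pii: dict[str, str]) -> str:
    for entity, placeholder in pii.items():
        response = response.replace(placeholder, entity)
    return response

pii = {"Nancy": "#NAME", "nancy@example.com": "#EMAIL"}
modified = redact("Draft a reply to Nancy at nancy@example.com", pii)
print(modified)  # -> Draft a reply to #NAME at #EMAIL
print(restore("Dear #NAME, thanks for your note.", pii))
# -> Dear Nancy, thanks for your note.
```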
As for dependent claim 7, Eisenstadt and Gray teach the method as described in claim 1, but do not specifically teach generating the prompt further comprises: identifying one or more pieces of personal identifiable information (PII) entities in the prompt; identifying one or more placeholder entries for the one or more PII entries; generating a modified prompt by replacing the PII entities with respective placeholder entities; or responsive to receiving the response, replacing the placeholder entities with the respective PII entities in the response. However, in the same field of invention, Carbune teaches: generating the prompt further comprises: identifying one or more pieces of personal identifiable information (PII) entities in the prompt; identifying one or more placeholder entries for the one or more PII entries; generating a modified prompt by replacing the PII entities with respective placeholder entities; and responsive to receiving the response, replacing the placeholder entities with the respective PII entities in the response [(e.g. see Carbune paragraph 0037) ”the contextual event 202 and/or the required proactive assistance include personal identification information (PII) or information otherwise sensitive in nature. In these instances, the digital assistant 150 and/or the local LLM 152 may redact or otherwise censor the sensitive information from the remote LLM 160. For example, the digital assistant 150 prompts the local LLM 152 to generate the prompt 156 for the remote LLM 160 that omits any private and/or sensitive information. For example, the digital assistant 150 may use prompt examples (e.g., few-shot examples for training the remote LLM 160 to perform a specific task), fine-tuning, or any other technique to anonymize the prompt 156. Additionally or alternatively, the digital assistant 150 only contacts the remote LLM 160 when the user consents, as discussed above. 
In these examples, private or sensitive information may be shared with the remote LLM 160 to enhance generation of the proactive assistance. When redacting private/sensitive information such as a contact name, the local LLM 152 may generate the prompt 156 by including placeholder tokens (e.g., #NAME) replacing the contact name such that the remote LLM 160 is only able to ascertain the placeholder token. As such, response content 162 generated by the remote LLM 160 requiring recitation of the contact name instead includes the placeholder token (e.g., #NAME) used by the local LLM 152 in the prompt 156, whereby the local LLM 152 (or the digital assistant 150) modifies the response content 162 by replacing instances of the placeholder token with the contact name"].

Therefore, considering the teachings of Eisenstadt, Gray and Carbune, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add generating the prompt further comprises: identifying one or more pieces of personal identifiable information (PII) entities in the prompt; identifying one or more placeholder entries for the one or more PII entries; generating a modified prompt by replacing the PII entities with respective placeholder entities; and responsive to receiving the response, replacing the placeholder entities with the respective PII entities in the response, as taught by Carbune, to the teachings of Eisenstadt and Gray because redacting personally identifiable information sent server side allows for an increase in privacy (e.g. see Carbune paragraph 0022).

As for dependent claim 14, Eisenstadt and Gray teach the medium as described in claim 8; further, claim 14 discloses substantially the same limitations as claim 7. Therefore, it is rejected with the same rationale as claim 7.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

U.S. PGPub 2025/0315629 A1 to De Wynter et al., published 09 October 2025. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g. LLM suggestion sidebar).

U.S. PGPub 2025/0124308 A1 to Ramani et al., published 17 April 2025. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g. a menu for generating a natural language prompt based on topics and keywords).

Grammarly, "GrammarlyGo Is on Its Way", published 09 March 2023 <URL: https://www.youtube.com/watch?v=Pnr8dbT20ZE>. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g. starting from scratch using a separate AI panel to generate text by selecting topic ideas).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER J FIBBI whose telephone number is (571) 270-3358. The examiner can normally be reached Monday - Thursday (8am-6pm).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER J FIBBI/
Primary Examiner, Art Unit 2174
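The placeholder-token redaction flow described in the Carbune citation (rejection of claims 7 and 14) is a replace-then-restore round trip: PII in the outbound prompt is swapped for tokens such as #NAME, and those tokens are swapped back in the model's response locally. A minimal sketch, assuming simple exact-string matching; the function names and sample strings are illustrative assumptions, not the implementation of any cited reference:

```python
def redact(prompt: str, pii: dict[str, str]) -> str:
    """Replace each PII value in the prompt with its placeholder token,
    so the remote model only ever sees the token."""
    for value, token in pii.items():
        prompt = prompt.replace(value, token)
    return prompt

def restore(response: str, pii: dict[str, str]) -> str:
    """Replace placeholder tokens in the response with the original PII,
    performed locally after the response is received."""
    for value, token in pii.items():
        response = response.replace(token, value)
    return response

# Hypothetical contact name mapped to the #NAME-style token from the citation.
pii = {"Alice Smith": "#NAME"}
outbound = redact("Draft an email to Alice Smith.", pii)
inbound = restore("Hi #NAME, here is the draft.", pii)
```

A production implementation would need real PII detection (named-entity recognition rather than exact strings) and unambiguous tokens per entity; the sketch only models the redact/restore symmetry the rejection turns on.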

Prosecution Timeline

Apr 26, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585866
AUTOMATED ENTRY OF EXTRACTED DATA AND VERIFICATION OF ACCURACY OF ENTERED DATA THROUGH A GRAPHICAL USER INTERFACE
2y 5m to grant · Granted Mar 24, 2026

Patent 12561152
METHODS AND SYSTEMS FOR ADAPTIVE CONFIGURATION
2y 5m to grant · Granted Feb 24, 2026

Patent 12535930
INTEROPERABILITY FOR TRANSLATING AND TRAVERSING 3D EXPERIENCES IN AN ACCESSIBILITY ENVIRONMENT
2y 5m to grant · Granted Jan 27, 2026

Patent 12535941
USER INTERFACE FOR MANAGING INPUT TECHNIQUES
2y 5m to grant · Granted Jan 27, 2026

Patent 12519999
Location Based Playback System Control
2y 5m to grant · Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
53%
Grant Probability
90%
With Interview (+37.6%)
4y 3m
Median Time to Grant
Low
PTA Risk
Based on 376 resolved cases by this examiner. Grant probability derived from career allow rate.
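The headline projection figures follow directly from the career statistics shown above. A quick arithmetic check (treating the interview lift as additive to the base rate, which is an assumption about how the tool combines the two numbers):

```python
# Career allow rate: 199 grants out of 376 resolved cases.
granted, resolved = 199, 376
career_allow_rate = granted / resolved  # ~0.529, displayed as 53%

# Reported interview lift: +37.6 percentage points.
interview_lift = 0.376
with_interview = career_allow_rate + interview_lift  # ~0.905, displayed as ~90%
```

The additive combination reproduces both displayed percentages, but note it is only a sanity check on the page's figures, not a statement of the tool's actual model.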
