Prosecution Insights
Last updated: April 19, 2026
Application No. 18/381,956

PERSONALIZED STYLIZING LARGE LANGUAGE MODEL WRITING ASSISTANT

Final Rejection: §102, §103, §112

Filed: Oct 19, 2023
Examiner: SERRAGUARD, SEAN ERIN
Art Unit: 2657
Tech Center: 2600 (Communications)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (above average; 92 granted of 134 resolved; +6.7% vs TC avg)
Interview Lift: +33.6% (strong; allowance rate among resolved cases with an interview vs. without)
Typical Timeline: 3y 2m avg prosecution; 43 applications currently pending
Career History: 177 total applications across all art units

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 134 resolved cases.

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. All objections/rejections not mentioned in this Office Action have been withdrawn by the Examiner.

Status of the Claims

Prior to entry of the amendments and consideration of the arguments, the status of the claims is as follows. Claims 1-20 are pending. Claims 2 and 6-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite. Claims 1, 9, 11-13, 15, and 18-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by non-patent literature to Salemi (Salemi, A., Mysore, S., Bendersky, M., and Zamani, H., "LaMP: When Large Language Models Meet Personalization," arXiv preprint arXiv:2304.11406 (2023); hereinafter Salemi). Claims 2-3, 10, 14, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Salemi as applied to claims 1, 12, and 18 above, and further in view of Dai (WO 2024064249 A1; hereinafter Dai). Claims 4-8 and 17 are indicated as allowable over the cited references.

Response to Amendments

Applicant's amendment filed on 10 November 2025 has been entered. The amendment of claims 1-4, 6-7, 9, 12-14, 16-18, and 20 has been acknowledged and entered. In view of the amendment to claims 14 and 17, the objection to claims 14 and 17 is withdrawn. In view of the amendment to claims 2 and 6-7, the rejection of claims 2 and 6-7 under 35 U.S.C. §112 is maintained, as modified in response to the amendment, for the reasons provided in the action below. In view of the amendment to claims 1-4, 6-7, 9, 12-14, 16-18, and 20, the rejections of claims 1-20 under 35 U.S.C. §102 and §103 are withdrawn. In light of the amended claims, new grounds of rejection under 35 U.S.C. §103 and 35 U.S.C. §112 are provided in the action below.
Response to Arguments

Applicant's arguments regarding the prior art rejections under 35 U.S.C. §102/103 (see pages 7-8 of the Response, received 10 November 2025, to the Non-Final Office Action dated 27 August 2025; hereinafter the Response and the Office Action, respectively) have been fully considered. With respect to the rejections of claims 1, 12, and 18 under 35 U.S.C. §102 as being anticipated by Salemi, applicant asserts that Salemi fails to teach or suggest all limitations of the claims as amended. Applicant's arguments are persuasive. As such, the rejections of claims 1, 12, and 18 under 35 U.S.C. §102 are withdrawn.

Applicant further argues that the rejections of dependent claims 2-11, 13-17, and 19-20 should be withdrawn for at least the same reasons as independent claims 1, 12, and 18. Applicant's arguments in light of the amended claims are persuasive. As such, the rejections of claims 2-11, 13-17, and 19-20 under 35 U.S.C. §102 and 35 U.S.C. §103 are withdrawn. However, upon further consideration, new grounds of rejection under 35 U.S.C. §103 are made in light of combinations of Salemi and Dai. The Applicant has not provided any further statement; the Examiner therefore directs the Applicant to the rationale below.

Claim Objections

Claim 3 is objected to because of the following informality. Regarding claim 3, the phrase "is stylized in the of the user" in line 3 lacks clarity in light of the amendment deleting the word "voice". However, as claim 3 depends from claim 2, which was similarly amended to delete the word "voice" and insert the phrase "writing style", this is understood as a typographical error. As such, the phrase "is stylized in the of the user" in line 3 should read "is stylized in the writing style of the user". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s)), at the time the application was filed, had possession of the claimed invention.

Regarding claim 1, and mutatis mutandis claims 12 and 18, the phrase "generative model" is not supported by the specification as filed. Claim 1 is amended to recite "generative model" at lines 12 and 13. However, there is insufficient specification support for the broader genus "generative model" as applied in place of the species "large language model (LLM)." Applicant cites paragraphs [0002]-[0004], [0026], [0028], [0039], [0043], [0046], [0052], and [0053] of the specification as supporting the amendments as filed.
However, support for the phrase "generative model" was not clearly disclosed in these paragraphs. Further, upon review of the specification generally, the phrase "generative model", or a known equivalent other than LLMs, is not disclosed in the specification. Though applicant does describe the LLM as a generative LLM, and LLMs are well-known generative models, a single species of a genus is not considered adequate support for claiming the entire genus (see MPEP 2163(II)(A)(3)(a)(ii)). Therefore, claims 1, 12, and 18 contain limitations which lack specification support, and the claims are rejected.

Regarding claims 2-11, 13-17, and 19-20: these claims depend from claims 1, 12, and 18 and incorporate all limitations therefrom. Therefore, claims 2-11, 13-17, and 19-20 are rejected for the same reasons as described above with reference to claims 1, 12, and 18. Appropriate correction is required.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2 and 6-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 2, the phrase "a first score" is unclear. Claim 2 recites "a first score" at line 5. Claim 1, from which claim 2 depends, also recites "a first score" at line 6 and "the first score" at line 8.
Further, each of these first scores appears to be part of the same training process as performed at different training stages. As such, it is unclear whether the "first score" in claim 2 refers to the same "first score" as in claim 1 or to a different first score. Therefore, the phrase "a first score" is unclear, and claim 2 is rejected. If these are to be considered the same first score, applicant is advised to amend the claims such that the claim parts reflect a shared antecedent basis. If these are to be considered different claim parts, applicant is advised to amend the claims such that the names of the claim parts are distinct.

Regarding claims 6 and 7, the relationship between the phrases "calibrated third and fourth scores" and "that helps ensure" is unclear. Claim 6, as amended, now recites "calibrated third and fourth scores, determined based on the third and fourth scores, that helps ensure…". Though applicant's previous amendment is noted, said amendment does not address the lack of clarity, and it introduces a further lack of clarity which is discussed here. Claim 6, and claim 7 by dependency, recites "calibrated third and fourth scores" at line 2, which is understood as indicating two separate elements: "calibrated third… scores" and "calibrated… fourth scores". Though applicant has indicated that the "calibrated third and fourth scores" are "determined based on the third and fourth scores," a lack of clarity remains with regard to the relationship between said scores. Specifically, in the limitation reciting operation based on "calibrated third and fourth scores…that helps ensure the third score is predictive…," the intended meaning of "that helps ensure" in this context is unclear. Though originally understood as a descriptive portion of the claim part (i.e., the "calibrated third and fourth scores" would correspond to the word "that" in the phrase "that helps ensure"), as currently amended the meaning of "that" in the context of the claim limitation is unclear.
Is the applicant arguing that the use of "calibrated third and fourth scores," essentially the calibration of the third and fourth scores, is the "that" which "helps ensure" the third score is predictive? The phrase "operates based on…" allows for an operation of the relative entropy loss function using a component or element which could be described as "based on" the calibrated scores, where the operation may also correspond to the word "that." As such, and in the alternative, is the intermediary which establishes the relationship described as "operates based on…" the "that" which "helps ensure" the third score is predictive? It is unclear which portion of the claim corresponds to the recited functional description; thus, the component indicated by the word "that" lacks clarity.

Further, when viewed as a further limitation of the claim rather than as a descriptive portion of a claim part, it is unclear what it means that something "helps ensure the third score is predictive." The specification does not define a threshold for what constitutes helping. The word "ensure" appears to guarantee a result. However, this guarantee is diluted by the use of "helps" as a modifier, which allows for subjective opinions of what qualifies as acting in furtherance of ensuring. "Helps" is a term of degree that implies contribution, of any kind and at any level or amount, to a result, without providing a standard for measuring that contribution. As a result, the component ultimately determined to be described by the word "that" is being distinguished from prior art equivalents solely on the basis of a subjective performance outcome: whether it "helps ensure" or not. Therefore, and for the reasons described above, claims 6 and 7 lack clarity and are rejected.

Further regarding claim 7, the phrase "all scores corresponding to the content previously generated by the user" is unclear.
Claim 7, as amended, recites "the calibrated third scores include all scores corresponding to the content previously generated by the user" at lines 1-2 (emphasis added). Though applicant's previous amendment is noted, said amendment does not address the lack of clarity. As previously indicated, the scores to which "all scores…" refers remain unclear. The broadest reasonable interpretation of the word "score" is a numerical or quantitative measurement or value assigned to represent a specific attribute, quality, or result. The application does not establish a group of "scores" such that one skilled in the art would know what scores constitute the original group of "scores," and thus which subset of those scores "correspond[s] to the content previously generated by the user." Claim 7 depends from claims 1-6 and incorporates "first score", "second score", "third score", "fourth score", "calibrated third scores", and "calibrated… fourth scores" therefrom. Any, all, or none of these scores may be among the set encompassed by "all scores." Further, "all scores" could well include scores which have not yet been defined or which, in fact, do not yet exist. Further still, the application does not clarify what it means to "correspond to the content…" such that the limitation has a clear meaning; the phrase "corresponding to," as currently used, is a loose and open-ended connection between two parts. As a result, the limitation of scores which "include all scores corresponding to the content previously generated by the user" requires the inclusion of the entirety of a group (i.e., "all scores") whose constituent members are not known and/or cannot be known. Therefore, claim 7 lacks clarity and is rejected. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 9-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Salemi as applied to claims 1, 12, and 18 above, and further in view of Dai.

Regarding claim 1, Salemi discloses A personally-stylized content generation method comprising (systems and methods described with reference to “a retrieval augmentation approach that retrieves personalized items from user profiles to construct personalized prompts for large language models”; Salemi, Abstract):

receiving a first request for first content to be stylized in a style of written prose previously produced by a user (the system receives “input xi” from “user u” for generating text “while aligning with the user’s writing style” based on “user u’s profile”, where said profile consists of a “collection of data points pertaining to the user” and where “the profile of each user encompasses all the papers they have authored”; Salemi, pg. 2, col. 2, para. 2; pg. 3, col. 2, para. 2; pg. 5, col. 2, para. 3; Figure
1);

applying a previously trained retriever model to the first request to obtain second content previously produced by the user resulting in obtained content (“To achieve personalization for a given sample (xi, yi) associated with user u” the system uses “(1) a query generation function φq that transforms the input xi into a query q for retrieving from the user u’s profile” and “(2) a retrieval model R(q, Pu, k) that accepts a query q, a user profile Pu and retrieves k most pertinent entries from the user profile”; Salemi, pg. 2, col. 2, para. 2; Figure 1);

populating a prompt with the obtained content and the first request resulting in an augmented prompt (the system further includes “a prompt construction function φp that assembles a prompt for user u based on input xi and the retrieved entries.”; Salemi, pg. 2, col. 2, para. 2; Figure 1);

providing the augmented prompt to the generative model (as shown in Figure 1, and described throughout, the modified prompt based on input xi and the retrieved entries is received by a large language model, with examples including FlanT5-XXL and GPT-3; Salemi, pg. 8, col. 1, para. 2; Figure 1);

receiving personally-stylized content from the generative model (“for a given textual input x, the goal is to develop a model M that generates personalized output y for user u. One can model this task as arg max_y p(y|x, u). For each user u, the model M can take advantage of Pu = {(x_u^1, y_u^1), (x_u^2, y_u^2), …, (x_u^m_u, y_u^m_u)}, where each (x_u^i, y_u^i) denotes a pair of input and personalized output for user u in the same format as x and y.”; Salemi, pg. 2, col. 1, para. 2), the personally-stylized content including elements of the style of the written prose of the user (discloses that the output is “personalized... for user u in the same format as x and y” and aligns with the user’s writing style; Salemi, pg. 2, col. 1, para. 2; pg. 5, col. 2, para. 3); and

providing the personally-stylized content to the user.
(As the “personalized output” is intended and output for “user u”, the personalized output is also understood as being provided to user u; Salemi, pg. 2, col. 1, para. 2.)

However, Salemi fails to expressly recite the previously trained retriever model trained based on a first score generated by a language model (LM) based on (i) target content that is stylized in the writing style of the user, (ii) content previously generated by the user, and (iii) a second request for the target content, the first score indicating how much the previously generated content will help a generative model in generating the target content.

Dai teaches systems and methods for “a prompt-based query generation, filtering and retriever training.” (Dai, ¶ [004]).

Regarding claim 1, Dai teaches the previously trained retriever model trained based on a first score (“In some embodiments, a dual encoder 210 retrieval architecture may be used, along with a pretrain-finetune recipe,” i.e., a pretraining process for the dual encoder model, which is pretrained based on a ranking score; Dai, ¶ [075]-[077], [057]) [the first score] generated by a language model (LM) based on (i) target content that is stylized in the writing style of the user, (ii) content previously generated by the user, and (iii) a second request for the target content (“pretrain 245 may involve pretraining the retriever (e.g., dual encoder 210)”, which may be “trained on the same synthetic data (e.g., in synthetic dataset 220) generated from the prompt based QGen,” which also includes generating a ranking score using a generated synthetic training set {trained based on a first score generated by a language model}, the generated “synthetic training dataset comprising a plurality of query-document pairs, wherein each query-document pair comprises a synthetically generated query and a document from the corpus of documents” where the document of the corpus of documents is associated with the retrieval task, where
the retrieval task in the context of Salemi is generating personalized content that “aligns with the user’s writing style” {target content that is stylized in the writing style of the user}; Dai, ¶ [047]-[048], [057], [076]-[077], [168]-[169]), the first score indicating how much the previously generated content will help a generative model in generating the target content (“a projection layer may be applied on the output encodings of the first token and the output may be used as the ranking score {the first score}” to generate “a ranking list consisting of a positive (q, d+) pair and sampled negative (q, d-) pairs (e.g., 31 sampled negative pairs),” where the determined relevance of the “sampled negative (q, d-) pairs” {the content previously generated by the user} as compared to the “positive (q, d+) pair” {target content that is stylized in the writing style of the user... and a second request for the target content} is used to “directly optimize for ranking performance.” As such, and in the context of Salemi, the first score indicates how much the sampled negative pair {previously generated content} will help the LLM in generating the sampled positive pair {the target content}; Dai, ¶ [049], [057], [076]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the personalized text generation systems of Salemi to incorporate the teachings of Dai to include the previously trained retriever model trained based on a first score generated by a language model (LM) based on (i) target content that is stylized in the writing style of the user, (ii) content previously generated by the user, and (iii) a second request for the target content, the first score indicating how much the previously generated content will help a generative model in generating the target content.
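As context for the mapping above, the retriever-training objective the action attributes to Dai (a ranking score per (query, document) pair, optimized with softmax cross entropy over one positive (q, d+) pair and sampled negative (q, d-) pairs) can be sketched as follows. This is a minimal illustrative sketch with invented scalar scores: in Dai the scores would come from a dual-encoder projection layer, and the function name here is hypothetical.

```python
import math

def softmax_cross_entropy(positive_score: float, negative_scores: list[float]) -> float:
    """Softmax cross entropy over a ranking list [positive] + negatives:
    loss = -log softmax(positive_score)."""
    scores = [positive_score] + negative_scores
    m = max(scores)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - positive_score

# The loss shrinks as the positive (q, d+) score rises above the negatives,
# which is how such training "directly optimize[s] for ranking performance".
loss_untrained = softmax_cross_entropy(0.1, [0.0, 0.0, 0.0])
loss_trained = softmax_cross_entropy(5.0, [0.0, 0.0, 0.0])
```

A gradient step that lowers this loss pushes the retriever to score the positive document above the sampled negatives, which is the behavior the rejection relies on for the "first score" limitation.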
Dai discloses “prompt-based query generation, filtering and retriever training” including generating “task-specific dual encoders by generating large quantities of task-specific synthetic data”, where the generated retrievers are “able to achieve significant improvement in retrieval (e.g., without a re-ranker) by using just two to eight examples for prompts”, which “significantly reduces the need for heavily engineered retrieval architectures,” as recognized by Dai. (Dai, ¶ [004]).

Regarding claim 2, the rejection of claim 1 is incorporated. Salemi discloses all of the elements of the current invention as stated above. However, Salemi fails to expressly recite further comprising: training the retriever model by: generating, by a language model (LM) and based on target content that is stylized in the voice of the user, content previously generated by the user, and a second request for the target content, a first score indicating how much the previously generated content will help the LLM in generating the target content; and altering the retriever model based on the first score.

The relevance of Dai is described above with relation to claim 1. Regarding claim 2, Dai teaches further comprising: further training the previously trained retriever model (The method includes “training...
a document retrieval model to take an input query associated with the retrieval task and predict an output document retrieved from the corpus of documents.”; Dai, ¶ [169]) by: generating, by a language model (LM) and based on target content that is stylized in the writing style of the user, content previously generated by the user, and a second request for the target content, a first score (the training includes generating, by a language model, a ranking score using a generated synthetic training dataset, the generated “synthetic training dataset comprising a plurality of query-document pairs, wherein each query-document pair comprises a synthetically generated query and a document from the corpus of documents” where the document of the corpus of documents is associated with the retrieval task, where the retrieval task in the context of the profile of Salemi is generating personalized content that “aligns with the user's writing style” {target content that is stylized in the writing style of the user}; Dai, ¶ [047]-[048], [057], [168]-[169]), [the first score] indicating how much the previously generated content will help the generative model in generating the target content (“a projection layer may be applied on the output encodings of the first token and the output may be used as the ranking score {the first score}” to generate “a ranking list consisting of a positive (q, d+) pair and sampled negative (q, d-) pairs (e.g., 31 sampled negative pairs),” where the determined relevance of the “sampled negative (q, d-) pairs” {the content previously generated by the user} as compared to the “positive (q, d+) pair” {target content that is stylized in the writing style of the user...
and a second request for the target content} is used to “directly optimize for ranking performance.” As such, and in the context of Salemi, the first score indicates how much the sampled negative pair {previously generated content} will help the LLM in generating the sampled positive pair {the target content}; Dai, ¶ [049], [057]); and altering the previously trained retriever model based on the first score (“a projection layer may be applied on the output encodings of the first token and the output may be used as the ranking score. Also, for example, the model may be optimized using softmax cross entropy loss over a ranking list consisting of a positive (q, d+) pair and sampled negative (q, d-) pairs (e.g., 31 sampled negative pairs).”; Dai, ¶ [057]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the personalized text generation systems of Salemi to incorporate the teachings of Dai to include further comprising: training the retriever model by: generating, by a language model (LM) and based on target content that is stylized in the voice of the user, content previously generated by the user, and a second request for the target content, a first score indicating how much the previously generated content will help the LLM in generating the target content; and altering the retriever model based on the first score. Dai discloses “prompt-based query generation, filtering and retriever training” including generating “task-specific dual encoders by generating large quantities of task-specific synthetic data”, where the generated retrievers are “able to achieve significant improvement in retrieval (e.g., without a re-ranker) by using just two to eight examples for prompts”, which “significantly reduces the need for heavily engineered retrieval architectures,” as recognized by Dai. (Dai, ¶ [004]).

Regarding claim 3, the rejection of claim 2 is incorporated.
Salemi discloses all of the elements of the current invention as stated above. However, Salemi fails to expressly recite wherein training the retriever model further includes: generating, by the LM and based on the target content that is stylized in the voice of the user and the second request for the target content, a second score indicating how well the LLM can generate the target content based on just the second request; and wherein altering the retriever model includes altering the retriever model based on the first score and the second score.

The relevance of Dai is described above with relation to claim 1. Regarding claim 3, Dai teaches wherein training the retriever model further includes: generating, by the LM and based on the target content that is stylized in the [writing style] of the user and the second request for the target content, a second score (“Some embodiments involve determining… for each query-document pair, a retrieval score” where “trained machine learning model(s) 1232 can receive” input data 1230 and training data 1210 “and generate and output one or more corresponding inferences and/or prediction(s) 1250 about” the data, which “can include output document, output query document pairs, numerical values (e.g., retrieval scores {a second score})”; Dai, ¶ [135], [137]-[138], [171]), [the second score] indicating how well the LLM can generate the target content based on just the second request (“the retrieval score is indicative of a relevance of the document of the query-document pair to the query of the query-document pair,” where relevance of the document to the query is understood as how well the LLM can generate the document {target content} based on just the query {the second request}; Dai, ¶ [137], [171]); and wherein altering the retriever model includes altering the retriever model based on the first score and the second score (“trained machine learning model(s) 1232 can use output inference(s) and/or prediction(s) 1250 as input
feedback 460” where the trained machine learning models can include a “large language model and/or a document retrieval model”; Dai, ¶ [137]-[138]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the personalized text generation systems of Salemi to incorporate the teachings of Dai to include wherein training the retriever model further includes: generating, by the LM and based on the target content that is stylized in the voice of the user and the second request for the target content, a second score indicating how well the LLM can generate the target content based on just the second request; and wherein altering the retriever model includes altering the retriever model based on the first score and the second score. Dai discloses “prompt-based query generation, filtering and retriever training” including generating “task-specific dual encoders by generating large quantities of task-specific synthetic data”, where the generated retrievers are “able to achieve significant improvement in retrieval (e.g., without a re-ranker) by using just two to eight examples for prompts”, which “significantly reduces the need for heavily engineered retrieval architectures,” as recognized by Dai. (Dai, ¶ [004]).

Regarding claim 9, the rejection of claim 1 is incorporated. Salemi discloses all of the elements of the current invention as stated above. Salemi further discloses wherein the generative model is a generative LLM (describes using “large language models (LLMs), such as GPT4”; Salemi, pg. 1, col. 1, para. 2).

Regarding claim 10, the rejection of claim 1 is incorporated. Salemi discloses all of the elements of the current invention as stated above. However, Salemi fails to expressly recite wherein the retriever model is an encoder model. The relevance of Dai is described above with relation to claim 1.
Regarding claim 10, Dai teaches wherein the retriever model is an encoder model (“pretrain 245 may involve pretraining the retriever (e.g., dual encoder 210)”; Dai, ¶ [076]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the personalized text generation systems of Salemi to incorporate the teachings of Dai to include wherein the retriever model is an encoder model. Dai discloses “prompt-based query generation, filtering and retriever training” including generating “task-specific dual encoders by generating large quantities of task-specific synthetic data”, where the generated retrievers are “able to achieve significant improvement in retrieval (e.g., without a re-ranker) by using just two to eight examples for prompts”, which “significantly reduces the need for heavily engineered retrieval architectures,” as recognized by Dai. (Dai, ¶ [004]).

Regarding claim 11, the rejection of claim 1 is incorporated. Salemi discloses all of the elements of the current invention as stated above. Salemi further discloses wherein the personally-stylized content includes emulation of a writing style or communication style in the second content (discloses that the output is “personalized... for user u in the same format as x and y” and aligns with the user’s writing style based on “a collection of user records associated with each sample for all tasks”; Salemi, pg. 2, col. 1, para. 2; pg. 5, col. 2, para. 3).
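The encoder-model retrieval discussed for claim 10 reduces to scoring each candidate by embedding similarity: a query and each document are embedded separately, and a candidate's relevance score is the dot product of the two embeddings (the scoring the action describes for Contriever). A hedged sketch with invented 3-dimensional toy embeddings standing in for real encoder outputs:

```python
def dot(u: list[float], v: list[float]) -> float:
    """Dot product of two equal-length embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def top_k(query_emb: list[float], doc_embs: dict[str, list[float]], k: int) -> list[str]:
    """Score each document by dot product with the query and keep the k highest."""
    ranked = sorted(doc_embs, key=lambda d: dot(query_emb, doc_embs[d]), reverse=True)
    return ranked[:k]

# Toy embeddings standing in for a dual encoder's outputs.
doc_embs = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.8, 0.1],
    "doc_c": [0.0, 0.2, 0.9],
}
best = top_k([1.0, 0.0, 0.1], doc_embs, k=2)  # doc_a scores highest
```

Because queries and documents are encoded independently, document embeddings can be precomputed over the user's whole profile, which is what makes dual-encoder retrieval practical at scale.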
Regarding claim 12, Salemi discloses A personally-stylized content generation system comprising (Systems and methods described with reference to “a retrieval augmentation approach that retrieves personalized items from user profiles to construct personalized prompts for large language models”; Salemi, ¶ Abstract): a database storing content items previously generated by users (The system includes a user profile, where said profile consists of a “collection of data points pertaining to the user” and where “the profile of each user encompasses all the papers they have authored.”; Salemi, ¶ pg. 3, col. 2, para. 2, pg. 5, col. 2, para. 3; Figure 1); a pre-trained retriever model (Describes a model for generating text “while aligning with the user’s writing style” based on “user u’s profile”; Salemi, ¶ pg. 2, col. 2, para. 2; Figure 1) configured to: receive, from a user, a query for personally-stylized content (The system receives “input xi” from “user u” for generating text “while aligning with the user’s writing style” based on “user u’s profile”; Salemi, ¶ pg. 2, col. 2, para. 2; Figure 1); score each item of content generated by the user in the database (The system describes comparing Best Match 25 (BM25) and Contriever as the retrieval models, both of which provide a score for the candidate documents (Contriever generates a relevance score for each of the documents based on the dot product between the query embedding and each document embedding; BM25 ranks documents based on TF-IDF); Salemi, ¶ pg. 8, col. 1, para. 1); and provide a specified number of content items that were previously generated by the user and associated with highest scores (Discloses retrieving k items from a user profile, based on “the retrieval model (BM25 vs. Contriever)”; Salemi, ¶ pg. 8, col. 1, para. 1); a prompt augmenter configured to: receive the specified number of content items (The system receives the retrieved entries/items which are integrated into the assembled prompt by a language model “with the inputs corresponding to individual tasks” and the system then “assess[es] their performance based on the generated outputs.”; Salemi, ¶ pg. 8, col. 1-2, para. 2); augment a prompt to include content of the specified number of content items resulting in an augmented prompt (The system further includes “a prompt construction function φp that assembles a prompt for user u based on input xi and the retrieved entries.”; Salemi, ¶ pg. 2, col. 2, para. 2; Figure 1); and provide the augmented prompt to a generative model (As shown in Figure 1, and described throughout, the modified prompt based on input xi and the retrieved entries, is received by a large language model, with examples including FlanT5-XXL and GPT-3; Salemi, ¶ pg. 8, col. 1, para. 2; Figure 1); and an application configured to provide, to the user, personally-stylized content from the generative model responsive to the augmented prompt (“for a given textual input x, the goal is to develop a model M that generates personalized output y for user u. One can model this task as arg maxy p(y|x, u). For each user u, the model M can take advantage of Pu = {(xu1, yu1), (xu2, yu2), ..., (xumu, yumu)} where each (xui, yui) denotes a pair of input and personalized output for user u in the same format as x and y.” As the “personalized output” is intended and output for “user u”, the personalized output is also understood as being provided to user u, and where the output is produced by an application; Salemi, ¶ pg. 2, col. 1, para. 2).
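The retrieve-then-augment pipeline mapped above (query the user profile, keep the k highest-scoring entries, assemble a prompt from the request plus the retrieved entries, hand it to the generative model) can be sketched in a few lines. The scoring function is a shared-token-count stand-in for BM25/Contriever, and the prompt template and function names are illustrative, not taken from either reference:

```python
# Hedged sketch of retrieval-augmented prompt construction:
# top-k retrieval from a user profile, then prompt assembly.

def retrieve_top_k(query: str, profile: list[str], k: int) -> list[str]:
    """Score profile entries by shared-token count (a toy BM25 stand-in)
    and return the k highest-scoring entries."""
    q = set(query.lower().split())
    scored = sorted(profile,
                    key=lambda e: len(q & set(e.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(request: str, entries: list[str]) -> str:
    """Assemble an augmented prompt from the request and retrieved entries."""
    examples = "\n".join(f"- {e}" for e in entries)
    return f"Previous writing by this user:\n{examples}\n\nTask: {request}"

profile = ["thanks for the update talk soon",
           "per my last email see attached",
           "grocery list milk eggs"]
prompt = build_prompt("write a short reply email",
                      retrieve_top_k("reply email update", profile, k=2))
```

The augmented prompt carries only the entries that overlap the query, so unrelated profile content (the grocery list here) never reaches the generative model.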
However, Salemi fails to expressly recite trained based on a first score generated by a language model (LM) based on (i) target content that is stylized in the writing style of the user, (ii) content previously generated by the user, and (iii) a second request for the target content, the first score indicating how much the previously generated content will help a generative model in generating the target content. The relevance of Dai is described above with relation to claim 1.

Regarding claim 12, Dai teaches trained based on a first score (“In some embodiments, a dual encoder 210 retrieval architecture may be used, along with a pretrain-finetune recipe,” which is a pretraining process {...trained} for the dual encoder model, which is pretrained based on a ranking score; Dai, ¶ [075]-[077], [057]) [the first score] generated by a language model (LM) based on (i) target content that is stylized in the writing style of the user, (ii) content previously generated by the user, and (iii) a second request for the target content (“pretrain 245 may involve pretraining the retriever (e.g., dual encoder 210)” which may be “trained on the same synthetic data (e.g., in synthetic dataset 220) generated from the prompt based QGen,” which also includes generating a ranking score using a generated synthetic training set {trained based on a first score generated by a language model}, the generated “synthetic training dataset comprising a plurality of query-document pairs, wherein each query-document pair comprises a synthetically generated query and a document from the corpus of documents” where the document of the corpus of documents is associated with the retrieval task, where the retrieval task in the context of Salemi is generating personalized content that “aligns with the user’s writing style” {target content that is stylized in the voice of the user}; Dai, ¶ [047]-[048], [057], [076]-[077], [168]-[169]), the first score indicating how much the previously generated content will help a generative model in generating the target content (“a projection layer may be applied on the output encodings of the first token and the output may be used as the ranking score {the first score}” to generate “a ranking list consisting of a positive (q, d+) pair and sampled negative (q, d-) pairs (e.g., 31 sampled negative pairs),” where the determined relevance of the “sampled negative (q, d-) pairs” {the content previously generated by the user} as compared to the “positive (q, d+) pair” {target content that is stylized in the voice of the user... and a second request for the target content} is used to “directly optimize for ranking performance.” As such, and in the context of Salemi, the first score indicates how much the sampled negative pair {previously generated content} will help the LLM in generating the sampled positive pair {the target content}; Dai, ¶ [049], [057], [076]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the personalized text generation systems of Salemi to incorporate the teachings of Dai to include trained based on a first score generated by a language model (LM) based on (i) target content that is stylized in the writing style of the user, (ii) content previously generated by the user, and (iii) a second request for the target content, the first score indicating how much the previously generated content will help a generative model in generating the target content. Dai discloses “prompt-based query generation, filtering and retriever training” including generating “task-specific dual encoders by generating large quantities of task-specific synthetic data” where the generated retrievers are “able to achieve significant improvement in retrieval (e.g., without a re-ranker) by using just two to eight examples for prompts” which “significantly reduces the need for heavily engineered retrieval architectures,” as recognized by Dai.
(Dai, ¶ [004]). Regarding claim 13, Salemi discloses wherein the generative model is a generative LLM (Describes using “large language models (LLMs), such as GPT4” which is a generative LLM; Salemi, ¶ pg. 1, col. 1, para. 2). Regarding claim 14, the rejection of claim 12 is incorporated. Salemi discloses all of the elements of the current invention as stated above. However, Salemi fails to expressly recite wherein the retriever model is an encoder model. The relevance of Dai is described above with relation to claim 1. Regarding claim 14, Dai teaches wherein the pre-trained retriever model is an encoder model (“pretrain 245 may involve pretraining the retriever (e.g., dual encoder 210)”; Dai, ¶ [076]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the personalized text generation systems of Salemi to incorporate the teachings of Dai to include wherein the retriever model is an encoder model. Dai discloses “prompt-based query generation, filtering and retriever training” including generating “task-specific dual encoders by generating large quantities of task-specific synthetic data” where the generated retrievers are “able to achieve significant improvement in retrieval (e.g., without a re-ranker) by using just two to eight examples for prompts” which “significantly reduces the need for heavily engineered retrieval architectures,” as recognized by Dai. (Dai, ¶ [004]). Regarding claim 15, the rejection of claim 12 is incorporated. Salemi discloses all of the elements of the current invention as stated above. Salemi further discloses wherein the personally-stylized content includes emulation of a writing style or communication style in the second content (Discloses that the output is “personalized... 
for user u in the same format as x and y” and aligns with the user’s writing style based on “a collection of user records associated with each sample for all tasks”; Salemi, ¶ pg. 2, col. 1, para. 2; pg. 5, col. 2, para. 3). Regarding claim 16, the rejection of claim 12 is incorporated. Salemi discloses all of the elements of the current invention as stated above. However, Salemi fails to expressly recite further comprising: a language model configured to: generate, based on target content that is stylized in a voice of the user, content previously generated by the user, and a second request for the target content, a first score indicating how much the previously generated content will help the LLM in generating the target content; and generate, based on the target content that is stylized in the voice of the user and the second request for the target content, a second score indicating how well the LLM can generate the target content based on just the second request. The relevance of Dai is described above with relation to claim 1. 
Regarding claim 16, Dai teaches further comprising: a language model configured to: generate, based on target content that is stylized in a voice of the user, content previously generated by the user, and a second request for the target content (The training includes generating a ranking score using a generated synthetic training set, the generated “synthetic training dataset comprising a plurality of query-document pairs, wherein each query-document pair comprises a synthetically generated query and a document from the corpus of documents” where the document of the corpus of documents is associated with the retrieval task, where the retrieval task in the context of Salemi is generating personalized content that “aligns with the user's writing style” {target content that is stylized in the voice of the user}; Dai, ¶ [047]-[048], [057], [168]-[169]), a first score indicating how much the previously generated content will help the generative model in generating the target content (“a projection layer may be applied on the output encodings of the first token and the output may be used as the ranking score {the first score}” to generate “a ranking list consisting of a positive (q, d+) pair and sampled negative (q, d-) pairs (e.g., 31 sampled negative pairs),” where the determined relevance of the “sampled negative (q, d-) pairs” {the content previously generated by the user} as compared to the “positive (q, d+) pair” {target content that is stylized in the voice of the user... 
and a second request for the target content} is used to “directly optimize for ranking performance.” As such, and in the context of Salemi, the first score indicates how much the sampled negative pair {previously generated content} will help the LLM in generating the sampled positive pair {the target content}; Dai, ¶ [049], [057]); and generate, based on the target content that is stylized in the voice of the user and the second request for the target content, a second score (“Some embodiments involve determining… for each query-document pair, a retrieval score” where “trained machine learning model(s) 1232 can receive” input data 1230 and training data 1210 “and generate and output one or more corresponding inferences and/or prediction(s) 1250 about” the data, which “can include output document, output query document pairs, numerical values (e.g., retrieval scores)” where, as applied to the document profile of Salemi as explained in the rejection of claim 12, the document {target content} is stylized in the voice of the user; Dai, ¶ [135], [137]-[138], [171]) indicating how well the generative model can generate the target content based on just the second request (“the retrieval score is indicative of a relevance of the document of the query-document pair to the query of the query-document pair,” where relevance of the document to the query is understood as how well the LLM can generate the target content based on just the query {the second request}; Dai, ¶ [137], [171]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the personalized text generation systems of Salemi to incorporate the teachings of Dai to include further comprising: a language model configured to: generate, based on target content that is stylized in a voice of the user, content previously generated by the user, and a second request for the target content, a first score indicating how much the previously generated content will help the LLM in generating the target content; and generate, based on the target content that is stylized in the voice of the user and the second request for the target content, a second score indicating how well the LLM can generate the target content based on just the second request. Dai discloses “prompt-based query generation, filtering and retriever training” including generating “task-specific dual encoders by generating large quantities of task-specific synthetic data” where the generated retrievers are “able to achieve significant improvement in retrieval (e.g., without a re-ranker) by using just two to eight examples for prompts” which “significantly reduces the need for heavily engineered retrieval architectures,” as recognized by Dai. (Dai, ¶ [004]). 
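The ranking objective Dai describes, scoring one positive (q, d+) pair against sampled negative (q, d-) pairs to "directly optimize for ranking performance," is commonly implemented as softmax cross-entropy over the candidate scores with the positive as the target. A minimal sketch of that standard contrastive formulation (the function name is illustrative; Dai's actual loss may differ in detail):

```python
# Illustrative contrastive ranking loss: one positive scored against
# sampled negatives, loss = negative log-probability of the positive
# under a softmax over all candidate scores.

import math

def ranking_loss(pos_score: float, neg_scores: list[float]) -> float:
    """Softmax cross-entropy with the positive pair as the target."""
    scores = [pos_score] + neg_scores
    m = max(scores)                         # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    return -math.log(exps[0] / sum(exps))

# A well-separated positive yields a near-zero loss; a positive tied
# with n-1 negatives yields log(n).
low = ranking_loss(10.0, [0.0, 0.0, 0.0])
tied = ranking_loss(1.0, [1.0, 1.0, 1.0])
```

Minimizing this loss pushes the retriever's score for the positive pair above the sampled negatives, which is the sense in which the training "directly optimizes" ranking rather than an unrelated reconstruction objective.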
Regarding claim 18, Salemi discloses A non-transitory machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations for generating personally-stylized content, the operations comprising (Systems and methods described with reference to “a retrieval augmentation approach that retrieves personalized items from user profiles to construct personalized prompts for large language models”; Salemi, ¶ Abstract): receiving a first request for personally-stylized content from a user (The system receives “input xi” from “user u” for generating text “while aligning with the user’s writing style” based on “user u’s profile” where said profile consists of a “collection of data points pertaining to the user” and where “the profile of each user encompasses all the papers they have authored.”; Salemi, ¶ pg. 2, col. 2, para. 2; pg. 3, col. 2, para. 2, pg. 5, col. 2, para. 3; Figure 1); providing a retriever model with the first request (“To achieve personalization for a given sample (xi, yi) associated with user u” the system uses “(1) a query generation function φq that transforms the input xi into a query q for retrieving from the user u’s profile” and “(2) a retrieval model R(q, Pu, k) that accepts a query q”; Salemi, ¶ pg. 2, col. 2, para. 2; Figure 1); receiving, from the retriever model and responsive to the first request, content previously generated by the user resulting in obtained content (The retrieval model “accepts a query q, a user profile Pu and retrieves k most pertinent entries from the user profile” which are received/used by the prompt construction function; Salemi, ¶ pg. 2, col. 2, para. 2; Figure 1); populating a prompt with the obtained content resulting in an augmented prompt (The system further includes “a prompt construction function φp that assembles a prompt for user u based on input xi and the retrieved entries.”; Salemi, ¶ pg. 2, col. 2, para. 2; Figure 1); providing the augmented prompt to a generative model (As shown in Figure 1, and described throughout, the modified prompt based on input xi and the retrieved entries, is received by a large language model, with examples including FlanT5-XXL and GPT-3; Salemi, ¶ pg. 8, col. 1, para. 2; Figure 1); receiving the personally-stylized content from the generative model (“for a given textual input x, the goal is to develop a model M that generates personalized output y for user u. One can model this task as arg maxy p(y|x, u). For each user u, the model M can take advantage of Pu = {(xu1, yu1), (xu2, yu2), ..., (xumu, yumu)} where each (xui, yui) denotes a pair of input and personalized output for user u in the same format as x and y” where the output is “personalized... for user u in the same format as x and y” and aligns with the user’s writing style; Salemi, ¶ pg. 2, col. 1, para. 2; pg. 5, col. 2, para. 3); and providing the personally-stylized content to the user (As the “personalized output” is intended and output for “user u”, the personalized output is also understood as being provided to user u; Salemi, ¶ pg. 2, col. 1, para. 2). However, Salemi fails to expressly recite the retriever model trained based on a first score generated by a language model (LM) based on (i) target content that is stylized in the writing style of the user, (ii) content previously generated by the user, and (iii) a second request for the target content, the first score indicating how much the previously generated content will help a generative model in generating the target content. The relevance of Dai is described above with relation to claim 1.
Regarding claim 18, Dai teaches the retriever model trained based on a first score (“In some embodiments, a dual encoder 210 retrieval architecture may be used, along with a pretrain-finetune recipe,” which is a pretraining process {...trained} for the dual encoder model, which is pretrained based on a ranking score; Dai, ¶ [075]-[077], [057]) [the first score] generated by a language model (LM) based on (i) target content that is stylized in the writing style of the user, (ii) content previously generated by the user, and (iii) a second request for the target content (“pretrain 245 may involve pretraining the retriever (e.g., dual encoder 210)” which may be “trained on the same synthetic data (e.g., in synthetic dataset 220) generated from the prompt based QGen,” which also includes generating a ranking score using a generated synthetic training set {trained based on a first score generated by a language model}, the generated “synthetic training dataset comprising a plurality of query-document pairs, wherein each query-document pair comprises a synthetically generated query and a document from the corpus of documents” where the document of the corpus of documents is associated with the retrieval task, where the retrieval task in the context of Salemi is generating personalized content that “aligns with the user’s writing style” {target content that is stylized in the voice of the user}; Dai, ¶ [047]-[048], [057], [076]-[077], [168]-[169]), the first score indicating how much the previously generated content will help a generative model in generating the target content (“a projection layer may be applied on the output encodings of the first token and the output may be used as the ranking score {the first score}” to generate “a ranking list consisting of a positive (q, d+) pair and sampled negative (q, d-) pairs (e.g., 31 sampled negative pairs),” where the determined relevance of the “sampled negative (q, d-) pairs” {the content previously generated by the user} as compared to the “positive (q, d+) pair” {target content that is stylized in the voice of the user... and a second request for the target content} is used to “directly optimize for ranking performance.” As such, and in the context of Salemi, the first score indicates how much the sampled negative pair {previously generated content} will help the LLM in generating the sampled positive pair {the target content}; Dai, ¶ [049], [057], [076]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the personalized text generation systems of Salemi to incorporate the teachings of Dai to include the retriever model trained based on a first score generated by a language model (LM) based on (i) target content that is stylized in the writing style of the user, (ii) content previously generated by the user, and (iii) a second request for the target content, the first score indicating how much the previously generated content will help a generative model in generating the target content. Dai discloses “prompt-based query generation, filtering and retriever training” including generating “task-specific dual encoders by generating large quantities of task-specific synthetic data” where the generated retrievers are “able to achieve significant improvement in retrieval (e.g., without a re-ranker) by using just two to eight examples for prompts” which “significantly reduces the need for heavily engineered retrieval architectures,” as recognized by Dai. (Dai, ¶ [004]).

Regarding claim 19, Salemi discloses wherein the personally-stylized content includes emulation of a writing or communication style in the previously generated content (Discloses that the output is “personalized... for user u in the same format as x and y” and aligns with the user’s writing style based on “a collection of user records associated with each sample for all tasks”; Salemi, ¶ pg. 2, col. 1, para. 2; pg. 5, col. 2, para. 3).
Regarding claim 20, the rejection of claim 18 is incorporated. Salemi discloses all of the elements of the current invention as stated above. However, Salemi fails to expressly recite wherein the retriever model is trained to obtain previously generated content that is most likely to help the LLM emulate the previously generated content. The relevance of Dai is described above with relation to claim 1. Regarding claim 20, Dai teaches wherein the retriever model is trained to obtain previously generated content that is most likely to help the generative model emulate the previously generated content (The method includes “training... a document retrieval model to take an input query associated with the retrieval task and predict an output document retrieved from the corpus of documents” based on anticipated relevance for the indicated task.; Dai, ¶ [169], [049]-[050]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the personalized text generation systems of Salemi to incorporate the teachings of Dai to include wherein the retriever model is trained to obtain previously generated content that is most likely to help the LLM emulate the previously generated content. Dai discloses “prompt-based query generation, filtering and retriever training” including generating “task-specific dual encoders by generating large quantities of task-specific synthetic data” where the generated retrievers are “able to achieve significant improvement in retrieval (e.g., without a re-ranker) by using just two to eight examples for prompts” which “significantly reduces the need for heavily engineered retrieval architectures,” as recognized by Dai. (Dai, ¶ [004]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Maschmeyer et al. (U.S. Pat. App. Pub. No. 
2024/0256792) discloses automatically prompting a LLM to generate a personalized text, such as a personalized textual description, in which portions of the text are customized based on user attributes. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sean E. Serraguard whose telephone number is (313)446-6627. The examiner can normally be reached 07:00-17:00 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel C. Washburn can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Sean E Serraguard/Patent Examiner, Art Unit 2657

Prosecution Timeline

Oct 19, 2023
Application Filed
Aug 22, 2025
Non-Final Rejection — §102, §103, §112
Nov 06, 2025
Applicant Interview (Telephonic)
Nov 06, 2025
Examiner Interview Summary
Nov 07, 2025
Response Filed
Feb 17, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603095
Stereo Audio Signal Delay Estimation Method and Apparatus
2y 5m to grant Granted Apr 14, 2026
Patent 12598250
SYSTEMS AND METHODS FOR COHERENT AND TIERED VOICE ENROLLMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12597429
PACKET LOSS CONCEALMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12512093
Sensor-Processing Systems Including Neuromorphic Processing Modules and Methods Thereof
2y 5m to grant Granted Dec 30, 2025
Patent 12505835
HOME APPLIANCE AND SERVER
2y 5m to grant Granted Dec 23, 2025
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
99%
With Interview (+33.6%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
