Prosecution Insights
Last updated: April 19, 2026
Application No. 18/768,205

INCLUSIVITY LANGUAGE CHECKING

Non-Final OA (§101, §103)
Filed: Jul 10, 2024
Examiner: FOSTER JR., MICHAEL ALAN
Art Unit: 2654
Tech Center: 2600 — Communications
Assignee: DELL PRODUCTS, L.P.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, among resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 3 across all art units (3 currently pending)

Statute-Specific Performance

§101: 28.6% (-11.4% vs TC avg)
§103: 57.1% (+17.1% vs TC avg)
§102: 7.1% (-32.9% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Deltas are measured against the Tech Center average estimate; based on career data from 0 resolved cases.
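The per-statute deltas read as simple differences between the examiner's rate and the Tech Center average. The short sketch below recomputes the implied baseline from the displayed figures; treating delta as examiner_rate minus TC average is an assumption about how the dashboard derives these numbers, not documented behavior. Notably, every row implies the same 40.0% baseline, which supports that reading.

```python
# Recompute the implied Tech Center average from the displayed figures,
# assuming delta = examiner_rate - tc_average (an assumption about the
# dashboard's methodology, not documented behavior).
rates = {
    "101": (28.6, -11.4),
    "103": (57.1, +17.1),
    "102": (7.1, -32.9),
    "112": (7.1, -32.9),
}

for statute, (examiner_rate, delta) in rates.items():
    implied_tc_avg = examiner_rate - delta
    print(f"§{statute}: examiner {examiner_rate}% -> "
          f"implied TC average {implied_tc_avg:.1f}%")
# Every row prints a 40.0% baseline.
```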

Office Action

Rejection grounds: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This Office action is sent in response to Applicant's communication received on 7/10/2024 for Application No. 18/768,205. The Office hereby acknowledges receipt of the following, placed of record in the file: Specification, Abstract, Oath/Declaration, and Claims.

Status of the Claims

Claims 1-20 are presented for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 7/10/2024 was filed before the mailing date of the first Office action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, as explained below.

Claim 1 recites a system, comprising: at least one processor; and at least one memory that stores executable instructions that, when executed by the at least one processor, facilitate performance of operations, comprising: (a) analyzing first text that is received based on first user input data, and context of the first text, the analyzing using a large language model to identify a first recommendation to alter the first text to satisfy an inclusive-language criterion; (b) receiving user feedback data based on the first recommendation; (c) tuning the large language model based on the user feedback data, to produce an updated large language model; and (d) analyzing second text received based on second user input data with the updated large language model to identify a second recommendation to alter the second text to satisfy the inclusive-language criterion.

Step (a) comprises a mental process: it can be performed by a human, as a person can analyze text, consider context, and determine a recommendation. Step (b) is a mental step, because a person can receive feedback from another person or entity. Step (c) involves tuning a large language model and constitutes updating parameters using computational techniques; this is an additional element and comprises a generic computer-implemented operation. Step (d) is a mental process, similar to step (a): it can be performed by a human, as a person can analyze text and determine a recommendation.

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claim recites at least a system. Thus, the claim is a machine, which is one of the statutory categories of invention. (Step 1: YES).

Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. As discussed above, the broadest reasonable interpretation of steps (a), (b), and (d) recites a mental process. Specifically, step (a) can be performed by a human, as a person can analyze text, consider context, and determine a recommendation to alter the text. Step (b) can be performed by a human, as a person can receive feedback from another individual regarding the recommendation. Step (d) can be performed by a human, as a person can analyze text and determine a recommendation to alter the text based on context. Hence the claim encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. (Step 2A, Prong One: YES).

Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). The claim recites additional elements including at least one processor, at least one memory storing executable instructions, receiving user feedback data, and tuning a large language model. The at least one processor and at least one memory are recited at a high level of generality and perform generic computer functions, such as executing instructions and storing data. The step of tuning the large language model based on the user feedback data constitutes insignificant extra-solution activity, as it merely updates parameters based on input data without imposing meaningful limitations on the judicial exception. Further, such tuning of a model is well-understood, routine, and conventional in the field of machine learning. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. (Step 2A: YES).

Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. As explained with respect to Step 2A, Prong Two, the LLM and the tuning of it comprise additional elements that do not contribute to the patentability of the claim as a whole. The additional element of the “LLM” in limitations (a)-(d) is at best mere instructions to “apply” the abstract ideas, which cannot provide an inventive concept. See MPEP 2106.05(f). At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well-understood, routine, and conventional in the field. See MPEP 2106.05(g). As known in the art, these elements are well-understood, routine, and conventional. For example, tuning is taught as conventional in US 20250356190 A1 (Fig. 2 and Para 0044, “FIG. 2 is a diagram 200 illustrating conventional finetuning of a full neural network”). Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. The claim is not patent eligible.
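For orientation, the claimed operations (a) through (d) form a simple analyze, feedback, tune, re-analyze loop. Below is a minimal Python sketch of that loop only; the `LanguageModel` class and its rule-based substitution table are invented stand-ins for an actual large language model, not anything disclosed in the application or the cited art.

```python
# Hypothetical sketch of claimed operations (a)-(d): a toy substitution
# table stands in for the LLM; real tuning would adjust model weights.

class LanguageModel:
    """Toy stand-in for an LLM that suggests inclusive alternatives."""

    def __init__(self):
        # Seed substitution table; a real LLM would generate these in context.
        self.substitutions = {"chairman": "chairperson", "manpower": "workforce"}

    def recommend(self, text: str, context: str) -> list[tuple[str, str]]:
        # (a)/(d): analyze text plus context, return alteration recommendations.
        return [(term, repl) for term, repl in self.substitutions.items()
                if term in text.lower()]

    def tune(self, feedback: dict[tuple[str, str], bool]) -> "LanguageModel":
        # (c): fold user feedback back in; here we just drop rejected pairs.
        self.substitutions = {t: r for t, r in self.substitutions.items()
                              if feedback.get((t, r), True)}
        return self


model = LanguageModel()

# (a) analyze first text and its context
first = model.recommend("The chairman approved the plan.", context="meeting notes")
print("first recommendations:", first)

# (b) receive user feedback data based on the first recommendation
feedback = {rec: True for rec in first}  # user accepts the suggestion

# (c) tune the model on the feedback to produce an updated model
updated = model.tune(feedback)

# (d) analyze second text with the updated model
second = updated.recommend("We need more manpower.", context="staffing memo")
print("second recommendations:", second)
```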
Claim 2 constitutes data gathering and evaluation; this can be performed by a human, as a person can review feedback and determine whether it is approved. Claim 3 constitutes evaluation of information; a person can determine whether cumulative feedback is approved. Claim 4 constitutes categorization; a person can identify approval data and determine whether to refrain from acting based on rejected feedback. Claim 5 constitutes insignificant extra-solution activity, as it merely updates a model based on approved data, which is well-understood, routine, and conventional in the art. Claim 6 constitutes data processing and evaluation; a person can use example pairs of inclusive/exclusive language to guide how the text should be modified. Claim 7 constitutes categorization; a person can specialize in text classification or text generation. Claim 8 constitutes insignificant extra-solution activity, as it recites further tuning of a model, which is shown in Step 2B to be well-understood, routine, and conventional in the art. Claim 9 constitutes data processing; a person can provide multiple example prompts and expected outputs to guide text modifications.

Claims 10 and 15 recite a mental process, as the steps of analyzing, receiving, and tuning can be performed by a human, similar to claim 1. Accordingly, the analysis set forth above with respect to claim 1 is applicable. Claim 11 constitutes data processing; a person can iteratively refine decisions based on accumulated feedback. Claim 12 constitutes data output; a person can provide a recommendation via an email or similar communication. Claim 13 constitutes data processing; a person can use a pair of inputs and outputs to guide how language should be modified. Claim 14 constitutes insignificant extra-solution activity, as it recites updating and inputting data pairs into a model, which is well-understood, routine, and conventional in the art. Claims 16 through 20 likewise constitute data output; a person can provide a recommendation via a word processor program (claim 16), a team collaboration application (claim 17), an enterprise management program (claim 18), an enterprise social networking service (claim 19), or a wiki service (claim 20).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: (1) determining the scope and contents of the prior art; (2) ascertaining the differences between the prior art and the claims at issue; (3) resolving the level of ordinary skill in the pertinent art; and (4) considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4, 5, 10, 11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 20250322168 A1) in view of Najib et al. (WO 2025000074 A1).

Regarding claim 1, Sharma teaches a system, comprising: at least one processor; and at least one memory that stores executable instructions that, when executed by the at least one processor, facilitate performance of operations (Para 0004, “a content generating system having a processor and a memory in communication with the processor wherein the memory stores executable instructions that, when executed by the processor alone or in combination with other processors, cause the content generating system to perform multiple functions”); comprising: analyzing first text that is received based on first user input data (Para 0004, “delivering the input text to an inclusive prompt recommendation model as the text is being received, the inclusive prompt recommendation model being trained to process the input text to: determine at least one of an intent and a context”), and context of the first text, the analyzing using a large language model to identify a first recommendation to alter the first text (Fig. 4, which teaches use of the model to recommend additional words to add to text; this comprises an alteration) to satisfy an inclusive-language criterion (Para 0004, “generate at least one inclusive prompt recommendation based at least in part on the determined at least one of the intent and context,” where the criterion is taught in the Abstract: “The system can include an ethical filtering mechanism for ensuring that prompt recommendations do not have language that directly or indirectly promotes bias and/or stereotypes.”); tuning the large language model based on the user feedback data, to produce an updated large language model (Para 0023, “The training system uses training data based on user interactions and feedback pertaining the use of the system which has been collected over time. This training refines the model over time and can improve the system's understanding and performance.”); and analyzing second text received based on second user input data with the updated large language model to identify a second recommendation to alter the second text to satisfy the inclusive-language criterion (Sharma teaches that the system can be improved over time [Para 0023, “This training refines the model over time and can improve the system's understanding and performance.”], which implies the ability to receive a recommendation multiple times and repeat the process already taught for the first recommendation).

Sharma does not teach receiving user feedback data based on the first recommendation. However, Najib teaches receiving user feedback data based on the first recommendation (Para 0175, “the method 400 includes receiving review submissions from customers, the review submissions initiated after each customer transaction”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Najib, to gain the benefit of more opportunities to encourage customers to share their feedback (Para 0175, “triggering review prompts that encourage customers to share their feedback.”).
Regarding claim 2, Sharma does not teach the system wherein the operations further comprise: receiving moderator approval data that is indicative of the user feedback data being approved by a monitor before performing the tuning of the language model based on the user feedback data. However, Najib teaches this limitation (Para 0163, “the outcomes of the moderation (i.e., reviews being approved or declined) are fed back into the device 300 to further refine the learning models.”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Najib, to gain the benefit of a system able to evolve to incorporate new data (Para 0086, “ensures that the algorithms performed or deployed at or by the AI-based analysis module 122 continue to evolve in response to new data”).

Regarding claim 4, Sharma does not teach the system wherein the moderator approval data is first moderator approval data, wherein the user feedback data is first user feedback data, and wherein the operations further comprise: refraining from updating the large language model based on receiving second moderator approval data that is indicative of the second user feedback data being rejected. However, Najib teaches the first moderator approval data and first user feedback data (Para 0163; the system of moderation is taught in “the outcomes of the moderation (i.e., reviews being approved or declined) are fed back into the device 300 to further refine the learning models,” and during the first iteration of the process it would include the first moderation and first user feedback), and the refraining from updating the large language model based on receiving second moderator approval data that is indicative of the second user feedback data being rejected (Para 0148, “The decision-making module 340 automates the approval of reviews that align with moderation policies and rejects those that do not”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Najib, to gain the benefit of a system able to evolve to incorporate new data (Para 0086, “ensures that the algorithms performed or deployed at or by the AI-based analysis module 122 continue to evolve in response to new data”).

Regarding claim 5, Sharma does not teach the system wherein the tuning of the large language model is performed based on the receiving of the moderator approval data. However, Najib teaches this limitation (Para 0086, “where the outcomes of the moderation (i.e., reviews being approved or declined) are fed back into the system 100 to further refine the learning models”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Najib, to gain the benefit of a system able to evolve to incorporate new data (Para 0086, “ensures that the algorithms performed or deployed at or by the AI-based analysis module 122 continue to evolve in response to new data”).
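Claims 2, 4, and 5 gate the tuning step on moderator approval. The following is a hedged sketch of that gating logic, with a plain substitution table standing in for the model; all names and data here are hypothetical, not taken from the application or the cited references.

```python
# Hypothetical moderator gate over the tuning step (claims 2, 4, 5).
# The "model" here is just a substitution table standing in for an LLM.

def tune_if_approved(substitutions: dict[str, str],
                     feedback: dict[str, bool],
                     moderator_approved: bool) -> dict[str, str]:
    """Apply user feedback only when a moderator approved it (claims 2, 5);
    refrain from updating when the feedback was rejected (claim 4)."""
    if not moderator_approved:
        return substitutions                      # leave the model untouched
    return {term: repl for term, repl in substitutions.items()
            if feedback.get(term, True)}          # drop user-rejected pairs

subs = {"chairman": "chairperson", "manpower": "workforce"}
fb = {"manpower": False}                          # user rejected one suggestion

print(tune_if_approved(subs, fb, moderator_approved=True))   # pair removed
print(tune_if_approved(subs, fb, moderator_approved=False))  # unchanged
```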
Regarding claim 10, Sharma teaches a method, comprising: determining, by a large language model of a system comprising at least one processor, a first recommendation to alter first text (Fig. 4, which teaches use of the model to recommend additional words to add to text; this comprises an alteration) to utilize more-inclusive language compared to the first text according to a defined inclusivity criterion (Abstract, “The system can include an ethical filtering mechanism for ensuring that prompt recommendations do not have language that directly or indirectly promotes bias and/or stereotypes.”), wherein the first text is received based on first user input data, and wherein the determining is based on a context of the first text (Para 0004, “delivering the input text to an inclusive prompt recommendation model as the text is being received, the inclusive prompt recommendation model being trained to process the input text to: determine at least one of an intent and a context” and “generate at least one inclusive prompt recommendation based at least in part on the determined at least one of the intent and context”); tuning, by the system, the large language model based on the user feedback data, to produce an updated large language model (Para 0023, “The training system uses training data based on user interactions and feedback pertaining the use of the system which has been collected over time. This training refines the model over time and can improve the system's understanding and performance.”); and analyzing, by the system, second text received based on second user input data with the updated large language model to identify a second recommendation to alter the second text to utilize more-inclusive language (Sharma teaches that the system can be improved over time [Para 0023], which implies the ability to receive a recommendation multiple times and repeat the process already taught for the first recommendation). Sharma does not teach receiving, by the system, user feedback data based on the first recommendation. However, Najib teaches this limitation (Para 0175, “the method 400 includes receiving review submissions from customers, the review submissions initiated after each customer transaction”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Najib, to gain the benefit of more opportunities to encourage customers to share their feedback (Para 0175, “triggering review prompts that encourage customers to share their feedback.”).

Regarding claim 11, Sharma teaches the method comprising: iteratively updating, by the system, the large language model based on a group of user feedback data that comprises the user feedback data (Para 0023, “The training system uses training data based on user interactions and feedback pertaining the use of the system which has been collected over time. This training refines the model over time and can improve the system's understanding and performance.”).
Regarding claim 15, Sharma teaches a non-transitory computer-readable medium comprising instructions that, in response to execution, cause a system comprising at least one processor to perform operations, comprising: providing a first recommendation to alter first text to utilize inclusive language, wherein the first text is received based on first user input data, wherein the first recommendation is determined with a large language model, and wherein the first recommendation is determined based on a context of the first text (Para 0004, “delivering the input text to an inclusive prompt recommendation model as the text is being received, the inclusive prompt recommendation model being trained to process the input text to: determine at least one of an intent and a context” and “generate at least one inclusive prompt recommendation based at least in part on the determined at least one of the intent and context”); and tuning, by the system, the large language model based on the user feedback data, to produce an updated large language model (Para 0023, “The training system uses training data based on user interactions and feedback pertaining the use of the system which has been collected over time. This training refines the model over time and can improve the system's understanding and performance.”). Sharma does not teach receiving, by the system, user feedback data based on the first recommendation. However, Najib teaches this limitation (Para 0175, “the method 400 includes receiving review submissions from customers, the review submissions initiated after each customer transaction”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Najib, to gain the benefit of more opportunities to encourage customers to share their feedback (Para 0175, “triggering review prompts that encourage customers to share their feedback.”).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 20250322168 A1) in view of Najib et al. (WO 2025000074 A1), as applied to claims 1 and 2 above, and further in view of Khumbare et al. (US 20210082098 A1). Sharma modified by Najib does not teach the system wherein the moderator approval data indicates approval of cumulative user feedback data that comprises the user feedback data. However, Khumbare teaches the system wherein the data consists of cumulative user feedback data that comprises the user feedback data (Para 0027, “the disclosed method and system performs selective re-training of deep-learning models based on cumulative user feedback and accumulated training data”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Khumbare, to gain the benefit of a system able to evolve to incorporate new data (Para 0086, “ensures that the algorithms performed or deployed at or by the AI-based analysis module 122 continue to evolve in response to new data”).

Claims 6, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 20250322168 A1) in view of Najib et al. (WO 2025000074 A1), as applied to claims 1, 2, 4, 5, 10, 11, and 15 above, and further in view of Khumbare et al. (US 20210082098 A1).
Regarding claim 6, Sharma teaches the system wherein the operations further comprise: before analyzing the first text that is received based on the first user input data with the large language model, tuning the large language model (Para 0037, pre-trained model related to inclusive conditions), wherein respective pairs of the group of pairs comprise respective corresponding expected inclusive language examples (the inclusive conditions). Sharma modified by Najib does not teach tuning the large language model with a group of pairs, wherein respective pairs of the group of pairs comprise respective exclusive language examples and corresponding expected inclusive language examples. However, Hajarnis teaches tuning the large language model with a group of pairs (Para 0083, inclusive/exclusive detection), wherein respective pairs of the group of pairs comprise respective exclusive language examples and corresponding expected inclusive language examples (Para 0083, “For the exclusive word detection, processing device 102 may execute deep job profile customization application 108 to scan the text of the job description for any potential exclusive words that are stored in an exclusive word dictionary. Responsive to detecting any exclusive words, processing device 102 may determine inclusive words that semantically similar to the exclusive words and present these inclusive words as alternatives to the exclusive words on a user interface”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Hajarnis, to gain the benefit of reducing potential bias of LLMs (Para 0083, “help preempt potential bias of a previously trained large language model”).

Regarding claim 13, Sharma modified by Najib does not teach the method wherein the large language model has been tuned on pairs comprising respective inputs and respective outputs, wherein the respective inputs comprise respective examples of exclusive language, and wherein the respective outputs comprise respective corresponding examples of inclusive language, and further comprising: updating the pairs offline. However, Hajarnis teaches the method wherein the large language model has been tuned on pairs comprising respective inputs and respective outputs (Para 0083, inclusive/exclusive detection), wherein the respective outputs comprise respective corresponding examples of inclusive language (Para 0083, quoted above), and further comprising: updating the pairs offline (Fig. 11, which teaches the training process occurring before deployment, hence offline). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Hajarnis, to gain the benefit of reducing potential bias of LLMs (Para 0083, “help preempt potential bias of a previously trained large language model”).
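The Hajarnis passage quoted above describes a dictionary scan that pairs detected exclusive terms with inclusive alternatives. The toy sketch below illustrates that general pattern; the word list and function name are invented for illustration, not drawn from the reference.

```python
import re

# Illustrative exclusive -> inclusive pairs (claims 6, 13); the entries
# are invented examples, not taken from any cited reference.
PAIRS = {
    "ninja": "expert",
    "guys": "everyone",
    "blacklist": "blocklist",
}

def suggest_alternatives(text: str) -> list[tuple[str, str]]:
    """Scan text for 'exclusive' terms and pair each with an alternative."""
    hits = []
    for exclusive, inclusive in PAIRS.items():
        if re.search(rf"\b{re.escape(exclusive)}\b", text, re.IGNORECASE):
            hits.append((exclusive, inclusive))
    return hits

print(suggest_alternatives("Hey guys, add them to the blacklist."))
# -> [('guys', 'everyone'), ('blacklist', 'blocklist')]
```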
Regarding claim 14, Sharma modified by Najib does not teach the method wherein updating the pairs offline produces updated pairs, wherein the tuning of the large language model comprises: inputting the updated pairs into the large language model. However, Hajarnis teaches the method wherein updating the pairs offline produces updated pairs (Para 0065, “Once the data has passed through all layers, the final output may be produced, representing the LLM's prediction or response to the input prompt. This output may be evaluated against an expected result, and the difference informs the LLM's adjustments during the backpropagation step”; this teaches inputting pairs and using the output to improve the LLM, which will thus lead to updated pairs), wherein the tuning of the large language model comprises: inputting the updated pairs into the large language model (Para 0063, “textual input data may be fed into the neural network,” and Para 0065, “Once the data has passed through all layers, the final output may be produced, representing the LLM's prediction or response to the input prompt. This output may be evaluated against an expected result”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Hajarnis, to gain the benefit of learning and improving the model over time (Para 0065, “model improvement over time.”).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 20250322168 A1) in view of Najib et al. (WO 2025000074 A1), as applied to claims 1 and 6 above, and further in view of Mariko et al. (US 20250298958 A1). Sharma modified by Najib does not teach the system wherein the large language model is tuned to specialize in text classification or text-to-text generation. However, Mariko teaches this limitation (Para 0027, “Fine-tuned or domain-specific models are LLMs that have undergone additional training on domain-specific data to improve their performance in particular areas or with particular tasks like text classification and language generation”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Mariko, to gain the benefit of improving performance in particular areas (Para 0027).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 20250322168 A1) in view of Najib et al. (WO 2025000074 A1), as applied to claims 1 and 6 above, and further in view of Chen et al. (US 20250252301 A1). Sharma modified by Najib teaches wherein the operations comprise: tuning the large language model with the group of pairs (as applied to claim 6). Sharma modified by Najib does not teach tuning the model with a low-rank adaptation of a defined large language models process. However, Chen teaches tuning the model with a low-rank adaptation of a defined large language models process (Para 0025, “LoRA is a parameter-efficient fine-tuning (PEFT) method that approximates changes in the weights as a product of two low-rank matrices A and B and updates A and B incrementally. The low-rank matrices, A and B are initialized from a normal distribution and zero, respectively. This initialization facilitates the initial fine-tuning with pre-trained weights.”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Chen, to gain the benefit of minimizing the number of trainable parameters (Abstract, “efficiently minimizing the number of trainable parameters”).
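Chen's quoted description of LoRA is the standard formulation: the weight update is approximated as a product of two low-rank matrices, with A drawn from a normal distribution and B initialized to zero. Below is a minimal NumPy sketch of that idea, with invented dimensions and rank. The parameter count printed at the end is the point of the technique: training r*(d+k) values instead of d*k.

```python
import numpy as np

# Minimal LoRA-style update (per the Chen quotation): approximate the
# weight change dW as a product B @ A of two low-rank matrices.
d, k, r = 64, 64, 4                  # layer dims and (invented) rank r << d

rng = np.random.default_rng(0)
W0 = rng.normal(size=(d, k))         # frozen pre-trained weights
A = rng.normal(size=(r, k))          # initialized from a normal distribution
B = np.zeros((d, r))                 # initialized to zero, per the quote

def forward(x: np.ndarray) -> np.ndarray:
    # Effective weights are W0 + B @ A; only A and B are trained,
    # so the trainable-parameter count drops from d*k to r*(d+k).
    return (W0 + B @ A) @ x

x = rng.normal(size=k)
print(forward(x).shape)              # (64,)
print("trainable params:", A.size + B.size, "vs full:", W0.size)
```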
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 20250322168 A1) in view of Najib et al. (WO 2025000074 A1), as applied to claims 1 and 6 above, and further in view of Kaan et al. (WO 2025199345 A1). Sharma modified by Najib does not teach the system wherein the operations further comprise: before analyzing the first text that is received based on the first user input data with the large language model, tuning the large language model via providing a multi-shot prompt as input to the large language model, wherein the multi-shot prompt comprises a description of an intent to suggest inclusive language and an output that is to be output by the large language model. However, Kaan teaches tuning the large language model via providing a multi-shot prompt as input to the large language model, wherein the multi-shot prompt comprises a description of an intent and an output that is to be output by the large language model (Pg. 9, Ln. 8: “For instance, a system prompt with fine-tuning examples could be as follows: System prompt: ‘You will recieve a stream of text, your task is to determine if someone is talking to you, or if it's ambient conversation, and then extract the user's intent. Do not answer their question and always respond in valid JSON format.’ Example input: ‘The weather is nice today, but how will it be tomorrow?’ Example output:”; this teaches a description of intent (determine if someone is talking to you) and an output (the example output)). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Kaan, to gain the benefit of a more natural and intuitive user experience (Abstract, “This approach facilitates a more natural and intuitive user experience”).
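Claim 9's multi-shot prompt is an in-context technique: the prompt itself carries a description of the intent plus worked input/output examples. The sketch below shows how such a prompt might be assembled; the intent string and example pairs are invented, not taken from the claim or from Kaan.

```python
# Hypothetical assembly of a multi-shot prompt (claim 9): a description
# of the intent plus example input/output pairs, sent as model input.
INTENT = ("You suggest inclusive alternatives for exclusive language. "
          "Respond with the rewritten sentence only.")

EXAMPLES = [  # invented few-shot pairs
    ("We need three strong chairmen.", "We need three strong chairpersons."),
    ("Each employee should ask his manager.",
     "Each employee should ask their manager."),
]

def build_multi_shot_prompt(user_text: str) -> str:
    """Concatenate intent, worked examples, and the new input."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in EXAMPLES)
    return f"{INTENT}\n\n{shots}\n\nInput: {user_text}\nOutput:"

print(build_multi_shot_prompt("The firemen arrived quickly."))
```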
Claims 12, 16, 17, 18, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 20250322168 A1) in view of Najib et al. (WO 2025000074 A1), as applied to claims 10 and 15 above, and further in view of Seth et al. (US 20250342630 A1).

Regarding claim 12, Sharma modified by Najib does not teach providing, by the system, the first recommendation via a plugin to an email program. However, Seth does teach this limitation (Para 0052, “For example, the system can work on the web or within a virtual meeting and collaboration application (e.g., Microsoft Teams®) or an email application (e.g., Outlook®),” where the output (the first recommendation) is rendered via an email application).

Regarding claim 16, Sharma modified by Najib does not teach sending, by the system, the first recommendation to be rendered via a word processor program. However, Seth does teach this limitation (Para 0052, “Such applications can be a stand-alone applications, a plug-in or an Edit button of any application on the client device 105, such as the browser application 112, the native application 114, and the like,” where a word processor program falls under being an application on the client device).

Regarding claim 17, Sharma modified by Najib does not teach sending, by the system, the first recommendation to be rendered via a team collaboration application. However, Seth does teach this limitation (Para 0052, “For example, the system can work on the web or within a virtual meeting and collaboration application (e.g., Microsoft Teams®),” where Microsoft Teams comprises a team collaboration application).

Regarding claim 18, Sharma modified by Najib does not teach sending, by the system, the first recommendation to be rendered via an enterprise management program. However, Seth does teach this limitation (Para 0052, quoted above for claim 16, where an enterprise management program falls under being an application on the client device).

Regarding claim 19, Sharma modified by Najib does not teach sending, by the system, the first recommendation to be rendered via an enterprise social networking service. However, Seth does teach this limitation (Para 0052, “The system can also work within a social media website/application (e.g., Facebook®, Instagram®)”).

Regarding claim 20, Sharma modified by Najib does not teach sending, by the system, the first recommendation to be rendered via a wiki service. However, Seth does teach this limitation (Para 0052, quoted above for claim 16, where a wiki service is interpreted to be accessed through a browser application).

For each of claims 12 and 16-20, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sharma to incorporate the teachings of Seth, to allow more options for rendering the output.
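Claims 12 and 16 through 20 differ only in the rendering channel. The small sketch below shows the kind of channel dispatch this implies; the channel list mirrors the claims, while the print-based delivery is a placeholder for each plugin's real integration.

```python
# Hypothetical dispatch of one recommendation to the channels recited in
# claims 12 and 16-20; printing stands in for each plugin's real delivery.
CHANNELS = [
    "email program",                          # claim 12
    "word processor program",                 # claim 16
    "team collaboration application",         # claim 17
    "enterprise management program",          # claim 18
    "enterprise social networking service",   # claim 19
    "wiki service",                           # claim 20
]

def render_recommendation(recommendation: str, channel: str) -> None:
    """Send one recommendation to one rendering channel (stubbed)."""
    print(f"[{channel}] {recommendation}")

for channel in CHANNELS:
    render_recommendation('Consider "chairperson" instead of "chairman".',
                          channel)
```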
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL ALAN FOSTER JR., whose telephone number is (571) 272-8874. The examiner can normally be reached M-Th, 8:00am-6:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hai Phan, can be reached at (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL A FOSTER JR/
Examiner, Art Unit 2654

/HAI PHAN/
Supervisory Patent Examiner, Art Unit 2654

Prosecution Timeline

Jul 10, 2024 — Application Filed
Mar 23, 2026 — Non-Final Rejection, §101 and §103 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
