Prosecution Insights
Last updated: April 19, 2026
Application No. 18/441,714

IMPLICIT PROMPT REWRITING

Non-Final OA (§101, §103, §112)
Filed: Feb 14, 2024
Examiner: NGUYEN, QUYNH H
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (941 granted / 1078 resolved; +25.3% vs TC avg; above average)
Interview Lift: +17.2% among resolved cases with interview (strong)
Avg Prosecution: 2y 8m (29 currently pending)
Total Applications: 1107 across all art units
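The headline figures above reduce to simple ratios over the examiner's resolved cases. The granted/resolved totals come from this page; the with/without-interview split in the sketch below is hypothetical, back-solved only to illustrate how a roughly +17.2-point lift arises, since the per-bucket counts are not published here.

```python
# Career allow rate from the page's stated counts.
granted, resolved = 941, 1078
print(f"Career allow rate: {granted / resolved:.1%}")  # ~87.3%

# Hypothetical with/without-interview split (illustrative only; the page
# does not publish these bucket counts). Chosen to roughly reproduce the
# stated +17.2-point interview lift.
with_granted, with_resolved = 341, 344                     # ~99.1% allow rate
wo_granted = granted - with_granted                        # remaining grants
wo_resolved = resolved - with_resolved                     # remaining resolutions
lift = with_granted / with_resolved - wo_granted / wo_resolved
print(f"Interview lift: {lift:+.1%}")                      # ~+17.4 points
```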

Statute-Specific Performance

§101: 18.6% (-21.4% vs TC avg)
§103: 42.7% (+2.7% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)
TC avg = Tech Center average estimate • Based on career data from 1078 resolved cases
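The "vs TC avg" deltas read as percentage-point differences, so the Tech Center baseline behind each comparison can be back-solved as rate minus delta. A small sketch, assuming point (not relative) deltas:

```python
# Back-solving the Tech Center average rate for each statute from the
# examiner rate and its "vs TC avg" delta (assumed to be in percentage
# points, as the signs above suggest).
rates = {"§101": (18.6, -21.4), "§103": (42.7, +2.7),
         "§102": (7.4, -32.6), "§112": (10.3, -29.7)}
for statute, (rate, delta) in rates.items():
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {rate - delta:.1f}%")
# e.g. §101: examiner 18.6% vs TC avg 40.0%
```

Notably, all four statutes back-solve to the same 40.0% baseline, which suggests the deltas are computed against a single Tech Center figure rather than per-statute averages.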

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 101

1. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Each of the independent claims recites steps that result in obtaining a query, generating a description for the query (the description being used to identify the top-k most similar prompts), generating an optimized prompt, and generating an evaluation prompt to determine which response is more relevant to the query. All of the recited steps are processes that, under their broadest reasonable interpretation, cover limitations falling under organized human activity. The claim features, under their broadest reasonable interpretation, are certain methods of organizing human activity performed by generic computer components. For example, the "receive" [human behavior: obtain, collect], "extract" [human activity: take out, remove], "generate" [human behavior: create, make, produce], "identify" [human behavior: name, recognize], "provide" [human behavior: give, issue, supply], and "surface" [human behavior: come up, arise] steps, in the context of this claim, encompass methods of organized human activity. If the claim limitations, under their broadest reasonable interpretation, cover a fundamental economic practice, commercial or legal interaction, or managing personal behavior or relationships or interactions between people, but for the recitation of generic computer components, then they fall within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. The additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. "[A]fter determining that a claim is directed to a judicial exception, 'we then ask, [w]hat else is there in the claims before us?'" MPEP 2106.05 (emphasis in MPEP), citing Mayo, 566 U.S. at 78. "What is needed is an inventive concept in the non-abstract application realm." SAP America, Inc. v. InvestPic, LLC, Appeal No. 2017-2081 (Fed. Cir. 2018). For step two, the examiner must "determine whether the claims do significantly more than simply describe [the] abstract method" and thus transform the abstract idea into patent-eligible subject matter. Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709 (Fed. Cir. 2014). A primary consideration when determining whether a claim recites "significantly more" than an abstract idea is whether the additional element(s) are well-understood, routine, conventional activities previously known to the industry. See MPEP 2106.05(d). "If the additional element (or combination of elements) is a specific limitation other than what is well-understood, routine and conventional in the field, for instance because it is an unconventional step that confines the claim to a particular useful application of the judicial exception, then this consideration favors eligibility. If, however, the additional element (or combination of elements) is no more than well-understood, routine, conventional activities previously known to the industry, which is recited at a high level of generality, then this consideration does not favor eligibility." Id.

The Federal Circuit has held that "[w]hether something is well-understood, routine, and conventional to a skilled artisan at the time of the patent is a factual determination." Bahr, Robert (April 19, 2018), Changes in Examination Procedure Pertaining to Subject Matter Eligibility, Recent Subject Matter Eligibility Decision (Berkheimer v. HP, Inc.), citing Berkheimer at 1369. "As set forth in MPEP 2106.05(d)(I), an examiner should conclude that an element (or combination of elements) represents well-understood, routine, conventional activity only when the examiner can readily conclude that the element(s) is widely prevalent or in common use in the relevant industry. This memo [] clarifies that such a conclusion must be based upon a factual determination that is supported as discussed in section III [of the memo]." Berkheimer Memo at 3 (emphasis in memo).

Generally, "[i]f a patent uses generic computer components to implement an invention, it fails to recite an inventive concept under Alice step two." West View Research v. Audi, CAFC Appeal Nos. 2016-1947-51 (Fed. Cir. 04/19/2017), citing Mortg. Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d 1314, 1324-25 (Fed. Cir. 2016) (explaining that "generic computer components such as an 'interface,' 'network,' and 'database' ... do not satisfy the inventive concept requirement"); but see Bascom (finding that an inventive concept may be found in the non-conventional and non-generic arrangement of generic computer components, i.e., the installation of a filtering tool at a specific location, remote from the end-users, with customizable filtering features specific to each end user).

In accordance with the above guidance, the examiner has searched the claim(s) to determine whether there are any "additional elements" in the claims that constitute an "inventive concept," thereby rendering the claims eligible for patenting even if they are directed to an abstract idea. Alice, 134 S. Ct. 2347 (2014). Those "additional features" must be more than "well understood, routine, conventional activity." See Alice. To note, "under the Mayo/Alice framework, a claim directed to a newly discovered ... abstract idea [] cannot rely on the novelty of that discovery for the inventive concept necessary for patent eligibility." Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376 (Fed. Cir. 2016); Diamond v. Diehr, 450 U.S. 175, 188-89 (1981). As an example, the Federal Circuit has indicated that an "inventive concept" can be found where the claims indicate the technological steps that are undertaken to overcome the stated problem(s) identified in Applicant's originally-filed Specification. See Trading Techs. Int'l, Inc. v. CQG, Inc., No. 2016-1616 (Fed. Cir. 2017); but see IV v. Erie Indemnity, No. 2016-1128 (Fed. Cir. March 7, 2017) ("The claims are not focused on how usage of the XML tags alters the database in a way that leads to an improvement in technology of computer databases, as in Enfish.") (emphasis in original) and IV v. Capital One, Nos. 2016-1077 (Fed. Cir. March 7, 2017) ("Indeed, the claim language here provides only a result-oriented solution, with insufficient detail for how a computer accomplishes it. Our law demands more. See Elec. Power Grp., 830 F.3d 1356 (Fed. Cir. 2016) (cautioning against claims 'so result focused, so functional, as to effectively cover any solution to an identified problem.')"). Furthermore, "[a]bstraction is avoided or overcome when a proposed new application or computer-implemented function is not simply the generalized use of a computer as a tool to conduct a known or obvious process, but instead is an improvement to the capability of the system as a whole." Trading Techs. Int'l, Inc. v. CQG, Inc., No. 2016-1616 (Fed. Cir. 2017) (emphasis added).

In the search for an inventive concept, the Berkheimer Memo describes that an additional element (or combination of elements) is not well-understood, routine or conventional unless the examiner finds, and expressly supports a rejection in writing with, one or more of the following:

- A citation to an express statement in the specification or to a statement made by an applicant during prosecution that demonstrates the well-understood, routine, conventional nature of the additional element(s).
- A citation to one or more of the court decisions discussed in the MPEP as noting the well-understood, routine, conventional nature of the additional element(s).
- A citation to a publication that demonstrates the well-understood, routine, conventional nature of the additional element(s).
- A statement that the examiner is taking official notice of the well-understood, routine, conventional nature of the additional element(s).

See Berkheimer Memo at 3-4. Accordingly, the examiner refers to the following generically-recited computer elements with their associated functions (and associated factual finding(s)), which are considered, individually and in combination, to be routine, conventional, and well-understood: "a system for rewriting a prompt for a language model, the system comprising"; "a computer-implemented method of rewriting a prompt, comprising"; "a system for rewriting a prompt, comprising". As set forth in MPEP § 2106.05(d)(I), an examiner should conclude that an element (or combination of elements) represents well-understood, routine, conventional activity only when the examiner can readily conclude that the element(s) is widely prevalent or in common use in the relevant industry. The Berkheimer memo clarifies that such a conclusion must be based upon a factual determination that is supported as discussed in section III of the memo. As seen in paragraphs [28, 99, 104, 111] of the instant Specification and Symantec, 838 F.3d at 1321, 110 USPQ2d at 1362, the elements are viewed to be well-understood, routine and conventional.

In sum, the Examiner finds that the claims "are directed to the use of conventional or generic technology in a nascent but well-known environment, without any claim that the invention reflects an inventive solution to any problem presented by combining the two." In re TLI Communications LLC, No. 2015-1372 (Fed. Cir. May 17, 2016). Similar to the claims in SAP v. InvestPic, "[t]he claims here are ineligible because their innovation is an innovation in ineligible subject matter." Appeal No. 2017-2081 (Fed. Cir. 2018). In other words, "the advance lies entirely in the realm of abstract ideas, with no plausibly alleged innovation in the non-abstract application realm." Id. Accordingly, when considered individually and in ordered combination, the examiner finds the claims to be directed to ineligible subject matter.

Next, it is determined whether the claim integrates the judicial exception into a practical application, by identifying any "additional elements recited in the claim beyond the judicial exception(s)" and evaluating those elements to determine whether they integrate the judicial exception into a recognized practical application. In this case, the additional elements do not integrate the judicial exception into a practical application. The claim does not recite (i) an improvement to the functionality of a computer or other technology or technical field; (ii) a "particular machine" to apply or use the judicial exception; (iii) a particular transformation of an article to a different thing or state; or (iv) any other meaningful limitation. The additional elements beyond the judicial exception are a system comprising at least one processor, memory storing instructions, and a language model. Using a computing device to identify and determine a value and disposition of an object is merely applying the judicial exception using a generic computing component. Additionally, the claim identifies and determines a value and disposition of an object; the claim does not improve the functioning of the computing device, or other technology or field. The claims do not recite specific limitations (alone or when considered as an ordered combination) that were not well understood, routine, and conventional. As set forth in the Specification, the disclosed subject matter can be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.

Dependent claims 2-11, 15-17, and 19-20 include further recited limitations that do not integrate the abstract idea into a practical application, and their additional elements, taken individually and in combination, do not contribute to an inventive concept. In other words, the dependent claims are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 112

2. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 7 recites the limitation "the number of similar prompts" in line 1. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1-8, 14-15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over the submitted prior art Yager ("Domain specific ChatBots for Science using Embeddings") in view of Lester et al. (US 2024/0378196).

As to claim 1, Yager teaches a system for rewriting a prompt for a language model comprising: receive a query at an application (Section 2.1 and Fig. 1 – user query sent to a ML embedding model); generate a description for the query based on information extracted from the query (Section 2.1 and Fig. 1 – the user query is sent to a ML embedding model which computes an embedding vector that captures the semantic content of the input; an embedding of the user query is computed, encoding the semantic meaning of the query, i.e., similar to a description, for example, text embeddings to retrieve potentially-relevant text extracts ("chunks"); LLM chatbot lookup involves constructing an input prompt from the user query that optionally prepends some additional contextual information ... adding in text chunks relevant to the query); generate a prompt including the query and the identified similar prompts to form an optimized prompt (Section 2.1 and Fig. 1 – a small set (5-10) of the most relevant chunks are concatenated and prepended to the user query to construct a prompt); provide the constructed prompt text as input to the language model (Section 2.1 and Fig. 1); receive from the language model the optimized prompt (Section 2.1 and Fig. 1 – a small set (5-10) of the most relevant chunks are concatenated and prepended to the user query to construct a prompt; this constructed prompt is then sent to a LLM); provide the optimized prompt as input to the language model (Section 2.1 and Fig. 1 – this constructed prompt is then sent to a LLM); receive, from the language model in response to the optimized prompt, an optimized response (Section 2.1 and Fig. 1 – this constructed prompt is then sent to a LLM, which generates a coherent response to the query); and surfacing the optimized response (Section 2.1 and Fig. 1 – generates a text response for the user; response displayed).

Yager does not explicitly discuss: at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the system to perform operations; identify example prompts, from a prompt library, that are similar to the query; generate a revision prompt from the similar example prompts. Lester teaches at least one processor and memory storing instructions that, when executed by the at least one processor, cause the system to perform operations ([0007, 0011, 0013, 0143, 0147]; claims 1 and 13); a semantic search can then be completed to find one or more associated prompts (e.g., similar pretrained prompts/example prompts). For example, the semantic search can involve comparing the initial/query prompt to a library of pretrained prompts, supplied by the service/cloud provider for various tasks ([0061]); the initial user prompt, or first prompt, can then be utilized for semantic search over a library of prompts (e.g., a library of second prompts, in which the library of second prompts includes pretrained prompts trained based on datasets not used by the user) ([0062]); and the prompts (i.e., second prompts) determined to be associated with the first prompt, and the metadata related to those prompts, can be ordered by their similarity to the query prompt. The second prompts, and associated metadata such as links to the dataset and prompt submitter information, can be returned to the user. The second prompts and/or their associated metadata can then be utilized to retrain or refine the first prompt. The prompt tuning can involve curriculum learning, multi-task learning, and/or retraining with the most similar second prompts being utilized as initialization points ([0064]). It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Lester into the teachings of Yager for the purpose of completing a semantic search to find one or more associated prompts, e.g., similar pretrained prompts/example prompts, with the second prompts and their associated metadata utilized to retrain or refine the first prompt.

As to claim 2, Yager teaches the system of claim 1, wherein generating the description comprises generating an extraction prompt including the query and providing the extraction prompt as input to the language model (Section 2.1 and Fig. 1 – text embeddings to retrieve potentially-relevant text extracts ("chunks") ... in the embedding strategy, adding in text chunks relevant to the query ... the user query is first sent to a ML embedding model, which computes an embedding vector (q) that captures the semantic content of the input. This vector is used to query a pre-computed database of text chunks. Text snippets that are similar to the query ("close" in the embedding space) are prepended to the user query to construct a prompt. The prompt is sent to a large language model (LLM)).

As to claim 3, Yager teaches the system of claim 1, wherein the operations further comprise extracting at least one of a domain or a task for the query (Section 2.1 and Fig. 1 – an embedding of the user query is computed, encoding the semantic meaning of the query, i.e., similar to a description; text embeddings retrieve potentially-relevant text extracts ("chunks"); LLM chatbot lookup involves constructing an input prompt from the user query that optionally prepends some additional contextual information ... adding in text chunks relevant to the query).

As to claim 4, Lester teaches the system of claim 3, wherein identifying the similar example prompts is based on at least one of the description, domain, or task of the query ([0061] – a semantic search can then be completed to find one or more associated prompts (e.g., similar pretrained prompts/example prompts). For example, the semantic search can involve comparing the initial/query prompt to a library of pretrained prompts, supplied by the service/cloud provider for various tasks).

As to claim 5, Yager teaches the system of claim 1, wherein the operations further comprise generating an embedding for the description of the query (Section 2.1 and Fig. 1 – an embedding of the user query is computed, encoding the semantic meaning of the query, i.e., similar to a description. In the embedding strategy, taking advantage of the space provided by the context window, adding in text chunks relevant to the query. Procedurally (Figure 1), this involves first computing the text embedding of the user query (q)).

As to claim 6, Lester teaches the system of claim 5, wherein identifying the similar prompts further comprises comparing the generated embedding ([0161] – generating an embedding for input data) to embeddings for the example prompts ([0061] – a semantic search can then be completed to find one or more associated prompts, e.g., similar pretrained prompts/example prompts) in the prompt library ([0062] – the initial user prompt, or first prompt, can then be utilized for semantic search over a library of prompts (e.g., a library of second prompts, in which the library of second prompts includes pretrained prompts trained based on datasets not used by the user)).

As to claim 7, Lester teaches the system of claim 1, wherein a semantic search can then be completed to find one or more associated prompts, e.g., similar pretrained prompts/example prompts ([0061]). Lester does not explicitly discuss that the number of similar prompts is between 2 and 10. It would have been obvious that the similar pretrained prompts could be 2 or more; it is purely a design choice.

As to claim 8, Lester teaches the system of claim 1, wherein the prompt library includes the example prompts ([0174] – Fig. 2 depicts a block diagram of an example prompt tuning system 200) and a description for each of the example prompts ([0061]); and Yager teaches an embedding for each description of the prompts (Section 2.1 and Fig. 1 – text embedding is a NLP method where text is converted into a real-valued vector that encodes the semantic meaning of the query, and an embedding of the user query is computed, i.e., similar to a description).

Claim 14 is rejected for the same reasons discussed above with respect to claim 1. Furthermore, Yager teaches extracting additional information for the query, wherein the additional information includes at least one of a description, a task, or a domain for the query (Section 2.1 and Fig. 1 – the user query is sent to a ML embedding model which computes an embedding vector that captures the semantic content of the input; an embedding of the user query is computed, encoding the semantic meaning of the query, i.e., similar to a description; text embeddings retrieve potentially-relevant text extracts ("chunks"); LLM chatbot lookup involves constructing an input prompt from the user query that optionally prepends some additional contextual information ... adding in text chunks relevant to the query); based on the additional information, identifying a top-k most similar (text chunks from the database; Section 2.1 and Fig. 1 – this embedding vector is compared to the precomputed embeddings across all text chunks; a small set (5-10) of the most relevant chunks are concatenated and prepended to the user query); and generating a prompt including the query and the top-k most similar (text chunks from the database; Section 2.1 and Fig. 1 – a small set (5-10) of the most relevant chunks are concatenated and prepended to the user query).

As to claim 15, Yager teaches the system of claim 14, wherein identifying the top-k most similar further comprises receiving an embedding for the additional information extracted for the query and comparing the received embedding with embeddings across all text chunks (text chunks from the database; Section 2.1 and Fig. 1 – the embedding vector is compared to the precomputed embeddings across all text chunks, and this vector is used to query a pre-computed database of text chunks; a small set (5-10) of the most relevant chunks are concatenated and prepended to the user query); and Lester teaches that a semantic search can then be completed to find one or more associated prompts (e.g., similar pretrained prompts/example prompts). For example, the semantic search can involve comparing the initial/query prompt to a library of pretrained prompts, supplied by the service/cloud provider for various tasks ([0061]).

As to claim 17, Yager teaches the system of claim 14, further comprising storing the optimized prompt (Section 2.1 and Fig. 1 – a small set (5-10) of the most relevant chunks are concatenated and prepended to the user query to construct a prompt) as an example prompt in the prompt library ([0061-0062, 0064] – the initial user prompt, or first prompt, can then be utilized for semantic search over a library of prompts (e.g., a library of second prompts, in which the library of second prompts includes pretrained prompts trained based on datasets not used by the user). These prompts can have associated metadata, such as the frozen model used, the date trained, and, most importantly, the dataset used).

Allowable Subject Matter

5. Claims 9 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claims 10-13 are objected to because they depend on objected claim 9. Claims 18-20 would be allowable if claim 18 is rewritten or amended to overcome the rejection(s) under 35 U.S.C. 101 set forth in this Office action.

The following is an examiner's statement of reasons for allowance: As to claim 18, the prior art of record fails to teach, or render obvious, alone or in combination, a system for rewriting a prompt comprising the claimed components, relationships, and functionalities as specifically recited in claim 18.

Conclusion

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUYNH H NGUYEN, whose telephone number is (571) 272-7489. The examiner can normally be reached Monday-Thursday, 7:30 AM - 5:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QUYNH H NGUYEN/
Primary Examiner, Art Unit 2693
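The claim 1 mapping above describes a standard retrieval-augmented prompt-rewriting loop: embed the query (or a description of it), retrieve the top-k most similar example prompts from a library by embedding similarity, prepend them to the query to form an optimized prompt, and send that to the language model. Below is a minimal runnable sketch of that loop; the embedding function, model call, and library contents are toy stand-ins, not code from Yager, Lester, or the application.

```python
import math
from collections import Counter

TOP_K = 3  # Yager Section 2.1 uses a small set (5-10) of most relevant chunks

def embed(text: str) -> dict:
    """Toy embedding: L2-normalized character-bigram counts.
    Stands in for the ML embedding model in the cited references."""
    bigrams = Counter(text[i:i + 2].lower() for i in range(len(text) - 1))
    norm = math.sqrt(sum(c * c for c in bigrams.values())) or 1.0
    return {k: v / norm for k, v in bigrams.items()}

def cosine(a: dict, b: dict) -> float:
    # Both vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(v * b.get(k, 0.0) for k, v in a.items())

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for the language model call."""
    return f"<response to {len(prompt)}-char optimized prompt>"

# Prompt library: example prompts with precomputed embeddings of their text
# (a real system would embed stored descriptions, per claim 8).
PROMPT_LIBRARY = [
    "Summarize the following scientific abstract in two sentences:",
    "Translate the user's question into a SQL query over the given schema:",
    "Answer the physics question step by step, citing equations used:",
]
LIBRARY_EMBEDDINGS = [embed(p) for p in PROMPT_LIBRARY]

def rewrite_and_answer(query: str) -> str:
    q = embed(query)  # description/embedding capturing the query's semantics
    # Identify the top-k most similar example prompts in the library.
    top = sorted(range(len(PROMPT_LIBRARY)),
                 key=lambda i: cosine(q, LIBRARY_EMBEDDINGS[i]),
                 reverse=True)[:TOP_K]
    examples = [PROMPT_LIBRARY[i] for i in top]
    # Prepend the similar examples to the query to form the optimized prompt.
    optimized_prompt = "\n\n".join(examples + [query])
    # Provide it to the language model and surface the response.
    return call_llm(optimized_prompt)

print(rewrite_and_answer("How should I summarize a physics abstract?"))
```

Incidentally, the antecedent-basis defect the §112 rejection flags in claim 7 corresponds to TOP_K here: the claim recites "the number of similar prompts" without first introducing that number.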

Prosecution Timeline

Feb 14, 2024: Application Filed
Jan 17, 2026: Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591740
METHODS AND SYSTEMS FOR GENERATING TEXTUAL FEATURES
2y 5m to grant • Granted Mar 31, 2026
Patent 12567409
RESTRICTING THIRD PARTY APPLICATION ACCESS TO AUDIO DATA CONTENT
2y 5m to grant • Granted Mar 03, 2026
Patent 12566920
System and Method to Generate and Enhance Dynamic Interactive Applications from Natural Language Using Artificial Intelligence
2y 5m to grant • Granted Mar 03, 2026
Patent 12563141
SYSTEM AND METHOD OF CONNECTING A CALLER TO A RECIPIENT BASED ON THE RECIPIENT'S STATUS AND RELATIONSHIP TO THE CALLER
2y 5m to grant • Granted Feb 24, 2026
Patent 12554761
DATA SOURCE CURATION FOR LARGE LANGUAGE MODEL (LLM) PROMPTS
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get these applications past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+17.2%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 1078 resolved cases by this examiner. Grant probability derived from career allow rate.
