Prosecution Insights
Last updated: April 19, 2026
Application No. 18/917,941

RETRIEVAL-AUGMENTED CONTENT GENERATION FOR LEGAL RESEARCH

Non-Final Office Action: §101, §102, §103

Filed: Oct 16, 2024
Examiner: FOROUHARNEJAD, FAEZEH
Art Unit: 2166
Tech Center: 2100 — Computer Architecture & Software
Assignee: Thomson Reuters Enterprise Centre GmbH
OA Round: 1 (Non-Final)

Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (70 granted / 104 resolved), +12.3% vs TC avg (above average)
Interview Lift: +31.4% on resolved cases with interview (strong)
Typical Timeline: 3y 11m avg prosecution; 19 applications currently pending
Career History: 123 total applications across all art units

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 104 resolved cases.
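The headline figures above can be reproduced from the raw counts. A quick check, assuming (as the dashboard appears to) that the allow rate is simply granted over resolved and that the "vs TC avg" figures are percentage-point deltas:

```python
# Reproduce the headline examiner statistics from the raw counts shown above.
# Assumes "allow rate" = granted / resolved, and that "vs TC avg" figures are
# simple percentage-point deltas (assumptions about this dashboard's method).

granted, resolved = 70, 104

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")        # 67.3%, shown rounded as 67%

delta_vs_tc = 12.3  # percentage points above the Tech Center average
tc_average = allow_rate - delta_vs_tc
print(f"Implied TC 2100 average: {tc_average:.1f}%")  # 55.0%
```

The same arithmetic applied to the statute-specific panel gives the implied Tech Center baselines (e.g., §101: 15.8 + 24.2 = 40.0%).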

Office Action

Rejections under §101, §102, and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis below of the claims' subject matter eligibility follows the guidance set forth in MPEP 2106, which has incorporated the 2019 PEG. Independent claim 1 recites a method, and independent claim 19 recites a method; therefore, Step 1 is satisfied for claims 1-19.

Independent claim 1 recites:

A method, comprising:
receiving, by one or more processors, input specifying a set of search criteria using natural language text;
executing, by the one or more processors, one or more searches based on the set of search criteria specified in the input, the one or more searches comprising a search of at least one data source;
obtaining, by the one or more processors, an initial set of search results based on the one or more searches;
providing, by the one or more processors, one or more prompts to one or more large language models (LLMs), wherein the one or more prompts comprise information associated with the initial set of search results, the set of search criteria, or both; and
outputting, by the one or more processors, a response to the input based on content generated by the one or more LLMs, wherein the response is generated by the one or more LLMs based on the prompt.

Step 1 Analysis: Claim 1 is directed to a method, which is a process, one of the statutory categories.

Step 2A Prong One Analysis: The claim is directed to an abstract idea. In particular, the claim recites mental processes, which are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). The above-noted limitations of receiving, executing, obtaining, providing, and outputting, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind; that is, nothing in the claim precludes these steps from practically being performed in the mind. For example, receiving input specifying a set of search criteria; executing one or more searches based on the set of search criteria specified in the input, the one or more searches comprising a search of at least one data source; obtaining an initial set of search results based on the one or more searches; providing one or more prompts to one or more LLMs, wherein the one or more prompts comprise information associated with the initial set of search results, the set of search criteria, or both; and outputting a response to the input based on content generated by the one or more LLMs, in the context of this claim, encompass concepts performed in the human mind (including observations, evaluations, judgments, and opinions) and can be performed with pen and paper. If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components, then they fall within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong Two Analysis: In Step 2A Prong Two, we identify whether there are any additional elements recited in the claim beyond the judicial exception(s) and evaluate those additional elements to determine whether they integrate the exception into a practical application. Claim 1 recites the additional elements of one or more processors and one or more large language models (LLMs). These elements are so generic that they represent no more than mere instructions to apply the judicial exception on a computer. These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The judicial exception is not integrated into a practical application.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration into a practical application, the additional elements of "one or more processors and one or more large language models (LLMs)" simply perform generic computer functions and amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Accordingly, these additional elements, taken individually and in combination, do not result in the claim as a whole amounting to significantly more than the judicial exception. The claim is not patent eligible.

Claims 2-18 depend from claim 1 and include all the limitations of claim 1; therefore, claims 2-18 recite the same abstract idea as claim 1.
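For orientation, the method recited in independent claim 1 describes a conventional retrieval-augmented generation (RAG) flow. A minimal sketch of that flow, in which `search_source` and `call_llm` are hypothetical stand-ins (not APIs from the application or the cited references):

```python
# Minimal sketch of the claim-1 flow: input -> search -> results -> prompt -> LLM response.
# `search_source` and `call_llm` are hypothetical placeholders for illustration only.

def search_source(criteria: str) -> list[str]:
    # Placeholder for "a search of at least one data source".
    corpus = {
        "negligence": "Palsgraf v. Long Island R.R. discusses proximate cause.",
        "contracts": "Hadley v. Baxendale limits consequential damages.",
    }
    return [text for topic, text in corpus.items() if topic in criteria.lower()]

def call_llm(prompt: str) -> str:
    # Placeholder for "one or more large language models (LLMs)".
    return f"Summary based on prompt ({len(prompt)} chars)."

def respond(natural_language_input: str) -> str:
    results = search_source(natural_language_input)  # the initial set of search results
    prompt = (                                       # prompt carries results and criteria
        f"Question: {natural_language_input}\n"
        "Context:\n" + "\n".join(results) +
        "\nAnswer using only the context above."
    )
    return call_llm(prompt)                          # response based on LLM-generated content

print(respond("Find cases on negligence"))
```

Each line of the sketch maps onto one recited limitation (receiving, executing, obtaining, providing, outputting), which is the element-by-element structure the rejections below follow.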
The dependent claims further recite limitations corresponding to the judicial exception recited in independent claim 1 and do not recite any additional elements; therefore, the judicial exception is not integrated into a practical application, nor do the claims amount to significantly more. Claims 2-18 depend from claim 1 and are rejected accordingly.

Independent claim 19 recites:

A method comprising:
receiving, by one or more processors, a set of search criteria via a graphical user interface;
providing, by the one or more processors, the set of search criteria or information derived from the set of search criteria as one or more prompts to one or more large language models (LLMs);
generating, by the one or more LLMs, textual content based on the one or more prompts, wherein the textual content comprises information associated with one or more legal issues associated with the set of search criteria.

Step 1 Analysis: Claim 19 is directed to a method, which is a process, one of the statutory categories.

Step 2A Prong One Analysis: The claim is directed to an abstract idea. In particular, the claim recites mental processes, which are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). The above-noted limitations of receiving, providing, and generating, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind; that is, nothing in the claim precludes these steps from practically being performed in the mind. For example, receiving a set of search criteria; providing the set of search criteria or information derived from the set of search criteria as one or more prompts to one or more LLMs; and generating textual content based on the one or more prompts, wherein the textual content comprises information associated with one or more legal issues associated with the set of search criteria, in the context of this claim, encompass concepts performed in the human mind (including observations, evaluations, judgments, and opinions) and can be performed with pen and paper. If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components, then they fall within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong Two Analysis: Claim 19 recites the additional elements of one or more processors, a graphical user interface, and one or more large language models (LLMs). These elements are so generic that they represent no more than mere instructions to apply the judicial exception on a computer. These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The judicial exception is not integrated into a practical application.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of "one or more processors, a graphical user interface and one or more large language models (LLMs)" simply perform generic computer functions and amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Accordingly, these additional elements, taken individually and in combination, do not result in the claim as a whole amounting to significantly more than the judicial exception. The claim is not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 16-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Revach (US 2025/0117381).

Regarding claim 1, Revach discloses:

A method, comprising: receiving, by one or more processors, input specifying a set of search criteria using natural language text; (Revach, [0095], e.g. in response to receiving the NL based input, determining, based on one or more criteria, whether to generate and execute multiple subqueries based on the NL based input … the one or more criteria can additionally or alternatively include one or more search criteria; [0007], e.g.
a NL based input (e.g., a textual query submitted via a search system interface).)

executing, by the one or more processors, one or more searches based on the set of search criteria specified in the input, the one or more searches comprising a search of at least one data source; (Revach, [0055] If, at block 354, the system determines the criteria are not satisfied, the system proceeds to block 356 and a search is performed based on the NL based input; [0012] search system resources (corresponding to "at least one data source") are utilized; [0044] The search result engine 130, for each of the subqueries of the subset selected by subset selection engine 128, interacts with search system(s) 140 (corresponding to "at least one data source") to obtain result(s) for the subquery.)

obtaining, by the one or more processors, an initial set of search results based on the one or more searches; (Revach, Fig. 5A; [0076], e.g. an NL based input 501A has been provided … and includes two search results (A and B) for the first subquery; [0044] The search result engine 130, for each of the subqueries of the subset selected by subset selection engine 128, interacts with search system(s) 140 to obtain result(s) for the subquery.)

providing, by the one or more processors, one or more prompts to one or more large language models (LLMs), (Revach, Fig. 3; [0057] At block 360, the system processes the subquery generation prompt, using an LLM, to generate LLM output; [0088], e.g. receiving natural language (NL) based input associated with a client device. The method further includes, in response to receiving the NL based input: generating a subquery generation prompt that includes the NL based input and additional NL content that promotes subquery generation.)

wherein the one or more prompts comprise information associated with the initial set of search results, the set of search criteria, or both; (Revach, Fig. 3, item 352 "IDENTIFY NL BASED INPUT", item 354 "CRITERIA? YES", item 358 "GENERATE SUBQUERY GENERATION PROMPT THAT INCLUDES NL BASED INPUT AND ADDITIONAL NL CONTENT THAT PROMOTES SUBQUERY GENERATION", item 360 "PROCESS SUBQUERY GENERATION PROMPT (corresponding to "prompts comprise … the set of search criteria"), USING LLM, TO GENERATE LLM OUTPUT"; [0088], e.g. generating a subquery generation prompt that includes the NL based input and additional NL content that promotes subquery generation; [0057] At block 360, the system processes the subquery generation prompt, using an LLM, to generate LLM output.)

and outputting, by the one or more processors, a response to the input (Revach, Fig. 3, item 352 "IDENTIFY NL BASED INPUT", item 372 "GENERATE RESPONSE BASED ON SEARCH RESULTS") based on content generated by the one or more LLMs, (Revach, Fig. 3, item 372 "GENERATE RESPONSE BASED ON SEARCH RESULTS", item 362 "GENERATE ONE OR MORE CANDIDATE SUBQUERIES BASED ON LLM OUTPUT" (corresponding to "content generated by the one or more LLMs")) wherein the response is generated by the one or more LLMs based on the prompt. (Revach, [0088], e.g. generating a response to the NL based input based on the corresponding search results for the candidate subqueries of the subset; [0010] the first subquery generation prompt can be processed, using the LLM, to generate first LLM output and the first LLM output utilized (e.g., decoded) to determine multiple first candidate subqueries; [0016], e.g. the response can be a shortened summary of the top ranked search results, such as a shortened summary that is generated based on processing, using an LLM, each of the search results along with a summarization prompt (e.g., "generate a summary of [search results]").)

Regarding claim 16, Revach discloses all of the features with respect to claim 1 as outlined above. Claim 16 further recites: wherein the response comprises a summary of one or more search results included in the initial set of search results. (Revach, [0016] the response can be a shortened summary of the top ranked search results, such as a shortened summary that is generated based on processing, using an LLM, each of the search results along with a summarization prompt (e.g., "generate a summary of [search results]").)

Regarding claim 17, Revach discloses all of the features with respect to claim 16 as outlined above. Claim 17 further recites: wherein the summary comprises information associated with negative treatment of at least one search result of the initial set of search results, information associated with fact patterns for at least one search result of the initial set of search results, information summarizing a portion of the initial set of search results, suggestions to expand a search based on the inputs, or a combination thereof. (Revach, [0016] the response can be a shortened summary of the top ranked search results (corresponding to "information summarizing a portion of the initial set of search results"), such as a shortened summary that is generated based on processing, using an LLM, each of the search results along with a summarization prompt (e.g., "generate a summary of [search results]").)

Claim 19 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zangrilli (US 2025/0117863 A1).

Regarding claim 19, Zangrilli discloses:

A method comprising: receiving, by one or more processors, a set of search criteria via a graphical user interface; (Zangrilli, [0017] A user may input into the prompt interface 34 an instruction requesting a verbose tax category description (corresponding to "a set of search criteria") … The instruction includes instruction text 40 indicating a tax category for which the verbose tax category description is requested, such as a tax category name, a tax category type, a product, and/or a jurisdiction, for example.)
providing, by the one or more processors, the set of search criteria or information derived from the set of search criteria as one or more prompts to one or more large language models (LLMs); (Zangrilli, [0018], e.g. the prompt generator 48 generates a prompt 50 for the GLM (corresponding to "one or more large language models (LLMs)") based on at least the matching source text data 46 and the instruction text 40 (corresponding to "a set of search criteria"); [0017] A user may input into the prompt interface 34 an instruction requesting a verbose tax category description … The instruction includes instruction text 40 indicating a tax category for which the verbose tax category description is requested, such as a tax category name, a tax category type, a product, and/or a jurisdiction, for example; [0011], e.g. GLMs with large model sizes such as these are referred to as large language models (LLMs).)

generating, by the one or more LLMs, textual content based on the one or more prompts, (Zangrilli, [0020] The prompt 50 is input to the GLM 36, which is configured to output a verbose tax category description 60. The verbose tax category description 60 is output and displayed as verbose tax category description text 60A in the prompt interface 34 of the GUI 38.)

wherein the textual content comprises information associated with one or more legal issues associated with the set of search criteria. (Zangrilli, [0036], e.g. storing the text data associated with the defined tax category in a legal definition database and identifying at least one governing body for the defined tax category … The text data may include at least one of jurisdictional rules, jurisdictional regulations, industry bodies, and industry standards for defining the tax category; [0020] The prompt 50 is input to the GLM 36, which is configured to output a verbose tax category description 60, displayed as verbose tax category description text 60A in the prompt interface 34 of the GUI 38.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Revach (US 2025/0117381) in view of Liao (US 2010/0312764 A1).

Regarding claim 2, Revach discloses all of the features with respect to claim 1 as outlined above. Claim 2 further recites: wherein the initial set of search results comprise search results corresponding to different result types. (Revach, Fig. 5A, item 502A1 "SUBQUERY 1 (DERIVED FROM INPUT) - RESULT A - RESULT B", item 502AN "SUBQUERY N (DERIVED FROM INPUT) - RESULT N"; [0008] generate a response to the NL based input based on the corresponding search results for the candidate subqueries of the subset, and cause the response to be rendered responsive to the NL based input; [0044] The search result engine 130, for each of the subqueries of the subset selected by subset selection engine 128, interacts with search system(s) 140 to obtain result(s) for the subquery.
For example, the search result engine 130 can obtain a top result, the top N results, or any result(s) having a quality score (and/or other score(s)) above a threshold; [0050] The search result engine 130 interacts with search system(s) 140 to obtain, for each of the subqueries of the subset, one or more corresponding results, and provides the collective results 205 to the response engine 132.)

However, Revach does not clearly disclose: different result types.

Liao discloses: different result types (Liao, [0052]-[0054]; [0047] Search-results region … presents a variety of types of information in response to a case law query; [0011] providing search results includes receiving a first signal indicative of a first set of document results from a search engine and a user query, generating attributes of each document in the first set of document results using feature values derived from a surrogate document, the surrogate document identifying at least one document and corresponding user actions and search queries, and ranking each document of the first set of document results using the feature values; [0033] Databases 110 includes a set of primary databases 112, a set of secondary databases 114, and a set of metadata databases 116. Primary databases 112, in the exemplary embodiment, include a caselaw database 1121 and a statutes database 1122, which respectively include judicial opinions and statutes from one or more local, state, federal, and/or international jurisdictions. Secondary databases 114 contain legal documents of secondary legal authority or, more generally, authorities subordinate to those offered by judicial or legislative authority in the primary database.)

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Revach with the teaching of Liao to identify a larger set of recommended cases (documents) (Liao, [0053]), to provide information-retrieval systems, such as those that provide legal documents or other related content (Liao, [0003]), and to improve information-retrieval systems for document retrieval that can effectively leverage query log data (Liao, [0008]).

Regarding claim 3, Revach in view of Liao discloses all of the features with respect to claim 2 as outlined above. Claim 3 further recites: restricting a number of search results included in the initial set of search results for each of the different result types. (Revach, Fig. 5A, item 502A1 "SUBQUERY 1 (DERIVED FROM INPUT) - RESULT A - RESULT B", item 502AN "SUBQUERY N (DERIVED FROM INPUT) - RESULT N" (corresponding to "different result types"); [0044] The search result engine 130, for each of the subqueries of the subset selected by subset selection engine 128, interacts with search system(s) 140 to obtain result(s) for the subquery. For example, the search result engine 130 can obtain a top result, the top N results, or any result(s) having a quality score (and/or other score(s)) above a threshold (corresponding to "restricting a number of search results"); [0017] For example, generating and executing of multiple subqueries can occur for a given NL based input based on the given NL based input having a length that is greater than a threshold, being submitted less than a threshold frequency, and/or having results that are of low quality; [0050] The search result engine 130 interacts with search system(s) 140 to obtain, for each of the subqueries of the subset, one or more corresponding results, and provides the collective results 205 to the response engine 132; [0095], e.g. generating the subquery generation prompt, generating the plurality of candidate subqueries, selecting the subset of the candidate subqueries, obtaining the corresponding search results, generating the response to the NL based input based on the corresponding search results, and/or causing the response to be rendered at the client device responsive to the NL based input are only performed in response to determining to generate and execute the multiple subqueries based on the NL based input.)

However, Revach does not clearly disclose: different result types.

Liao discloses: different result types (Liao, [0052], e.g. executing the query against the primary databases and identifying documents, such as case law documents, that satisfy the query criteria. A number of the starter set of documents, for example 2-5, based on relevance to the query are then selected as starter cases; [0033] Primary databases 112, in the exemplary embodiment, include a caselaw database 1121 and a statutes database 1122, which respectively include judicial opinions and statutes from one or more local, state, federal, and/or international jurisdictions; [0070], e.g. presenting search results. In the exemplary embodiment, this entails displaying a listing of one or more of the top ranked recommended case law documents in a results region, such as region 1382 in FIG. 1; [0096] First, the behavior module 128 selects a set of source documents Ds, typically the highest ranked results by the primary search module 124.)

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Revach with the teaching of Liao for the same reasons given for claim 2 (Liao, [0003], [0008], [0053]).

Claims 4-6 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Revach (US 2025/0117381) in view of Shao ("Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy") (hereinafter "Shao").

Regarding claim 4, Revach discloses all of the features with respect to claim 1 as outlined above. Revach does not clearly disclose: wherein the one or more prompts comprises the input and portions of the initial set of search results identified as being relevant to the set of search criteria.

However, Shao discloses: wherein the one or more prompts comprises the input and portions of the initial set of search results identified as being relevant to the set of search criteria.
(Shao, page 3, section 3 "Iterative Retrieval-Generation Synergy", 3.1 Overview: Given a question q and a retrieval corpus D = {d} where d is a paragraph, ITER-RETGEN repeats retrieval-generation for T iterations; in iteration t, we (1) leverage the generation y_{t-1} from the previous iteration, concatenated with q, to retrieve top-k paragraphs, and then (2) prompt an LLM M to produce an output y_t, with both the retrieved paragraphs (denoted as D_{y_{t-1}||q}) and q integrated into the prompt. Therefore, each iteration can be formulated as y_t = M(y_t | prompt(D_{y_{t-1}||q}, q)), for all 1 ≤ t ≤ T. The last output y_T will be produced as the final answer; page 4, section 3.4, e.g. Re-ranker: A re-ranker, parametrized by φ, outputs the probability of a paragraph being relevant to a query; we denote the probability as s_φ(q, d).)

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Revach with the teaching of Shao to improve relevance modeling by having large language models actively involved in retrieval, i.e., to improve retrieval with generation so that it can flexibly leverage parametric and non-parametric knowledge and is superior to or competitive with state-of-the-art retrieval-augmented baselines while causing fewer overheads of retrieval and generation, and to improve performance via generation-augmented retrieval adaptation (Shao, abstract).

Regarding claim 5, Revach in view of Shao discloses all of the features with respect to claim 4 as outlined above. Claim 5 further recites: generating the response via an iterative process. (Revach, [0042] When multiple prompts are provided for an NL based input, the subquery generation engine 126 can perform multiple iterations of processing, each using the LLM and a different one of the prompts, generating multiple of the candidate subqueries based on the LLM output at each iteration; [0048] The subquery generation engine 126 performs three iterations of processing (optionally in parallel), using the LLM, with each iteration processing a different one of the subquery generation prompts 202A, 202B, and 202C.)

Regarding claim 6, Revach in view of Shao discloses all of the features with respect to claim 5 as outlined above. Revach does not clearly disclose: wherein, during each iteration of the iterative process, a portion of the initial set of search results is presented to the one or more LLMs and an interim response is generated, and wherein the interim response and a next portion of the initial set of search results are provided as input to a next iteration of the iterative process until the response is output.

However, Shao discloses: wherein, during each iteration of the iterative process, (Shao, page 3, section 3 "Iterative Retrieval-Generation Synergy") a portion of the initial set of search results is presented to the one or more LLMs and an interim response is generated, and wherein the interim response and a next portion of the initial set of search results are provided as input to a next iteration of the iterative process until the response is output. (Shao, page 3, section 3.1 Overview: ITER-RETGEN repeats retrieval-generation for T iterations; in iteration t, we (1) leverage the generation y_{t-1} from the previous iteration, concatenated with q, to retrieve top-k paragraphs, and then (2) prompt an LLM M to produce an output y_t, with both the retrieved paragraphs (denoted as D_{y_{t-1}||q}) and q integrated into the prompt; each iteration can be formulated as y_t = M(y_t | prompt(D_{y_{t-1}||q}, q)), for all 1 ≤ t ≤ T; the last output y_T will be produced as the final answer.)

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Revach with the teaching of Shao for the same reasons given for claim 4 (Shao, abstract).

Regarding claim 9, Revach discloses all of the features with respect to claim 1 as outlined above. Revach does not clearly disclose: identifying portions of each search result in the initial set of search results relevant to the set of search criteria.

However, Shao discloses: identifying portions of each search result in the initial set of search results relevant to the set of search criteria. (Shao, page 3, section 3.1 Overview, as quoted above; page 4, section 3.4, e.g. Re-ranker: A re-ranker, parametrized by φ, outputs the probability of a paragraph being relevant to a query; we denote the probability as s_φ(q, d).)

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Revach with the teaching of Shao for the same reasons given for claim 4 (Shao, abstract).

Regarding claim 10, Revach in view of Shao discloses all of the features with respect to claim 9 as outlined above. Claim 10 further recites: ranking or re-ranking each portion of the initial set of search results (Revach, [0016] Implementations obtain search results for only subqueries of the selected subset, and generate a response, to the NL based input, based on those search results.
For example, top ranked search result(s) for each subquery can be obtained, and the response can be generated based on the top ranked search results for the subqueries. Revach does not clearly disclose: ranking or re-ranking each portion of the initial set of search results identified as relevant to the set of search criteria. However Shao discloses: ranking or re-ranking each portion of the initial set of search results identified as relevant to the set of search criteria. (Shao, page 3, section 3 Iterative Retrieval-Generation Synergy, 3.1 Overview Given a question q and a retrieval corpus D = {d} where d is a paragraph, ITER-RETGEN repeats retrieval-generation for T iterations; in iteration t, we (1) leverage the generation yt−1 from the previous iteration, concatenated with q, to retrieve top-k paragraphs, and then (2) prompt an LLM M to produce an output yt, with both the retrieved paragraphs (denoted as Dyt−1||q) and q integrated into the prompt. Therefore, each iteration can be formulated as follows: yt = M(yt|prompt(Dyt−1||q,q)), ∀1 ≤ t ≤ T The last output yt will be produced as the final answer; page 4, section 3.4 , e.g. Re-ranker A re-ranker, parametrized by ϕ, outputs the probability of a paragraph being relevant to a query; we denote the probability as sϕ(q, d). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Revach with the teaching of Shao to improve relevance modeling by having large language models actively involved in retrieval, i.e., to improve retrieval with generation that it can flexibly leverage parametric knowledge and non-parametric knowledge, and is superior to or competitive with state-of-the-art retrieval-augmented baselines while causing fewer overheads of retrieval and generation and also to improve performance via generation augmented retrieval adaptation, (Shao, abstract). Claim 7 is rejected under 35 U.S.C. 
103 as being unpatentable over Revach (US 2025/0117381) in view of Zangrilli (US 2025/0117863 A1).

Regarding claim 7, Revach discloses all of the features with respect to claim 1 as outlined above. Revach does not clearly disclose: outputting a question to the user, wherein the question is configured to obtain additional information related to the set of search criteria, and wherein the response is updated based on information received in response to the question. However, Zangrilli discloses: outputting a question to the user, wherein the question is configured to obtain additional information related to the set of search criteria, and wherein the response is updated based on information received in response to the question. (Zangrilli [0032], e.g., the GLM 28 displayed a previous response 52B to ask if there was a specific question regarding the previous prompt 52A; [0017], e.g., a user requesting the system to generate a verbose tax category description can revise or fine-tune the description through the use of multiple prompts in an interaction session, with later prompts building on or revising the output generated in response to earlier prompts.) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Revach with the teaching of Zangrilli to revise or fine-tune the description through the use of multiple prompts in an interaction session, with later prompts building on or revising the output generated in response to earlier prompts (Zangrilli, [0017]).

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Revach (US 2025/0117381) in view of Mukherjee (US 2024/0354436 A1).

Regarding claim 8, Revach discloses all of the features with respect to claim 1 as outlined above. Revach does not clearly disclose: identifying at least a portion of the initial set of search results based on outputs of a clustering algorithm.
However, Mukherjee discloses: identifying at least a portion of the initial set of search results based on outputs of a clustering algorithm. (Mukherjee [0036], e.g., the system may identify relevant portions of a set of documents based on the user query through chunking and vectorizing documents and executing similarity search on documents; [0041], e.g., the system may execute the similarity search using one of the cosine similarity search, approximate nearest neighbor (ANN) algorithms, k-nearest neighbors (KNN) method, locality sensitive hashing (LSH), range queries, or any other vector clustering and/or similarity search algorithms.) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Revach with the teaching of Mukherjee to advantageously generate a prompt for the LLMs using a user query and the portions of a set of documents that are more relevant or bear similarity to the user query, rather than including into the prompt the set of documents in its entirety, which might exceed a size limit on the prompt. (Mukherjee, [0013])

Regarding claim 18, Revach discloses all of the features with respect to claim 1 as outlined above. Revach does not clearly disclose: training the one or more LLMs. However, Mukherjee discloses: training the one or more LLMs. (Mukherjee, [0142] the document search system 102 may train one or more LLMs using the training data.) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Revach with the teaching of Mukherjee to calculate the probability of different word combinations based on the patterns learned during training (based on a set of text data from books, articles, websites, audio files, etc.). A language model may generate many combinations of one. (Mukherjee, [0059])

Claims 11-13 are rejected under 35 U.S.C.
103 as being unpatentable over Revach (US 2025/0117381) in view of Khosla (US 2025/0005058 A1).

Regarding claim 11, Revach discloses all of the features with respect to claim 1 as outlined above. Revach does not clearly disclose: evaluating an accuracy of the response to the set of search criteria. However, Khosla discloses: evaluating an accuracy of the response to the set of search criteria. (Khosla, fig. 3, item (8) DETERMINE IF ANSWER(S) ARE HALLUCINATED; [0014], e.g., the natural language question answer service can utilize a verifier to verify the answer generated by the LLM to ensure it was not generated in error (e.g., hallucinated). The verifier may utilize one or more modules to ensure the answer was not generated in error; [0066] At (8), the verifier component 108 determines if the answer was generated in error (e.g., hallucinated). As stated above, the verifier component 108 may look for textual overlap between an answer and retrieved passages, determine whether there is a contradiction between the answers and the retrieved passages, use head/tail/relational triples to confirm faithfulness, use membership inference attack techniques to confirm whether a question (or a similar one) is in a dataset, and/or a score of any of the four combined.) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Revach with the teaching of Khosla to determine whether the answer generated from the LLM is not generated in error in relation to the natural language question by using head, tail, and relation triples (Khosla, [0014]) and also to provide reference links and titles to the retrieved passages used by the LLM component (e.g., retrieved passages used as context to generate the answer), which may allow the submitter of the question to get more details on the referenced passages (Khosla, [0067]).
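For orientation, the "textual overlap" verifier signal Khosla describes can be sketched in a few lines. This is an illustrative sketch only, not the reference's implementation; the function names and the 0.5 threshold are invented for the example:

```python
def token_overlap(answer: str, passages: list[str]) -> float:
    """Fraction of answer tokens that appear in any retrieved passage.

    A crude proxy for the textual-overlap check: an answer sharing few
    tokens with its supporting passages is a candidate hallucination.
    """
    answer_tokens = set(answer.lower().split())
    passage_tokens: set[str] = set()
    for p in passages:
        passage_tokens.update(p.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & passage_tokens) / len(answer_tokens)


def looks_hallucinated(answer: str, passages: list[str],
                       threshold: float = 0.5) -> bool:
    # Flag answers whose overlap with the retrieved context falls below
    # an assumed threshold; a production verifier would combine this with
    # contradiction checks, relational triples, and similar signals.
    return token_overlap(answer, passages) < threshold
```

In practice such a lexical check would be only one of the several verifier modules the reference lists, combined into a single score.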
Regarding claim 12, Revach in view of Khosla discloses all of the features with respect to claim 11 as outlined above. Revach does not clearly disclose: enhancing the response based at least in part on the evaluating. However Khosla discloses: enhancing the response based at least in part on the evaluating. (Khosla [0068] At (11), the watermarking component 110 adds patterns to the answer to make the answer proprietary to the natural language question answering service 102 and verifiable against subsequent copying; fig. 3, item 8) DETERMINE IF ANSWER(S) ARE HALLUCINATED; [0014], e.g. the natural language question answer service can utilize a verifier to verify the answer generated by the LLM to ensure it was not generated in error ( e.g., hallucinated). The verifier may utilize one or more modules to ensure the answer was not generated in error;) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Revach with the teaching of Khosla to determine whether the answer generated from the LLM is not generated in error in relation to the natural language question by using head, tail, and relation triples (Khosla, [0014]) and also to provide reference links and titles to the retrieved passages used by the LLM component (e.g., retrieved passages used as context to generate the answer), which may allow the submitter of the question to get more details on the referenced passages, (Khosla, [0067]). Regarding claim 13, Revach in view of Khosla discloses all of the features with respect to claim 12 as outlined above. Revach does not clearly disclose: wherein enhancing the response comprises determining one or more authorities to cite in the response, detecting negative treatment of one or more results included in the initial set of search results, altering a format of the response, incorporating treatment information into the response, or a combination thereof. 
However, Khosla discloses: wherein enhancing the response comprises determining one or more authorities to cite in the response, detecting negative treatment of one or more results included in the initial set of search results, altering a format of the response, incorporating treatment information into the response, or a combination thereof. (Khosla, [0067] At (10), the attribution component 109 may provide references to the retrieved passages, inline citations to sentences of retrieved passages used in the answer, or provide similar questions to the natural language question. For example, the attribution component 109 may provide reference links and titles to the retrieved passages used by the LLM component 106 (e.g., retrieved passages used as context to generate the answer), which may allow the submitter of the question to get more details on the referenced passages; [0068] At (11), the watermarking component 110 adds patterns to the answer to make the answer proprietary to the natural language question answering service 102 and verifiable against subsequent copying.) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Revach with the teaching of Khosla to determine whether the answer generated from the LLM is not generated in error in relation to the natural language question by using head, tail, and relation triples (Khosla, [0014]) and also to provide reference links and titles to the retrieved passages used by the LLM component (e.g., retrieved passages used as context to generate the answer), which may allow the submitter of the question to get more details on the referenced passages (Khosla, [0067]).

Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Revach (US 2025/0117381) in view of Wang (US 2023/0245651 A1).

Regarding claim 14, Revach discloses all of the features with respect to claim 1 as outlined above.
Revach does not clearly disclose: analyzing the input to determine a suitability of the input for LLM content generation. However Wang discloses: analyzing the input to determine a suitability of the input for LLM content generation. (Wang, [0414] In FIG. 19, the AI system is shown evaluating the user's inputs and contextual information to determine if any additional information is required to improve the accuracy of understanding the most likely intent and objective 1900; [0416] Next, the AI system determines whether any additional information is needed 1904. If the AI system determines that the available contextual information is insufficient or the AI system is unable to determine the user's intent and objective with a reasonable level of confidence, it may request additional information again or provide alternative options for the user to choose from;) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Revach with the teaching of Wang to enable contextually relevant conversational interaction and also to determine an understanding of a most relevant intent and a most relevant objective which is validated by the AI system with the user until the user agrees. The validated most relevant intent and the most relevant objective is utilized to facilitate the user-centered and contextually relevant conversational interaction, (Wang, abstract). Regarding claim 15, Revach in view of Wang discloses all of the features with respect to claim 14 as outlined above. Revach does not clearly disclose: prompting the user for additional information based on the analyzing. However Wang discloses: prompting the user for additional information based on the analyzing. (Wang [0414] In FIG. 
19, the AI system is shown evaluating the user's inputs and contextual information to determine if any additional information is required to improve the accuracy of understanding the most likely intent and objective 1900; [0416] Next, the AI system determines whether any additional information is needed 1904. If the AI system determines that the available contextual information is insufficient or the AI system is unable to determine the user's intent and objective with a reasonable level of confidence, it may request additional information again or provide alternative options for the user to choose from; [0419] If the user responds to the AI system and provides additional information, the AI system retrieves the relevant information from the appropriate sources 1909 and integrates it with the available contextual information 1910.) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Revach with the teaching of Wang to enable contextually relevant conversational interaction and also to determine an understanding of a most relevant intent and a most relevant objective, which is validated by the AI system with the user until the user agrees. The validated most relevant intent and most relevant objective are utilized to facilitate the user-centered and contextually relevant conversational interaction (Wang, abstract).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Faezeh Forouharnejad, whose telephone number is (571) 270-7416. The examiner can normally be reached Monday through Friday. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sanjiv Shah, can be reached at (571) 272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/F.F./
Examiner, Art Unit 2166

/SANJIV SHAH/
Supervisory Patent Examiner, Art Unit 2166
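For readers parsing the §103 mapping above: the iterative retrieval-generation scheme the examiner repeatedly cites from Shao (ITER-RETGEN) reduces to a short loop in which each iteration's output, concatenated with the question, drives the next retrieval. The sketch below is illustrative only; `retrieve` and `generate` are stand-ins for a real retriever and LLM call, not functions from the cited reference:

```python
from typing import Callable


def iter_retgen(question: str,
                retrieve: Callable[[str, int], list[str]],
                generate: Callable[[list[str], str], str],
                iterations: int = 2,
                k: int = 5) -> str:
    """Sketch of the ITER-RETGEN loop: in iteration t, the previous
    generation y_{t-1} concatenated with question q drives retrieval of
    top-k paragraphs, then the LLM produces y_t from those paragraphs
    plus q. The last y_t is returned as the final answer."""
    answer = ""
    for _ in range(iterations):
        # Generation-augmented retrieval: prior output enriches the query.
        query = f"{answer} {question}".strip() if answer else question
        paragraphs = retrieve(query, k)
        answer = generate(paragraphs, question)
    return answer
```

With stub retrieval and generation functions, two iterations show how the first answer broadens the second retrieval, which is the "synergy" the rejection's motivation statements invoke.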

Prosecution Timeline

Oct 16, 2024
Application Filed
Feb 13, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12407645
SYSTEMS AND METHODS OF DATABASE INSTANCE CONTAINER DEPLOYMENT
2y 5m to grant Granted Sep 02, 2025
Patent 12298959
SYSTEMS AND METHODS FOR PROVIDING CUSTOM OBJECTS FOR A MULTI-TENANT PLATFORM WITH MICROSERVICES ARCHITECTURE
2y 5m to grant Granted May 13, 2025
Patent 12235877
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM
2y 5m to grant Granted Feb 25, 2025
Patent 12189600
DISTRIBUTING ROWS OF A TABLE IN A DISTRIBUTED DATABASE SYSTEM
2y 5m to grant Granted Jan 07, 2025
Patent 12153624
METHOD AND SYSTEM FOR IDEOGRAM CHARACTER ANALYSIS
2y 5m to grant Granted Nov 26, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+31.4%)
3y 11m
Median Time to Grant
Low
PTA Risk
Based on 104 resolved cases by this examiner. Grant probability derived from career allow rate.
