Prosecution Insights
Last updated: April 19, 2026
Application No. 18/773,183

GENERATIVE TEXT MODEL QUERY SYSTEM

Non-Final OA (§103, §DP)

Filed: Jul 15, 2024
Examiner: SAINT CYR, LEONARD
Art Unit: 2658
Tech Center: 2600 (Communications)
Assignee: Casetext Inc.
OA Round: 1 (Non-Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 77% (above average; 882 granted / 1144 resolved; +15.1% vs TC avg)
Interview Lift: +18.2% in resolved cases with an interview (strong)
Typical Timeline: 3y 1m average prosecution; 32 applications currently pending
Career History: 1176 total applications across all art units

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 39.1% (-0.9% vs TC avg)
§102: 28.0% (-12.0% vs TC avg)
§112: 2.2% (-37.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 1144 resolved cases.

Office Action

Rejections: §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7, 9-15, and 17-22 of U.S. Patent No. 12,159,119. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-20 of the instant application are similar in scope and content to the claims of the cited US patent. It would have been obvious to an artisan at the time the invention was made to use the teaching of claims 1-7, 9-15, and 17-22 of the '119 patent as a general teaching for generating prompts, to perform the method as claimed in the present invention.
The instant claims obviously encompass the claimed invention of the '119 patent and differ only in the method steps. To the extent that the instant claims are broader and therefore generic to the claimed invention of the '119 patent [species], In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993), states that a generic claim cannot be issued without a terminal disclaimer if a species claim has previously been claimed in a co-pending application. And since the structure is as recited, the method step is obtained and therefore obvious.

Here is a comparison between claim 11 of the instant application (18/773,183) and claim 1 of the cited patent (12,159,119):

Instant Application: 1. A method comprising:
'119 Patent: A method for generating prompts for input to a large language model, the method comprising:
Comparison: Similar

Instant Application: receiving, from a user device, a natural language prompt and an indication of a data store comprising a plurality of input documents associated with the natural language prompt;
'119 Patent: receiving a request from a client machine to generate a novel text portion providing a designated written response to factual assertions in an input document; dividing the input document into a plurality of input text portions each having a respective number of words below a designated chunk threshold;
Comparison: Similar

Instant Application: generating a first text generation prompt based on the natural language prompt, the plurality of input documents, and a first text generation prompt template of a plurality of stored text generation prompt templates;
'119 Patent: determining via a processor a first plurality of text generation prompts by combining the plurality of input text portions with a first text generation prompt template including a first fillable portion and a first natural language instruction to the large language model to identify factual assertions in an input text portion, each of the first plurality of text generation prompts including a respective input text portion replacing the first fillable portion of the first text generation prompt template;
Comparison: Similar

Instant Application: transmitting the first text generation prompt and the indication of the data store comprising the plurality of input documents to a remote text generation modeling system;
'119 Patent: transmitting a first one or more text generation prompt messages including the first plurality of text generation prompts to the large language model for text generation via a communication interface;
Comparison: Similar

Instant Application: receiving a first text generation prompt response message from the remote text generation modeling system, the first text generation prompt response message comprising first novel text portions generated by the remote text generation modeling system; identifying one or more factual assertions in the first text generation prompt response message;
'119 Patent: receiving a first plurality of text generation prompt response messages from the large language model for text generation via the communication interface, each of the first plurality of text generation prompt response messages identifying a respective one or more factual assertions in the respective input text portion; receiving user input including a plurality of response instructions corresponding to the one or more factual assertions identified in the first plurality of text generation prompt response messages, the plurality of response instructions indicating whether to admit or deny the corresponding factual assertions;
Comparison: Similar

Instant Application: generating a second text generation prompt based on the first text generation prompt response message, the one or more factual assertions, and a second text generation prompt template of the plurality of stored text generation prompt templates, the second text generation prompt comprising natural language instructions for the remote text generation modeling system to compare the one or more factual assertions to the plurality of input documents;
'119 Patent: determining via the processor a second plurality of text generation prompts based on a second text generation prompt template including a second fillable portion and a second natural language instruction to the large language model to generate a written response to one or more factual assertions identified in the first plurality of text generation prompt response messages, each of the second plurality of text generation prompts including a respective one or more of the factual assertions replacing the second fillable portion of the second text generation prompt template and indicating a respective one or more response instructions to the large language model indicating whether to admit or deny the corresponding factual assertions;
Comparison: Similar

Instant Application: receiving a second text generation prompt response message from the remote text generation modeling system comprising second novel text portions generated by the remote text generation modeling system; and
'119 Patent: transmitting a second one or more text generation prompt messages including the second plurality of text generation prompts to the large language model for factual assertion text generation via the communication interface; receiving a second plurality of text generation prompt response messages from the large language model for text generation via the communication interface, each of the second plurality of text generation prompt response messages including a respective written response admitting or denying the respective one or more factual assertions determined based on the response instructions;
Comparison: Similar

Instant Application: identifying at least one factual assertion of the one or more factual assertions as a hallucination generated by the remote text generation modeling system based on the second text generation prompt response message.
'119 Patent: determining a response message by combining the respective written responses admitting or denying the respective one or more factual assertions; and transmitting the response message to the client machine.
Comparison: Similar

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8, 10, 11, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Araki (US PAP 2023/0316001) in view of Tunstall-Pedoe et al. (US PAP 2023/0316006).

As per claims 1 and 11, Araki teaches a system comprising: one or more processors; and a non-transitory memory in communication with the one or more processors, the non-transitory memory comprising a plurality of stored text generation prompt templates and instructions stored thereon, that when executed by the one or more processors, are configured to cause the system to (paragraphs 4-6):

receive, from a user device, a natural language prompt and an indication of a data store comprising a plurality of input documents associated with the natural language prompt ("the entity type generator 206 is configured to leverage a heuristics-based process to extract high-level concepts of target entities from first sentences in one or more electronic documents 214, such as a corpus of Wikipedia articles. Such heuristic-based processes are beneficial when an electronic source (e.g., Wikipedia) includes documents or articles with answer candidates and entity types."; paragraphs 4-6, 33);

generate a first text generation prompt based on the natural language prompt, the plurality of input documents, and a first text generation prompt template of the plurality of stored text generation prompt templates ("As one example, for instance, the prompt may be generated by the processing system 110 via a template with two distinct slots: an input slot [X] and an answer slot [Y]. More specifically, in FIG. 3, as a non-limiting example, the prompt was generated using a template (e.g., x.sub.template='[X] was founded in [Y].'). In this example, the prompt is generated when the input slot [X] is filled with an input entity x.sub.entity (e.g., 'Robert Bosch GmbH') such that the x.sub.template is instantiated into x.sub.prompt='Robert Bosch GmbH was founded in [Y].' FIG. 3 provides an example of a prompt that may be obtained as input data by the KRETC system 200."; paragraphs 4-6, 22-24);

transmit the first text generation prompt and the indication of the data store comprising the plurality of input documents to a remote text generation modeling system ("the knowledge retrieval framework 130 includes a number of software components, such as a candidate generator 202, a sub-prompt generator 204, an entity type generator 206, and entity clarification interface 208… the machine learning system 140 includes at least one pre-trained language model, any suitable artificial neural network model, or any number and combination thereof. For instance, as a non-limiting example, the pre-trained language model may include BERT, GPT-3"; paragraphs 19-23);

receive a first text generation prompt response message from the remote text generation modeling system, the first text generation prompt response message comprising first novel text portions generated by the remote text generation modeling system ("the machine learning system 140 is configured to generate output that includes at least four answer candidates ('Germany', 'Stuttgart', '1886', and 'the 19th century') along with their corresponding confidence scores (−0.89, −1.57, −2.45, and −3.12). In this non-limiting example, the answer candidate of 'Germany' is considered to have the highest confidence (−0.89) while the answer candidate of 'the 19th century' is considered to have the lowest confidence (−3.12) among the four answer candidates shown in FIG. 3. In this regard, the candidate generator 202 is configured to perform the confidence computation and select a set of candidates for the prompt (e.g., 'Robert Bosch GmbH was founded in [Y],' where [Y] is the answer slot of the prompt). As shown in FIG. 3, the set of candidates includes four answer candidates, which are selected from a collection of answer candidates that are generated by the machine learning system 140."; paragraphs 24-27);

identify one or more factual assertions in the first text generation prompt response message ("the entity type generator 206 is configured to perform factual knowledge retrieval on each sub-prompt to obtain the entity types of the answer candidates."; paragraphs 14, 15, 29-33);

generate a second text generation prompt based on the first text generation prompt response message, the one or more factual assertions, and a second text generation prompt template of the plurality of stored text generation prompt templates, the second text generation prompt comprising natural language instructions for the remote text generation modeling system to compare the one or more factual assertions to the plurality of input documents ("generating a set of second prompts that are based on the set of candidates. The method includes generating a set of entity types using the set of second prompts. The set of entity types categorize the set of candidates… Upon generating each sub-prompt for each answer candidate within the set of candidates, the entity type generator 206 is configured to perform factual knowledge retrieval on each sub-prompt to obtain the entity types of the answer candidates."; paragraphs 4-6, 29-33);

receive a second text generation prompt response message from the remote text generation modeling system comprising second novel text portions generated by the remote text generation modeling system ("the machine learning system 200 is configured to generate a number of valid answer candidates for the prompt (e.g., 'Robert Bosch GmbH was founded in ______'). In this regard, the entity clarification interface 208 is advantageous in enabling a user 300 to select from among a set of answer candidates via selecting a desired entity type. This entity type selection is advantageous as the answer candidate with the highest confidence score may not provide the user with the desired entity data of the desired scope."; paragraphs 29-35).

However, Araki does not specifically teach identifying at least one factual assertion of the one or more factual assertions as a hallucination generated by the remote text generation modeling system based on the second text generation prompt response message. Tunstall-Pedoe et al. disclose that the fact checking could be text generated by an LLM in a chat or other application where the purpose of the fact checking was to minimise hallucination or otherwise incorrect information given to the user, and that LLMs frequently hallucinate answers to questions and produce facts in text they generate that are not grounded in reality (paragraphs 770, 845). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine hallucination in factual assertions as taught by Tunstall-Pedoe et al. in Araki, because that would help improve the output, such as continuation text output, generated by the LLM in response to a prompt (paragraph 22).

As per claims 8 and 18, Araki in view of Tunstall-Pedoe et al. further disclose that identifying one or more factual assertions in the first text generation prompt response message comprises including within the first text generation prompt natural language instructions for the remote text generation modeling system to identify each factual assertion within the first text generation prompt response message ("Extracting from the natural language a collection of one or more factual assertions asserted within the natural language [0775] Checking the one or more factual assertions for factual accuracy."; Araki, paragraphs 29-35; Tunstall-Pedoe et al., paragraphs 773-782).

As per claims 10 and 20, Araki in view of Tunstall-Pedoe et al. further disclose selecting a relevant text generation prompt template; and modifying the relevant text generation prompt template to include portions of the natural language prompt ("in FIG. 3, the sub-prompt generator 204 includes a sub-prompt template 210A, which is defined as z.sub.template=[Y] is a [Z] and applied to each of the answer candidates. More specifically, the sub-prompt generator 204 fills the input slot [Y] with an answer candidate to create each sub-prompt, where [Z] represents the output slot (or the type slot) that contains the entity type. The KRETC system 200 is not limited to using z.sub.template=[Y] is a [Z] as the sub-prompt template."; Araki, paragraphs 28-35; Tunstall-Pedoe et al., paragraphs 50, 773-782).

Allowable Subject Matter

Claims 2-7, 9, 12-17, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and upon filing of a terminal disclaimer.
The following is a statement of reasons for the indication of allowable subject matter: As to claims 2-7, 9, 12-17, and 19, the prior art made of record does not teach or suggest generating a third text generation prompt comprising instructions for the remote text generation modeling system to correct the at least one factual assertion identified as the hallucination; transmitting the third text generation prompt to the remote text generation modeling system; receiving a third text generation prompt response message from the remote text generation modeling system comprising third novel text portions generated by the remote text generation modeling system; parsing the first text generation prompt response message, the second text generation prompt response message, and the third text generation prompt response message to generate a plurality of answers corresponding with a plurality of natural language questions; and transmitting an output message comprising the plurality of answers to the user device.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Rhodes et al. teach Methods for Improving Natural Language Processing with Enhanced Automated Screening for Automated Generation of a Clinical Summarization Report and Devices Thereof. Ferrucci et al. teach Knowledge Acquisition Tool. Boytsov et al. teach Interaction Layer Neural Network for Search, Retrieval, and Ranking. Arnold et al. teach Generation and Management of an Artificial Intelligence (AI) Model Documentation Throughout Its Life Cycle.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEONARD SAINT-CYR, whose telephone number is (571) 272-4247. The examiner can normally be reached Monday-Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LEONARD SAINT-CYR/
Primary Examiner, Art Unit 2658
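The claim chart above describes a two-stage prompting pipeline: divide an input document into portions below a chunk threshold, fill a first template so the LLM identifies factual assertions in each portion, then fill a second template with each assertion plus the user's admit/deny instruction to generate written responses. A minimal sketch of that flow, with a stubbed model call standing in for the remote text generation system (all names, templates, and the threshold value here are illustrative, not taken from the application):

```python
# Illustrative sketch of the two-stage prompt pipeline described in the
# claim chart. The LLM call is stubbed; in practice it would be a request
# to a remote text generation modeling system.

CHUNK_THRESHOLD = 50  # max words per input text portion (hypothetical value)

ASSERTION_TEMPLATE = (
    "Identify every factual assertion in the following text portion:\n{portion}"
)
RESPONSE_TEMPLATE = (
    "Draft a written response that will {instruction} this assertion:\n{assertion}"
)

def chunk(document: str, limit: int = CHUNK_THRESHOLD) -> list[str]:
    """Divide a document into portions each at or below the word threshold."""
    words = document.split()
    return [" ".join(words[i:i + limit]) for i in range(0, len(words), limit)]

def call_llm(prompt: str) -> str:
    """Stub for the remote model; echoes the prompt's last line for demo."""
    return prompt.splitlines()[-1]

def respond_to_document(document: str, instructions: dict[int, str]) -> list[str]:
    # Stage 1: fill the first template with each chunk, collecting the
    # factual assertions the model identifies in each portion.
    assertions = [call_llm(ASSERTION_TEMPLATE.format(portion=p))
                  for p in chunk(document)]
    # Stage 2: fill the second template with each assertion plus the user's
    # admit/deny instruction, collecting the written responses.
    return [call_llm(RESPONSE_TEMPLATE.format(
                instruction=instructions.get(i, "deny"), assertion=a))
            for i, a in enumerate(assertions)]
```

The sketch deliberately mirrors the '119 claim language (fillable portions replaced per chunk, response instructions keyed to assertions); the hallucination-identification step of the instant claims would be a further comparison of each assertion against the source documents.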

Prosecution Timeline

Jul 15, 2024
Application Filed
Jan 31, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603100
SYSTEM AND METHOD FOR OPTIMIZED AUDIO MIXING
2y 5m to grant Granted Apr 14, 2026
Patent 12597415
VOICE RECOGNITION GRAMMAR SELECTION BASED ON CONTEXT
2y 5m to grant Granted Apr 07, 2026
Patent 12592227
DIALOG UNDERSTANDING DEVICE AND DIALOG UNDERSTANDING METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12591765
SYSTEMS AND METHODS FOR BUILDING A CUSTOMIZED GENERATIVE ARTIFICIAL INTELLIGENT PLATFORM
2y 5m to grant Granted Mar 31, 2026
Patent 12585884
DIALOGUE APPARATUS, DIALOGUE METHOD, AND PROGRAM
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 95% (+18.2%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 1144 resolved cases by this examiner. Grant probability derived from career allow rate.
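The projection figures above follow directly from the examiner's career record; a quick sanity check of the arithmetic (assuming, as the page implies, that the "with interview" figure is simply the base rate plus the reported lift):

```python
# Sanity check of the projection figures shown on this page.
granted, resolved = 882, 1144           # examiner's career record
career_allow_rate = granted / resolved  # shown on the page as 77%
interview_lift = 0.182                  # reported +18.2% interview lift
with_interview = career_allow_rate + interview_lift  # shown as 95%

print(f"{career_allow_rate:.1%}")  # 77.1%
print(f"{with_interview:.1%}")     # 95.3%
```

Both values round to the 77% and 95% displayed in the projections.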
