Prosecution Insights
Last updated: April 19, 2026
Application No. 18/408,320

DETECTION OF HALLUCINATIONS IN LARGE LANGUAGE MODEL RESPONSES

Final Rejection — §101, §102, §103
Filed: Jan 09, 2024
Examiner: Serraguard, Sean Erin
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% — above average (92 granted / 134 resolved; +6.7% vs TC avg)
Interview Lift: +33.6% for resolved cases with interview — strong
Avg Prosecution: 3y 2m typical timeline (43 applications currently pending)
Total Applications: 177 across all art units

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 134 resolved cases.

Office Action

§101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. All objections/rejections not mentioned in this Office Action have been withdrawn by the Examiner.

Status of the Claims

Prior to entry of the amendment(s) and/or consideration of the argument(s), the status of the claims is as follows. Claim(s) 1-18 is/are pending. Claim(s) 1-18 is/are rejected under 35 U.S.C. §101 as being directed to an abstract idea without significantly more. Claim(s) 1-4, 6-9, 11-12, and 18 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zhou (U.S. Pat. App. Pub. No. 2025/0061286, hereinafter Zhou). Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou as applied to claim 1 above, and further in view of Cunningham (U.S. Pat. App. Pub. No. 2024/0394600, hereinafter Cunningham). Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou as applied to claim 1 above, and further in view of Bright (U.S. Pat. App. Pub. No. 2025/0139375, hereinafter Bright). Claim(s) 13-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Zhang (U.S. Pat. App. Pub. No. 2025/0077777, hereinafter Zhang). Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou and Zhang as applied to claim 13 above, and further in view of Cunningham.

Response to Amendments

Applicant’s amendment filed on 23 December 2025 has been entered. In view of the amendment to the claim(s), the amendment of claim(s) 1, 4, and 18; the cancellation of claim(s) 3 and 13-17; and the addition of claim(s) 19-25 have been acknowledged and entered. After entry of the amendments, claim(s) 1-2, 4-12, and 18-25 remain pending. The rejection(s) of claim(s) 1-18 under 35 U.S.C. §101 is/are withdrawn. In view of the amendment to claim(s) 1, 4, and 18 and the cancellation of claim(s) 3 and 13-17, the rejection of claims 3 and 13-17 under 35 U.S.C. §102 and 103 is withdrawn. The rejection(s) of claim(s) 1-2, 4-12, and 18 under 35 U.S.C. §102 and 103 is/are maintained, as modified in response to amendment, for the reasons provided in the action below. In light of the newly added claims, new grounds for rejection under 35 U.S.C. §102 and 35 U.S.C. §103 are provided in the action below.

Response to Arguments

Applicant’s arguments regarding the prior art rejections under 35 U.S.C. §102/103, see pages 7-9 of the Response to the Non-Final Office Action dated 01 October 2025, which was received on 23 December 2025 (hereinafter Response and Office Action, respectively), have been fully considered. As Applicant has amended independent claim(s) 1 and 18 to incorporate the limitations of claim(s) 3, the rejections of claim(s) 1 and 18 have been amended to incorporate the rejection of the respective limitations of claim(s) 3, as appropriate. With respect to the rejection(s) of claim(s) 1 and 18 under 35 U.S.C. §102(a)(2) as being anticipated by Zhou, applicant asserts that Zhou “fails to disclose ‘modifying the NL based input’ that is received ‘to generate a modified NL based input’, and then ‘processing the modified NL based input ... to generate the second LLM response’ as set forth in the amended independent claims.” (Response, pg. 8, emphasis added by applicant). Applicant’s arguments are not persuasive.
More specifically, applicant submits that “deleting the hallucinated content from the natural language answer” or “retriev[ing] modified natural language context information” in relied-upon paragraph [0062] of Zhou fails to disclose “modifying the NL based input” that is received “to generate a modified NL based input”, and then “processing the modified NL based input ... to generate the second LLM response” as set forth in the amended independent claims. However, this argument is not persuasive. Contrary to applicant’s assertions above, Zhou discloses “modifying the NL based input… to generate a modified NL based input” based on the broadest reasonable interpretation of “NL based input” in the context of the Instant Application.

In determining the broadest reasonable interpretation of the phrase “NL based input”, it is noted that the specification fails to expressly define the phrase. Instead, the specification provides numerous examples of possible embodiments, and said examples give significant breadth to “NL based input”. For example, as understood in the context of the specification, “an NL based input described herein can be a query for an NL response that is formulated based on user input provided by a user of the client device” (Instant Application, [0027]). Likewise, an NL based input can be “a prompt for NL content that is formulated based on user input provided by a user of the client device”. (Id.) In further examples and embodiments, the NL based input may refer to the original input from the user with no processing (e.g., “a typed query that is typed via a physical or virtual keyboard”), modified input (e.g., “a prompt for NL content that is formulated based on user input provided by a user of the client device”), or even a selection input (e.g., “a suggested prompt that is selected via a touch screen or a mouse of the client device 110”). (Id.) Therefore, in light of the above and the ordinary meaning of the words as used in the general art, the NL based input as presented in claim 1 is understood as an input of some kind, such as a prompt, “formulated” based on a “user input provided by a user”, where user input is not limited to any particular medium or expression (e.g., selection, images, spoken/typed text, etc.). (Id.)

In the cited paragraphs of Zhou, Zhou discloses reperforming a prompt construction based on a detected hallucination for the original prompt, where the original prompt is generated based on the “NL based input” and a first information retrieval technique (e.g., the information retrieval technique to which the “better (likely more computationally expensive) information retrieval technique” is being compared) was applied to generate context for the original prompt. (Zhou, [0062]-[0063]). The original prompt construction is further explained in Zhou at [0021], where “the QA system constructs a prompt 10… that includes both the natural language question {the NL based input} and the natural language context information.” (Zhou, [0021]). Thus, when Zhou “switches to [the] better… information retrieval technique and re-runs the information retrieval [and] prompt construction,” a new prompt is generated from “the natural language question {the same NL based input} and the modified natural language context information.” “Modifying” in this context is understood as adding to, subtracting from, or otherwise changing a portion of the NL based input.
As each of the prompt constructions (e.g., the prompt construction in the original instance and the second prompt construction after the detection of a hallucination) adds context and reformulates the “NL based input” into a prompt, which both changes and adds to the NL based input, the NL based input is modified both in the original “prompt construction” and in the “re-run” of the “prompt construction”, each resulting in a separate modification of the NL based input, and each of these separate modified NL based inputs is subjected to “LLM-based answer generation.” (Zhou, [0062]-[0063]). Therefore, Zhou discloses both “modifying the NL based input… to generate a modified NL based input” and “processing the modified NL based input ... to generate the second LLM response”. The rejection is therefore maintained in light of the above arguments.

Claims 3 and 13-17 are cancelled in this response. Therefore, the rejection of claims 3 and 13-17 is rendered moot.

Applicant further argues that the rejection(s) of dependent claims 2 and 4-12 should be withdrawn for at least the same reasons as independent claims 1 and 18. Applicant’s arguments in light of the amended claims are not persuasive for the same reasons as for claims 1 and 18. As such, the rejections of claims 2 and 4-12 under 35 U.S.C. §102 and 35 U.S.C. §103 are maintained, as modified in response to the amendments. However, in response to the newly added claims, new ground(s) of rejection under 35 U.S.C. §102 and 35 U.S.C. §103 are made in light of combinations of Zhou, Cunningham, and Bright. The Applicant has not provided any further statement and, therefore, the Examiner directs the Applicant to the below rationale.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-2, 4, 6-9, 11-12, 18-20, and 22-25 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zhou.
Regarding claim 1, Zhou discloses A method implemented by one or more processors, the method comprising (Systems and methods described with reference to “handling hallucinated content” from an LLM; Zhou, ¶ [0037]): receiving natural language (NL) based input associated with a client device (“the processor 112 receives a natural language question, e.g., from a user” where the processor “operates the display screen 138 to display a graphical user interface on the display screen 138, which includes a question prompt via which the user can type the natural language question in the form of text using a keyboard” or “speak[ing] the natural language question into the microphone”; Zhou, ¶ [0038]); generating, based on processing the NL based input using a large language model (LLM), a first LLM response (The system further includes “generating a natural language answer {generating…a first LLM response} using a language model based on the natural language question”; Zhou, ¶ [0043]); determining, based on processing the first LLM response, whether the first LLM response contains at least one hallucination (The system “continues with detecting hallucinated content in the natural language answer” which can include detecting hallucinations “by comparing sentences and/or keywords in the natural language answer with sentences and words in the previously retrieved natural language context information”; Zhou, ¶ [0045]); responsive to determining that the first LLM response contains at least one hallucination, generating a second LLM response (“the method 200 continues with generating a modified the natural language answer that reduces the hallucinated content in the natural language answer”; Zhou, ¶ [0062]) based on processing at least the NL based input (generating the modified answer can include rerunning “the information retrieval, prompt construction, and LLM-based answer generation” processes, which can include an alternative “information retrieval technique”; Zhou, ¶ [0063]), wherein generating the second LLM response comprises: modifying the NL based input to generate a modified NL based input (In some examples, after detecting hallucinations in the natural language answer, the system performs a different “information retrieval technique and re-runs the information retrieval, prompt construction, and LLM-based answer generation,” where the “prompt construction” based on the different “information retrieval technique” is a modified NL based input.; Zhou, ¶ [0062]-[0063]); and processing the modified NL based input, using the LLM or an alternative LLM, to generate the second LLM response (The system performs “LLM-based answer generation” using the “prompt construction” based on the different “information retrieval technique” {modified NL based input} using the LLM to generate the modified natural language answer.; Zhou, ¶ [0063]); determining whether the second LLM response contains at least one hallucination (The process can be repeated until a desired outcome is achieved, including “repeated until the modified natural language answer includes less than a threshold amount of hallucinated content”, “repeated until no hallucination is detected in the generated answer”, “or until no further information retrieval techniques result a reduced amount of the hallucinated content in the modified natural language answer,” where the procedure being repeated includes further detection of hallucinations in the modified natural language answer {determining whether the second LLM response contains at least one hallucination}.; Zhou, ¶ [0063], [0067]); and responsive to determining that the second LLM response does not contain at least one hallucination (Relying on the embodiment of “repeated until no hallucination is detected in the generated answer,” the system can repeat the hallucination detection and mitigation steps using the modified natural language answer, and each modified natural language answer thereafter, until the amount of hallucinated content is “no hallucination”; Zhou, ¶ [0063], [0067]), causing the second LLM response to be rendered at the client device (once the content is free of hallucinations, “the method 200 continues with outputting the modified natural language answer” using “at least one user interface device... to output the final modified natural language answer to a user” where “a display screen is operated to display the final modified natural language answer in graphical form”; Zhou, ¶ [0072]).

Regarding claim 2, Zhou discloses wherein the NL based input comprises a request for the LLM to perform a summarization task (Zhou discloses that a potential user input can include a summarization task, in that “LLMs enable systems to better understand a user’s input” such as “in summarization”; Zhou, ¶ [0003]).

Regarding claim 4, Zhou discloses further comprising: determining a type of hallucination contained in the first LLM response (Zhou discloses “the modified natural language context information can be generated in a manner depending on a hallucination level of the natural language answer,” and indicates the detection of “mildly hallucinated” responses, which are a first type of hallucination, and “heavily hallucinated” responses, a second type of hallucination.; Zhou, ¶ [0063]), wherein modifying the NL based input is based on the type of hallucination (“If the answer is only mildly hallucinated, the processor 112 deletes the detected hallucinated sentences from the answer. However, if the answer is heavily hallucinated,...the processor 112 switches to a better (likely more computationally expensive) information retrieval technique and re-runs the information retrieval, prompt construction, and LLM-based answer generation,” which is a modification of the construction of the prompt {NL based input} that is based on the type of hallucination.; Zhou, ¶ [0063]).

Regarding claim 6, Zhou discloses wherein causing the second LLM response to be rendered at the client device comprises transmitting data to the client device (“the processor 112 of the server 110 operates the network communication module 118 to transmit the final modified natural language answer to the client device 130.”; Zhou, ¶ [0073]) that is operable for causing the client device to render the second LLM response (“The processor 132 of the client device 130 operates the transceivers 136 to receive the final modified natural language answer from the server 110. The processor 132 operates at least one output device to perceptibly output the final modified natural language answer to the user.”; Zhou, ¶ [0073]).
Regarding claim 7, Zhou discloses wherein determining whether the first LLM response contains at least one hallucination is performed without using the LLM (Zhou discloses a “sentence similarity-based hallucination detection technique” where “for each sentence in the natural language answer (sen_A), for each sentence in the natural language context information (denoted as sen_C), the processor 112 determines an embedding similarity using the sentence embedding-based similarity calculation and a word pattern overlap rate using the pattern overlapping rate-based similarity calculation,” which does not require and is not described as being performed in the context of the LLM.; Zhou, ¶ [0056]).

Regarding claim 8, Zhou discloses wherein determining whether the first LLM response contains at least one hallucination comprises comparing the first LLM response to the NL based input (The “sentence similarity-based hallucination detection technique” determines a “sentence embedding-based similarity calculation and a word pattern overlap rate” using keywords and embeddings derived from “each sentence in the natural language answer (sen_A)” and “each sentence in the natural language context information (denoted as sen_C),” where “natural language context information” is part of the NL based input.; Zhou, ¶ [0056]).

Regarding claim 9, Zhou discloses further comprising: determining a length of the first LLM response (In another embodiment, “the processor 112 first determines an overlap length based on determined mapping and/or optimal path as a sum of overlapping words between the respective sentence in the natural language answer and the respective in the natural language context information.”; Zhou, ¶ [0053]), wherein determining whether the first LLM response contains at least one hallucination is based at least in part on the NL based input and the length of the first LLM response (“the processor 112 determines the word pattern overlap rate by dividing the respective overlap length by the number of words in a shorter sentence of the respective sentence in the natural language answer and the respective in the natural language context information”; Zhou, ¶ [0053]-[0054]).

Regarding claim 11, Zhou discloses wherein generating the first LLM response comprises: transmitting instructions to an LLM frontend (“the processor 132 of the client device 130 executes program instructions of the question answering system application 142 to receive the natural language question from the user” and constructs “a natural language prompt based on the natural language question and the natural language context information, that is designed to solicit an answer from the language model that is constrained based on the natural language context information” and “the natural language prompt” is provided “to the language model as input”, where the portion of the LLM which receives the natural language prompt is the front end; Zhou, ¶ [0038], [0044]), the instructions comprising the NL based input (The natural language prompt includes the NL based input; Zhou, ¶ [0044]-[0045]); and receiving, from the LLM frontend, the first LLM response (The system then “generates the natural language answer by providing the natural language prompt to the language model as input” where the system further “detects hallucinated content in the natural language answer based on previously retrieved natural language context information... by comparing sentences and/or keywords in the natural language answer with sentences and words in the previously retrieved natural language context information,” which occurs after receipt of the answer from the LLM, but prior to the output being received by the user. Thus, all LLM responses are received from an LLM frontend (e.g., the portion of the system which receives the natural language prompt and/or detects hallucinations).; Zhou, ¶ [0044]-[0045]).

Regarding claim 12, Zhou discloses further comprising: responsive to determining that the second LLM response contains at least one hallucination, generating a third LLM response (The process can be repeated until a desired outcome is achieved, including “repeated until the modified natural language answer includes less than a threshold amount of hallucinated content”, “repeated until no hallucination is detected in the generated answer”, “or until no further information retrieval techniques result a reduced amount of the hallucinated content in the modified natural language answer,” where the procedure being repeated includes both the generation of further modified natural language answers and further detection of hallucinations in said further modified natural language answers {determining whether the second LLM response contains at least one hallucination}.; Zhou, ¶ [0063], [0067]); determining whether the third LLM response contains at least one hallucination (Relying on the embodiment of “repeated until no hallucination is detected in the generated answer,” the system can repeat the hallucination detection and mitigation steps using the further modified natural language answer, and each modified natural language answer thereafter, until the amount of hallucinated content is “no hallucination”; Zhou, ¶ [0063], [0067]); and responsive to determining that the third LLM response does not contain at least one hallucination, causing the third LLM response to be rendered at the client device (once the content is free of hallucinations, “the method 200 continues with outputting the [further] modified natural language answer” using “at least one user interface device... to output the final modified natural language answer to a user” where “a display screen is operated to display the final modified natural language answer in graphical form” (embodiments also include rendering in audio form); Zhou, ¶ [0072]).

Regarding claim 18, Zhou discloses A system comprising (Systems and methods described with reference to “handling hallucinated content” from an LLM; Zhou, ¶ [0037]): at least one processor; and memory (Zhou discloses an exemplary system comprising a “server 110” which “includes at least one processor 112, memory 114,”; Zhou, ¶ [0025], [0037]) storing instructions that, when executed, cause the at least one processor to be operable to (“The memory 114 is configured to store program instructions that, when executed by the processor 112, enable the server 110 to provide the features, functionality, characteristics and/or the like as described herein.”; Zhou, ¶ [0029], [0037]): receive natural language (NL) based input associated with a client device (“the processor 112 receives a natural language question, e.g., from a user” where the processor “operates the display screen 138 to display a graphical user interface on the display screen 138, which includes a question prompt via which the user can type the natural language question in the form of text using a keyboard” or “speak[ing] the natural language question into the microphone”; Zhou, ¶ [0038]); generate, based on processing the NL based input using a large language model (LLM), a first LLM response (The system further includes “generating a natural language answer {generating…a first LLM response} using a language model based on the natural language question”; Zhou, ¶ [0043]); determine, based on processing the first LLM response, whether the first LLM response contains at least one hallucination (The system “continues with detecting hallucinated content in the natural language answer” which can include detecting hallucinations “by comparing sentences and/or keywords in the natural language answer with sentences and words in the previously retrieved natural language context information”; Zhou, ¶ [0045]); responsive to determining that the first LLM response contains at least one hallucination, generate a second LLM response (“the method 200 continues with generating a modified the natural language answer that reduces the hallucinated content in the natural language answer”; Zhou, ¶ [0062]) based on processing at least the NL based input (generating the modified answer can include rerunning “the information retrieval, prompt construction, and LLM-based answer generation” processes, which can include an alternative “information retrieval technique”; Zhou, ¶ [0063]), wherein the instructions to generate the second LLM response comprise instructions to: modify the NL based input to generate a modified NL based input (In some examples, after detecting hallucinations in the natural language answer, the system performs a different “information retrieval technique and re-runs the information retrieval, prompt construction, and LLM-based answer generation,” where the “prompt construction” based on the different “information retrieval technique” is a modified NL based input.; Zhou, ¶ [0062]-[0063]); and process the modified NL based input, using the LLM or an alternative LLM, to generate the second LLM response (The system performs “LLM-based answer generation” using the “prompt construction” based on the different “information retrieval technique” {modified NL based input} using the LLM to generate the modified natural language answer.; Zhou, ¶ [0063]); determine whether the second LLM response contains at least one hallucination (The process can be repeated until a desired outcome is achieved, including “repeated until the modified natural language answer includes less than a threshold amount of hallucinated content”, “repeated until no hallucination is detected in the generated answer”, “or until no further information retrieval techniques result a reduced amount of the hallucinated content in the modified natural language answer,” where the procedure being repeated includes further detection of hallucinations in the modified natural language answer {determining whether the second LLM response contains at least one hallucination}.; Zhou, ¶ [0063], [0067]); and responsive to determining that the second LLM response does not contain at least one hallucination (Relying on the embodiment of “repeated until no hallucination is detected in the generated answer,” the system can repeat the hallucination detection and mitigation steps using the modified natural language answer, and each modified natural language answer thereafter, until the amount of hallucinated content is “no hallucination”; Zhou, ¶ [0063], [0067]), cause the second LLM response to be rendered at the client device (once the content is free of hallucinations, “the method 200 continues with outputting the modified natural language answer” using “at least one user interface device... to output the final modified natural language answer to a user” where “a display screen is operated to display the final modified natural language answer in graphical form”; Zhou, ¶ [0072]).

Regarding claim 19, the rejection of claim 18 is incorporated. Claim 19 is substantially the same as claim 2 and is therefore rejected under the same rationale as above.

Regarding claim 20, the rejection of claim 18 is incorporated. Claim 20 is substantially the same as claim 4 and is therefore rejected under the same rationale as above.

Regarding claim 22, the rejection of claim 18 is incorporated. Claim 22 is substantially the same as claim 6 and is therefore rejected under the same rationale as above.

Regarding claim 23, the rejection of claim 18 is incorporated. Claim 23 is substantially the same as claim 7 and is therefore rejected under the same rationale as above.

Regarding claim 24, the rejection of claim 18 is incorporated. Claim 24 is substantially the same as claim 11 and is therefore rejected under the same rationale as above.

Regarding claim 25, the rejection of claim 18 is incorporated. Claim 25 is substantially the same as claim 12 and is therefore rejected under the same rationale as above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 5 and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou as applied to claims 1 and 18 above, and further in view of Cunningham.

Regarding claim 5, the rejection of claim 1 is incorporated. Zhou discloses all of the elements of the current invention as stated above. However, Zhou fails to expressly recite further comprising: determining, based on processing the NL based input, a type of task to be performed by the LLM, wherein determining whether the first LLM response contains at least one hallucination is performed based on the type of task.
Cunningham teaches systems and methods “for mitigating hallucination in systems employing generative AI.” (Cunningham, ¶ [0002]).

Regarding claim 5, Cunningham teaches further comprising: determining, based on processing the NL based input, a type of task to be performed by the LLM (“receiving user input specifying a query or task relating to information contained in a data object” and “identify[ing] one or more parts of the data object which most closely match the query or task specified in the user input;”; Cunningham, ¶ [0025]-[0029]), wherein determining whether the first LLM response contains at least one hallucination is performed based on the type of task (“generating an input for a generative AI system comprising the user input and the identified one or more parts of the data object” and “analyzing an output produced by the generative AI system to determine if the output contains information also present in the data object,” and if not “initiating an error process”; Cunningham, ¶ [0030]-[0034]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the hallucination detection and handling systems of Zhou to incorporate the teachings of Cunningham to include further comprising: determining, based on processing the NL based input, a type of task to be performed by the LLM, wherein determining whether the first LLM response contains at least one hallucination is performed based on the type of task. In light of “the high plausibility of… hallucinations” and the ability of “these errors” to “pass unnoticed in manual reviews or standard automated checks,” the systems and methods of Cunningham provide an individualized comparison between the elements of the prompt and the generated response, such that the “serious consequences” of hallucinations “when used in critical business operations” can be avoided, as recognized by Cunningham. (Cunningham, ¶ [0008]-[0010]).

Regarding claim 21, the rejection of claim 18 is incorporated. Claim 21 is substantially the same as claim 5 and is therefore rejected under the same rationale as above.

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou as applied to claim 1 above, and further in view of Bright.

Regarding claim 10, the rejection of claim 1 is incorporated. Zhou discloses all of the elements of the current invention as stated above. However, Zhou fails to expressly recite further comprising: determining a tense of the NL based input; and determining a tense of the first LLM response, wherein determining whether the first LLM response contains at least one hallucination is based at least in part on a comparison between the tense of the NL based input and the tense of the first LLM response.

Bright teaches a “validation framework [that] seeks to ensure the accuracy, relevance, and reliability of AI-generated content.” (Bright, ¶ [0022]).
Regarding claim 10, Bright teaches further comprising: determining a tense of the NL based input; and determining a tense of the first LLM response (“At operation 406, the generated questions and corresponding answers are evaluated by an agent such as a self-evaluation agent” for grammatical correctness, which includes determining “verb tenses” for both the prompt and the response.; Bright, ¶ [0080], [0082], [0084]), wherein determining whether the first LLM response contains at least one hallucination is based at least in part on a comparison between the tense of the NL based input and the tense of the first LLM response (“the self-evaluation agent employs machine learning models” to “predict the likelihood of a question passing or failing the predetermined model-driven conditions,” thus containing at least one hallucination, using “a variety of features to make their predictions, including...grammatical correctness”, where grammatical correctness “is evaluated by scanning the text for grammatical errors such as subject-verb agreement, incorrect verb tenses, and misplaced modifiers.”; Bright, ¶ [0080], [0082], [0084]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the hallucination detection and handling systems of Zhou to incorporate the teachings of Bright to include further comprising: determining a tense of the NL based input; and determining a tense of the first LLM response, wherein determining whether the first LLM response contains at least one hallucination is based at least in part on a comparison between the tense of the NL based input and the tense of the first LLM response. The validation framework of Bright implements “multiple layers of validation on both the user input and the model output” to “address the potential for erroneous, misleading, or otherwise undesirable responses from the generative AI engine,” which provides the known benefit of making received answers more reliable in light of the recognized hallucination concerns in the art, as recognized by Bright. (Bright, ¶ [0022]-[0024]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Hawes et al. (U.S. Pat. App. Pub. No. 2024/0403290) discloses systems and methods for generating computer language translations, including updated or revised prompts in response to hallucinations.

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sean E. Serraguard, whose telephone number is (313) 446-6627. The examiner can normally be reached 07:00-17:00 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel C. Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Sean E Serraguard/
Patent Examiner, Art Unit 2657
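Note on the cited technique: the sentence-similarity detection the Office Action reads onto claims 7-9 (Zhou, ¶ [0053]-[0056]) boils down to scoring each answer sentence against each retrieved context sentence and flagging answer sentences with no close match. The Python sketch below is a minimal illustration of that idea only; the function names, the set-intersection stand-in for Zhou's mapping/optimal-path overlap length, and the 0.5 threshold are assumptions for illustration, not Zhou's disclosed implementation.

    # Minimal sketch of the sentence-similarity hallucination check the Office
    # Action attributes to Zhou. Names and the 0.5 threshold are hypothetical.
    def word_overlap_rate(sen_a: str, sen_c: str) -> float:
        """Overlap length divided by the word count of the shorter sentence.
        Zhou derives overlap length from a word mapping / optimal path; a set
        intersection is used here as a crude stand-in."""
        words_a = sen_a.lower().split()
        words_c = sen_c.lower().split()
        if not words_a or not words_c:
            return 0.0
        overlap = len(set(words_a) & set(words_c))
        return overlap / min(len(words_a), len(words_c))

    def flag_hallucinated(answer_sentences, context_sentences, threshold=0.5):
        """Flag answer sentences with no sufficiently overlapping context sentence."""
        flagged = []
        for sen_a in answer_sentences:
            best = max((word_overlap_rate(sen_a, sen_c)
                        for sen_c in context_sentences), default=0.0)
            if best < threshold:
                flagged.append(sen_a)
        return flagged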
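The claim 1 and claim 12 mappings track Zhou's overall generate-detect-retry flow (¶ [0062]-[0063], [0067]): generate an answer from the question plus retrieved context, detect hallucination, delete flagged sentences if the answer is only mildly hallucinated, and otherwise re-run retrieval and prompt construction with a better (likely more computationally expensive) technique until no hallucination is detected or no technique remains. Below is a minimal sketch of that loop under the same caveats: retrieve, llm, and the mild/heavy cutoff are hypothetical stand-ins, and flag_hallucinated is reused from the sketch above.

    # Minimal sketch of the generate/detect/retry loop the Office Action reads
    # onto claim 1. `retrieve` and `llm` are hypothetical callables; the
    # mild/heavy cutoff is an assumed parameter, not Zhou's actual criterion.
    from typing import Callable, List

    def answer_with_mitigation(question: str,
                               techniques: List[str],
                               retrieve: Callable[[str, str], List[str]],
                               llm: Callable[[str], str],
                               heavy_cutoff: int = 3) -> str:
        answer = ""
        for technique in techniques:                 # cheapest technique first
            context = retrieve(question, technique)  # information retrieval
            prompt = question + "\n\nContext:\n" + "\n".join(context)  # prompt construction
            answer = llm(prompt)                     # LLM-based answer generation
            sentences = [s for s in answer.split(". ") if s]
            flagged = flag_hallucinated(sentences, context)  # detection, as sketched above
            if not flagged:
                return answer                        # no hallucination detected
            if len(flagged) < heavy_cutoff:          # "mildly hallucinated":
                return ". ".join(s for s in sentences if s not in flagged)
            # "heavily hallucinated": retry with a better (more expensive) technique
        return answer                                # techniques exhausted

This framing also makes the examiner's claim-construction point concrete: each pass through the loop rebuilds the prompt from the same question plus different retrieved context, which is what the Office Action characterizes as "modifying the NL based input."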

Prosecution Timeline

Jan 09, 2024
Application Filed
Sep 26, 2025
Non-Final Rejection — §101, §102, §103
Dec 23, 2025
Response Filed
Mar 27, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603095
Stereo Audio Signal Delay Estimation Method and Apparatus
2y 5m to grant — Granted Apr 14, 2026
Patent 12598250
SYSTEMS AND METHODS FOR COHERENT AND TIERED VOICE ENROLLMENT
2y 5m to grant — Granted Apr 07, 2026
Patent 12597429
PACKET LOSS CONCEALMENT
2y 5m to grant — Granted Apr 07, 2026
Patent 12512093
Sensor-Processing Systems Including Neuromorphic Processing Modules and Methods Thereof
2y 5m to grant — Granted Dec 30, 2025
Patent 12505835
HOME APPLIANCE AND SERVER
2y 5m to grant — Granted Dec 23, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 99% (+33.6%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate

Based on 134 resolved cases by this examiner. Grant probability is derived from the career allow rate (92 granted / 134 resolved ≈ 68.7%, shown as 69%).
