DETAILED ACTION
This is responsive to the amendment filed 07 January 2026.
Claims 1-18 remain pending and are considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 07 January 2026 have been fully considered but they are not persuasive.
Regarding the rejections under 35 USC 101, Applicant argues:
As amended, the claims are directed to a specific, technical improvement in the operation of large language model systems, namely, the generation and comparison of multiple, distinct metric sets derived from different model executions to compute a hallucination score that controls subsequent system behavior. The claimed invention does not merely evaluate content, but instead modifies how an AI system operates, including how prompts are refined, how outputs are generated, and how reliability is quantitatively assessed across iterations.
The amended claims are integrated into a practical application under Step 2A, Prong Two of the USPTO Subject Matter Eligibility Guidance. The claimed operations improve the technical functioning of Al systems by reducing hallucinations through controlled, metric-driven prompt refinement and output validation. This improvement is rooted in computer technology itself and is not a mere automation of human judgment.
The Examiner respectfully disagrees. A human may generate multiple, distinct metric sets derived from different model executions. A human may also compare the distinct metrics to generate a score. The claimed invention does not alter the operation of the AI system, but merely inputs prompts and refined prompts to generate different outputs which are used to calculate respective metrics. The calculated metrics are then compared to determine a hallucination score.
The abstract idea is not integrated into a practical application. In particular, the claim recites the additional elements – a “computing system: one or more processors; a reference database; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations” and an “artificial intelligence (AI) model” – which are recited at a high level of generality (i.e., as generic processors performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using generic computer components.
The claims also recite the additional elements “processing the prompt with an artificial intelligence (AI) model to generate a basis output” and “processing the fine-tuned prompt with the AI model to generate an enhanced output”. The claims do not impose any limits on how the AI model processes the prompt and the fine-tuned prompt. In other words, the claims recite only the idea of a solution or outcome; i.e., the claims fail to recite details of how a solution to a problem is accomplished. These limitations therefore represent extra-solution activity because they are mere nominal or tangential additions to the claims. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are therefore directed to an abstract idea.
Regarding the rejections under 35 USC 102 of claims 17-18, Applicant argues:
The claims now expressly require generating a first plurality of metrics from a first model execution, modifying the prompt based on extracted entities and prior prompt history, generating a second plurality of metrics from a second model execution, and determining a hallucination score based on a comparison between the pluralities of metrics.
These limitations are neither disclosed nor suggested by Tishibi, alone or in combination with any cited reference. Tishibi evaluates correspondence between a model output and a database result but does not disclose generating multiple metric sets from separate model executions nor determining a hallucination score based on a comparison between such metric sets.
The Examiner respectfully disagrees. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., generating a first plurality of metrics from a first model execution, modifying the prompt based on extracted entities and prior prompt history, generating a second plurality of metrics from a second model execution, and determining a hallucination score based on a comparison between the pluralities of metrics) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Therefore, all of Applicant’s arguments have been considered and they are not persuasive.
Claim Objections
Claims 1-16 are objected to because of the following informalities: in lines 1-2 of claims 1 and 17, it is believed “A computing system: one or more processors …” should be “A computing system comprising: one or more processors …”. The dependent claims are objected to for depending upon an objected-to claim without providing a remedy.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 17-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 17, in lines 14-16, recites the limitation “wherein the first plurality of metrics and the second plurality of metrics are generated from different executions of the AI model using different prompt configurations”. There is no antecedent basis for either “the first plurality of metrics” or “the second plurality of metrics” in the claim. The limitation will not be given patentable weight.
Claim 18 is rejected for depending upon a rejected claim without providing a remedy.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Further, this judicial exception is not integrated into a practical application.
In claim 1, the limitations of receiving a prompt, generating a first plurality of metrics by comparing the basis output to the reference database according to a plurality of criterion, extracting one or more entities from the prompt, fine tuning the prompt according to one or more previous prompts associated with the one or more extracted entities, generating a second plurality of metrics by comparing the enhanced output to the reference database according to the plurality of criterion, wherein the first plurality of metrics and the second plurality of metrics are generated from different executions of the AI model using different prompt configurations, and determining a hallucination score according to a comparison of the first plurality of metrics to the second plurality of metrics, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components.
That is, other than reciting a “computing system: one or more processors; a reference database; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations” and an “artificial intelligence (AI) model”, nothing in the claim precludes the steps from practically being performed in the mind. For example, a person has the ability of receiving a prompt (e.g., a human may be given a prompt printout), generating a first plurality of metrics by comparing the basis output to the reference database according to a plurality of criterion (e.g., a human may compare output text to a reference database to generate first metrics), extracting one or more entities from the prompt (e.g., a human may extract portions from the prompt), fine tuning the prompt according to one or more previous prompts associated with the one or more extracted entities (e.g., a human may adjust the prompt based on prior prompts associated with the extracted portions), generating a second plurality of metrics by comparing the enhanced output to the reference database according to the plurality of criterion, wherein the first plurality of metrics and the second plurality of metrics are generated from different executions of the AI model using different prompt configurations (e.g., a human may compare enhanced output text to a reference database to generate second metrics), and determining a hallucination score according to a comparison of the first plurality of metrics to the second plurality of metrics (e.g., a human may compare the first and second metrics to determine a hallucination score).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements – a “computing system: one or more processors; a reference database; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations” and an “artificial intelligence (AI) model” – which are recited at a high level of generality (i.e., as generic processors performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using generic computer components.
The claims also recite the additional elements “processing the prompt with an artificial intelligence (AI) model to generate a basis output” and “processing the fine-tuned prompt with the AI model to generate an enhanced output”. The claims do not impose any limits on how the AI model processes the prompt and the fine-tuned prompt. In other words, the claims recite only the idea of a solution or outcome; i.e., the claims fail to recite details of how a solution to a problem is accomplished. These limitations therefore represent extra-solution activity because they are mere nominal or tangential additions to the claims. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are therefore directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. As stated above, the claims recite the additional limitations of a “computing system: one or more processors; a reference database; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations” and an “artificial intelligence (AI) model”. However, these are recited at a high level of generality and are recited as performing generic computer functions routinely used in computer applications (see Applicant’s specification [0030], [0039] and [0065]). Generic computer components recited as performing generic computer functions that are well-understood, routine and conventional activities amount to no more than implementing the abstract idea with a computerized system.
The claims also recite the additional elements “processing the prompt with an artificial intelligence (AI) model to generate a basis output” and “processing the fine-tuned prompt with the AI model to generate an enhanced output”. The claims do not impose any limits on how the AI model processes the prompt and the fine-tuned prompt. In other words, the claims recite only the idea of a solution or outcome; i.e., the claims fail to recite details of how a solution to a problem is accomplished. These limitations represent the extra-solution activity of processing a prompt using an AI model, which is a well-understood, routine and conventional activity. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
The dependent claims, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea.
The dependent claims recite:
wherein the operations comprise: rewriting the initial query to generate a new query; processing the new query with the AI model to generate a new output, generating a plurality of new metrics by comparing the new output to the reference database according to the plurality of criterion, determining a new hallucination score according to a comparison of the first plurality of metrics to the new plurality of metrics;
wherein the new query is based on the hallucination score;
wherein the operations comprise: iteratively performing the rewriting, the processing, the generating, and the determining until the hallucination score or the new hallucination score is acceptable;
wherein the hallucination score or the new hallucination score is acceptable as compared to a threshold;
wherein the operations comprise: providing a user interface configured to receive a new query according to the hallucination score;
wherein the operations comprise: processing the new query with the AI model to generate a new output, generating a plurality of new metrics by comparing the new output to the reference database according to the plurality of criterion, determining a new hallucination score according to an evaluation of the plurality of new metrics;
wherein the operations comprise: iteratively performing the rewriting, the processing, the generating, and the determining until the hallucination score or the new hallucination score is acceptable;
wherein the hallucination score or the new hallucination score is acceptable as compared to a threshold;
wherein the operations comprise: providing a user interface configured to allow a user to change the output according to the hallucination score;
wherein the operations comprise: generating a new query according to the hallucination score and the changed output;
wherein the operations comprise: processing the new query with the AI model to generate a new output, generating a plurality of new metrics by comparing the new output to the reference database according to the plurality of criterion, determining a new hallucination score according to an evaluation of the plurality of new metrics;
wherein the operations comprise: iteratively performing the rewriting, the processing, the generating, and the determining until the hallucination score or the new hallucination score is acceptable;
wherein the hallucination score or the new hallucination score is acceptable as compared to a threshold;
wherein the AI model is a large language model (LLM); and
wherein the reference database comprises a real-time content feed.
The additional recited limitations further narrow the steps of the independent claims without, however, providing “a practical application of” or “significantly more than” the underlying “Mental Processes” abstract idea. Therefore, the dependent claims are also not patent eligible.
Moreover, see Recentive Analytics, Inc. v. Fox Corp. (Fed. Cir. April 18, 2025): “Machine learning is a burgeoning and increasingly important field and may lead to patent-eligible improvements in technology. Today, we hold only that patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.”
In claim 17, the limitations of receiving a prompt, extracting one or more entities from the prompt, cross-validating the second output according to one or more previous output associated with the one or more extracted entities, wherein the first plurality of metrics and the second plurality of metrics are generated from different executions of the AI model using different prompt configurations, and determining a hallucination score according to the cross validation, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components.
That is, other than reciting a “computing system: one or more processors; a reference database; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations”, an “artificial intelligence (AI) model” and a “large language model (LLM)”, nothing in the claim precludes the steps from practically being performed in the mind. For example, a person has the ability of receiving a prompt (e.g., a human may be given a prompt printout), extracting one or more entities from the prompt (e.g., a human may extract portions from the prompt), cross-validating the second output according to one or more previous output associated with the one or more extracted entities, wherein the first plurality of metrics and the second plurality of metrics are generated from different executions of the AI model using different prompt configurations (e.g., a human may compare a second output text to other output related to the extracted portions), and determining a hallucination score according to the cross validation (e.g., a human may determine a hallucination score based on the comparison).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements – a “computing system: one or more processors; a reference database; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations”, an “artificial intelligence (AI) model” and a “large language model (LLM)” – which are recited at a high level of generality (i.e., as generic processors performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using generic computer components.
The claims also recite the additional elements “processing the prompt with an artificial intelligence (AI) model to generate a first output” and “inputting the first output into a system which contains a large language model (LLM) to generate a second output”. The claims do not impose any limits on how the AI model processes the prompt and how the system containing the LLM generates the second output. In other words, the claims recite only the idea of a solution or outcome; i.e., the claims fail to recite details of how a solution to a problem is accomplished. These limitations therefore represent extra-solution activity because they are mere nominal or tangential additions to the claims. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are therefore directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements when considered both individually and as an ordered combination do not amount to significantly more than the abstract idea. As stated above, the claims recite the additional limitations of a “computing system: one or more processors; a reference database; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations”, an “artificial intelligence (AI) model” and a “large language model (LLM)”. However, these are recited at a high level of generality and are recited as performing generic computer functions routinely used in computer applications (see Applicant’s specification [0030], [0039] and [0065]). Generic computer components recited as performing generic computer functions that are well-understood, routine and conventional activities amount to no more than implementing the abstract idea with a computerized system.
The claims also recite the additional elements “processing the prompt with an artificial intelligence (AI) model to generate a first output” and “inputting the first output into a system which contains a large language model (LLM) to generate a second output”. The claims do not impose any limits on how the AI model processes the prompt and how the system containing the LLM generates the second output. In other words, the claims recite only the idea of a solution or outcome; i.e., the claims fail to recite details of how a solution to a problem is accomplished. These limitations represent the extra-solution activities of processing a prompt using an AI model and generating an output using a system containing an LLM, which are well-understood, routine and conventional activities. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
The dependent claim, when analyzed as a whole, is held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claim is not directed to an abstract idea.
The dependent claim recites:
wherein the operations comprise: generating a plurality of metrics by comparing the second output to a reference database according to a plurality of criterion, and determining a new hallucination score according to an evaluation of the plurality of metrics.
The additional recited limitations further narrow the steps of the independent claim without, however, providing “a practical application of” or “significantly more than” the underlying “Mental Processes” abstract idea. Therefore, the dependent claim is also not patent eligible.
Moreover, see Recentive Analytics, Inc. v. Fox Corp. (Fed. Cir. April 18, 2025): “Machine learning is a burgeoning and increasingly important field and may lead to patent-eligible improvements in technology. Today, we hold only that patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.”
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 17-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Tishibi et al. (US 2024/0427810).
Claim 17:
Tishibi discloses a computing system: one or more processors; a first reference database; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations ([0021], see also [0127]), the operations comprising:
receiving a prompt (“an input prompt 210 is received by an LLM system”, [0052]),
processing the prompt with an artificial intelligence (AI) model to generate a first output (“the LLM 240 is configured to generate an output answer 245”, [0061]),
inputting the first output into a system which contains a large language model (LLM) to generate a second output (a result) (“the output answer 245 into tokens 260-1 through 260-M”, [0062], “the tokens 260 are provided to a second LLM 270”, [0064], “the second LLM 270 is configured to generate an output query 275”, [0066] and “executing the output query 275 on the database 280 configures the database 280 to generate a result which is provided to a comparator 285”, [0067]),
extracting one or more entities from the prompt (“the input prompt 210 is processed by a tokenization layer 220. In some embodiments, the tokenization layer is implemented as a software component which is configured to receive an input from the input prompt 210 and generate a tokenized input, such as token 230-1 through 230-N”, [0054]),
cross-validating the second output according to one or more previous output associated with the one or more extracted entities, wherein the first plurality of metrics and the second plurality of metrics are generated from different executions of the AI model using different prompt configurations (“the comparator 285 is configured to receive an output answer 245 and a database result. In certain embodiments, the comparator 285 is configured to generate a comparison between the output answer 245 and the database result”, [0068], note the wherein clause is not given patentable weight),
determining a hallucination score according to the cross validation (“the comparison is utilized to determine if the output answer 245 matches the database result. This allows to determine if the output answer 245 is a usable result, or if the output answer 245 is a hallucination (i.e., a false result) generated by the LLM 240”, [0069]).
Claim 18:
Tishibi discloses the computing system of claim 17, wherein the operations comprise: generating a plurality of metrics by comparing the second output to a reference database according to a plurality of criterion ([0068], see also [0070]), and determining a new hallucination score (similar within a predefined threshold) according to an evaluation of the plurality of metrics (“the output answer 245 and the database result are determined to match if a value of the output answer 245 and a value of the database result are similar within a predefined threshold”, [0070]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL G NEWAY whose telephone number is (571)270-1058. The examiner can normally be reached Monday-Friday 9:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMUEL G NEWAY/ Primary Examiner, Art Unit 2657