Prosecution Insights
Last updated: April 18, 2026
Application No. 18/887,751

LARGE LANGUAGE MODEL OUTPUT ENTAILMENT

Non-Final OA · §101 · §103 · §112
Filed: Sep 17, 2024
Examiner: PEREZ-ARROYO, RAQUEL
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 58% (171 granted / 296 resolved), +2.8% vs TC avg
Interview Lift: strong, +32.3% among resolved cases with an interview
Typical Timeline: 3y 5m avg prosecution; 28 currently pending
Career History: 324 total applications across all art units

Statute-Specific Performance

§101: 21.9% (-18.1% vs TC avg)
§103: 47.6% (+7.6% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)
Comparisons use a Tech Center average estimate • Based on career data from 296 resolved cases
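As a quick sanity check, the headline figures above are consistent with the raw counts. A minimal sketch (note: the 40.0% Tech Center baseline is not stated in the report; it is implied by each quoted "vs TC avg" delta):

```python
# Recompute the examiner stats above from the raw counts shown in the report.
granted, resolved = 171, 296

career_allow_rate = 100 * granted / resolved
print(f"Career allow rate: {career_allow_rate:.0f}%")  # 58%

# Per-statute rates and their deltas vs the Tech Center average
# (the 40.0% baseline is inferred from the quoted deltas, not stated).
tc_avg = 40.0
statute_rates = {"§101": 21.9, "§103": 47.6, "§102": 8.7, "§112": 15.0}
for statute, rate in statute_rates.items():
    print(f"{statute}: {rate:.1f}% ({rate - tc_avg:+.1f}% vs TC avg)")
```

Each delta reproduces the figure in the table above, e.g., §101 at 21.9% against a 40.0% baseline gives -18.1%.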

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action has been issued in response to Applicant’s Communication of application S/N 18/887,751 filed on September 17, 2024. Claims 1 to 20 are currently pending in the application.

Priority

The instant application claims priority from provisional Application No. 63/539,033, filed on September 18, 2023. Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e), 120, 121, 365(c), or 386(c) is acknowledged.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/17/2024 and 12/31/2024 were filed before the mailing date of the first action on the merits. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 to 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation “the user” in line 2. There is insufficient antecedent basis for this limitation in the claim. The same rationale applies to claims 16 and 19, since they recite similar limitations, and to claims 2 to 15, 17, 18, and 20, since they inherit the same deficiency by virtue of their dependency.

Claim 14 further recites the limitation “the NL responsive to the query is rendered at the client device without annotations prior to the one or more annotations being rendered”. This limitation is not clear. Specifically, it appears to require rendering the NL responsive to the query without annotations when the annotations have not yet been rendered, or prior to the annotations being rendered; however, if a result is rendered prior to rendering the annotations, it is necessarily rendered without the annotations. The intent of the claim is therefore not clear, rendering the claim indefinite. For purposes of examination, the Examiner will interpret the claim as “rendering the NL responsive to the query initially without annotations, detecting actuation of a user interface element to render the annotations, and in response to detecting the actuation of the user interface element, updating the display by rendering the annotations”, in line with paragraph [0065] of the specification.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 to 7, 9, and 13 to 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 16, and 20 recite extracting, classifying, and analyzing fragments, and generating predictions.
The limitation of extracting fragments, which specifically recites “extracting a plurality of textual fragments from the generative model output”, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “by one or more processors”, nothing in the claim element precludes the step from practically being performed in the human mind. For example, but for the “by one or more processors” language, “extracting”, in the context of this claim, encompasses a user mentally, with the aid of pen and paper, reading information and separating the information into fragments.

The limitation of classifying fragments, which specifically recites “classifying a subset of the textual fragments as being suitable for textual entailment analysis”, is likewise a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. But for the “by the one or more processors” language, “classifying”, in the context of this claim, encompasses a user mentally, with the aid of pen and paper, determining whether or not each previously identified fragment is suitable for entailment analysis, or expresses a verifiable assertion.

The limitation of analyzing fragments, which specifically recites “individually performing textual entailment analysis on each textual fragment of the subset, wherein the textual entailment analysis includes, for each textual fragment of the subset: formulating a search query based on the textual fragment, retrieving at least one document that is responsive to the search query, and processing the textual fragment and the at least one document to generate one or more predictions of whether the at least one document corroborates or contradicts the textual fragment”, is similarly a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. But for the “by one or more processors” language, “analyzing”, in the context of this claim, encompasses a user mentally, with the aid of pen and paper, formulating questions based on each fragment, reading information related to each question, and comparing the fragment with the information to determine whether the fragment is true or false.

If a claim limitation, under its broadest reasonable interpretation, covers mental processes but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application.
In particular, the claims recite the additional elements: “receiving a query associated with a client device operated by the user”, “generating generative model output based on processing, using a generative model, data indicative of the query”, “using one or more entailment machine learning models”, “causing natural language (NL) responsive to the query to be rendered at the client device”, “causing one or more annotations to be rendered at the client device, wherein the one or more annotations express one or more of the predictions for one or more of the textual fragments of the subset”, one or more processors, and memory.

The limitations “receiving a query associated with a client device operated by the user”, “causing natural language (NL) responsive to the query to be rendered at the client device”, and “causing one or more annotations to be rendered at the client device, wherein the one or more annotations express one or more of the predictions for one or more of the textual fragments of the subset” amount to data-gathering steps, which are considered insignificant extra-solution activity (see MPEP 2106.05(g)).

Continuing with the analysis of the additional elements, the limitation “generating generative model output based on processing, using a generative model, data indicative of the query” is recited at a high level of generality, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, and is equivalent to merely saying “apply it”; it therefore does not integrate the judicial exception into a practical application, nor does it amount to significantly more.

As to the limitation “using one or more entailment machine learning models”, the one or more processors, and the one or more storage devices, these elements are recited at a high level of generality (i.e., as a generic processor performing generic computer functions), such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The insignificant extra-solution activity identified above, which includes the data-gathering steps, is recognized by the courts as well-understood, routine, and conventional activity when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II)(i), receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). The claims are not patent eligible.

Claim 2 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 2 recites the same abstract idea as claim 1.
The claim recites the additional limitations of “a corroboration machine learning model trained to generate first output indicative of whether a document corroborates a textual fragment; and a contradiction machine learning model trained to generate second output indicative of whether a document contradicts a textual fragment”, which are recited at a high level of generality, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, and are equivalent to merely saying “apply it”; they therefore do not integrate the judicial exception into a practical application, nor do they amount to significantly more.

Claim 3 is dependent on claim 2 and includes all the limitations of claims 1 and 2. Therefore, claim 3 recites the same abstract idea as claim 1. The claim recites the additional limitation of “one or more of the predictions is determined based on a comparison of the first and second outputs”, which can be performed in the human mind with the aid of pen and paper and therefore further elaborates on the abstract idea. The claim does not amount to significantly more. The same rationale applies to claim 15.

Claim 4 is dependent on claim 3 and includes all the limitations of claims 1 to 3. Therefore, claim 4 recites the same abstract idea as claim 1. The claim recites the additional limitation of “the annotations is rendered using one or more visual attributes that are selected based on the comparison”, which amounts to a data-presentation step, considered insignificant extra-solution activity (see MPEP 2106.05(g)) and recognized by the courts as well-understood, routine, and conventional activity when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II)(v), presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Therefore, the limitation does not amount to significantly more than the abstract idea. The same rationale applies to claims 13 and 14.

Claim 5 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 5 recites the same abstract idea as claim 1. The claim recites the additional limitation of “each textual fragment of the subset is classified as suitable for textual entailment analysis using a classifier machine learning model that is trained to classify textual fragments as capable or incapable of textual entailment analysis”, which is recited at a high level of generality, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, and is equivalent to merely saying “apply it”; it therefore does not integrate the judicial exception into a practical application, nor does it amount to significantly more. The same rationale applies to claims 6 and 7.

Claim 9 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 9 recites the same abstract idea as claim 1. The claim recites the additional limitation of “a given annotation of the annotations expressing one or more of the predictions is operable to retrieve at least a portion of the document that corroborates or contradicts the textual fragment underlying the given annotation”, which amounts to data-gathering steps, considered insignificant extra-solution activity (see MPEP 2106.05(g)) and recognized by the courts as well-understood, routine, and conventional activity when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II)(i), receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). The claim does not amount to significantly more than the abstract idea.

Additionally, the claims do not require anything other than conventional, generic computer technology for executing the abstract idea and therefore do not amount to significantly more than the abstract idea. The same rationale applies to claims 17, 18, and 20, since they recite similar limitations. Claims 1 to 7, 9, and 13 to 20 are therefore not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 to 3, 5, 7, 9, and 13 to 20 are rejected under 35 U.S.C. 103 as being unpatentable over Oyamada (U.S. Publication No. 2025/0086206), and further in view of BAX et al. (U.S. Publication No. 2025/0005266), hereinafter Bax.
As to claim 1: Oyamada discloses:

A method implemented by one or more processors, comprising: receiving a query associated with a client device operated by the user [Paragraph 0039 teaches receiving input from a user; Paragraph 0082 teaches acquiring a string indicating a question or instruction inputted by a user via the input section];

generating generative model output based on processing, using a generative model, data indicative of the query [Paragraph 0035 teaches that examples of the language model may include large language models (LLMs); Paragraph 0083 teaches generating a first result corresponding to the string with the use of a language model trained to generate a text based on the input text];

extracting a plurality of textual fragments from the generative model output [Paragraph 0084 teaches cutting out spans of the first result; Paragraph 0049 teaches cutting out one or more parts of the first result (text etc.) by cutting out multiple spans of a string included in the first result, where a span is a semantic block and where, for example, each sentence included in the string of the first result can be divided as a span, or when the search result includes multiple paragraphs, each paragraph can be a span; in other words, extracting a plurality of textual fragments from the first result of the generative model];

classifying a subset of the textual fragments as being suitable for textual entailment analysis [Paragraph 0051 teaches extracting keywords from a span by inputting the span into a model that has been trained by machine learning so as to extract characteristic keywords, where the result can be categorization using named entity classification, by estimating a type of each word constituting the query by the named entity recognition method and extracting a word of a specific type as a keyword; therefore, classifying the textual fragments as suitable for textual entailment analysis (see Para [0046] of the specification: “identify entities and/or facts in content fragments, which in turn may suggest suitability for textual entailment analysis”)];

individually performing textual entailment analysis on each textual fragment of the subset, wherein the textual entailment analysis includes, for each textual fragment of the subset [Paragraph 0071 teaches that matching degree calculation carries out a test for entailment on the result and the extracted document, to calculate the matching degree; Paragraph 0085 teaches that steps S251 to S254 are carried out for each span which has been cut out by the query generation section; Paragraph 0087 teaches calculating a matching degree between the span cut out of the first result and the extracted document]:

formulating a search query based on the textual fragment [Paragraph 0050 teaches generating queries based on each of the cut spans; Paragraph 0084 teaches generating queries based on the cut spans],

retrieving at least one document that is responsive to the search query [Paragraph 0053 teaches extracting a document related to the query from a database in a retrieval process using the generated query; Paragraph 0085 teaches extracting a document related to the query in a retrieval process using the query; Paragraph 0086 teaches extracting a document related to the query from a database in a retrieval process using the query generated by the generation section], and

processing the textual fragment and the at least one document using one or more entailment machine learning models to generate one or more predictions of whether the at least one document corroborates or contradicts the textual fragment [Paragraph 0055 teaches that the retrieved document can be a document that has a high probability of being support for the output of the language model, in other words, a prediction that the document corroborates the textual fragment; Paragraph 0071 teaches calculating a matching degree between the text and the extracted document, by carrying out a test for entailment; Paragraph 0087 teaches, for each document extracted by the extraction section, calculating a matching degree between the span cut out of the first result and the extracted document; Paragraph 0088 teaches selecting a document having a calculated matching degree that exceeds a predetermined threshold, for each of the multiple spans];

causing natural language (NL) responsive to the query to be rendered at the client device [Paragraph 0096 teaches outputting data indicating the result to the display device connected to the output control section; Fig. 6 teaches the output result in natural language]; and

causing one or more annotations to be rendered at the client device [Paragraph 0057 teaches presenting the reliability of the second result to the user; Paragraph 0096 teaches outputting the span and the information of the document extracted; Fig. 6 teaches annotations presented in association with the textual fragments or spans; Paragraph 0117 teaches that the output control section may output, in addition to the second result, the calculated reliability].

Oyamada does not appear to expressly disclose wherein the one or more annotations express one or more of the predictions for one or more of the textual fragments of the subset.

Bax discloses: wherein the one or more annotations express one or more of the predictions for one or more of the textual fragments of the subset [Paragraph 0017 teaches displaying statement labels for each of the statements, the statement labels representing trustworthiness of each of the statements; Paragraph 0031 teaches labeling both the statements and the entire generated text for trustworthiness].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Oyamada, by incorporating one or more annotations expressing one or more of the predictions for one or more of the textual fragments of the subset, as taught by Bax [Paragraphs 0017, 0031], because both applications are directed to identification or corroboration of information, including entailment or veracity; rendering annotations expressing the one or more predictions for the textual fragments improves the user’s experience by providing the information in a more comprehensible and easily identifiable manner.

As to claim 2: Oyamada discloses:

a corroboration machine learning model trained to generate first output indicative of whether a document corroborates a textual fragment [Paragraph 0068 teaches determining whether the first result is correct or not, considering the identified document; Paragraph 0077 teaches instructing to answer whether the first result is correct or not, considering the identified document, by calculating a matching degree, in other words, generating an output indicating whether a document corroborates the text]; and

a contradiction machine learning model trained to generate second output indicative of whether a document contradicts a textual fragment [Paragraph 0068 teaches determining whether the first result is correct or not, considering the identified document; Paragraph 0077 teaches instructing to answer whether the first result is correct or not, considering the identified document, by calculating a matching degree, in other words, generating an output indicating whether a document contradicts the text].
As to claim 3: The combination of Oyamada and Bax discloses: wherein one or more of the predictions is determined based on a comparison of the first and second outputs [Oyamada – Paragraph 0057 teaches determining that the reliability calculated by the calculation section exceeds a threshold; Paragraph 0069 teaches determining whether the value exceeds a threshold, therefore comparing the output with a threshold; Paragraph 0088 teaches, for each of the multiple spans, selecting a document having a calculated matching degree that exceeds a predetermined threshold, in other words, where the matching degree is indicative of a prediction of corroboration and is compared to a threshold; Bax – Paragraph 0058 teaches counting the number of trustworthy results and the number of untrustworthy results and determining which value is higher, where, if the trustworthy results are higher, the statement may be labeled as trustworthy, and vice versa].

As to claim 5: Oyamada as modified by Bax discloses: each textual fragment of the subset is classified as suitable for textual entailment analysis using a classifier machine learning model that is trained to classify textual fragments as capable or incapable of textual entailment analysis [Bax – Paragraph 0045 teaches identifying statements in the generated text, where a statement refers to an actionable assertion, and sentences that include no statements, for example, the introductory sentence, “Sure, below is a brief biography of Abraham Lincoln”, do not include any actionable assertions, in other words, classifying textual fragments as capable or incapable of textual entailment analysis; Paragraph 0046 teaches using an LLM to identify assertions present in the text; Paragraph 0051 teaches using natural language processing to identify statements; Paragraph 0076 teaches identifying whether a given sentence includes informational content (versus tangential content) using, e.g., named entity recognition (NER), keyword extraction, topic modeling, semantic parsing, transformer/ML models, etc.].

As to claim 7: Oyamada as modified by Bax discloses: the subset includes a plurality of textual fragments, and the textual entailment analysis is performed for the plurality of textual fragments in parallel [Bax – Paragraph 0052 teaches executing step 210 and step 212 in parallel for each statement, where, for example, parallel processes or threads can be used to process each statement independently and simultaneously; Paragraph 0053 teaches that step 210 includes searching for sources for the statement; Paragraph 0079 teaches that step 212 is implemented as the method in Fig. 4; Paragraph 0080 teaches classifying the trustworthiness of each of the statements based on the sources of the results, hence entailment analysis].

As to claim 9: Oyamada discloses: wherein a given annotation of the annotations expressing one or more of the predictions is operable to retrieve at least a portion of the document that corroborates or contradicts the textual fragment underlying the given annotation [Paragraph 0096 teaches outputting the span and the document extracted, where, when the user carries out an operation of selecting a document associated with a span through the input section, the output control section may acquire the selected document by using the link information thereof and display the acquired document on the display device].

As to claim 13: Oyamada as modified by Bax discloses: causing one or more interactive feedback elements to be rendered at the client device, wherein the one or more interactive feedback elements are operable to accept or reject one or more of the predictions for one or more of the textual fragments of the subset [Bax – Paragraph 0038 teaches displaying a UI that presents the generated text and the overall label, and controls or other UI elements to allow users to view the citations (and individual trustworthiness scores), where the UI allows the user to remove citations and/or statements completely from the generated text; Paragraph 0062 teaches receiving user feedback from the reading user to determine which statements to remove, and how; Paragraph 0068 teaches that the UI can also include controls to allow a user to remove statements from a given generated text, where the user can review the results for a given statement and make their own determination of whether the statement is trustworthy or not].

As to claim 14: Oyamada as modified by Bax discloses: the NL responsive to the query is rendered at the client device without annotations prior to the one or more annotations being rendered [Bax – Paragraph 0065 teaches transmitting a user interface to the user displaying the generated text; Paragraph 0066 teaches displaying a control allowing the user to toggle displaying the individual statement labels, where, in response to a selection of this control, the UI can be updated to display the corresponding statement labels for each statement].
As to claim 15: Oyamada discloses: wherein the one or more predictions of whether the at least one document corroborates or contradicts the textual fragment are generated conditionally based on a responsive content quality metric determined for the at least one document [Paragraph 0075 teaches uses a learning model that has been learned by machine learning to receive two sentences as input and to output the similarity between the two sentences, where the matching degree calculation section inputs, into the learning model, the string outputted by the generation section and the document extracted by the extraction section, where the matching degree calculation section calculates the matching degree such that the higher the degree of the similarity outputted from the learning model is, the higher the matching degree is]. Same rationale applies to claims 16 to 20, since they recite similar limitations, and are therefore similarly rejected. Claims 4 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Oyamada (U.S. Publication No. 2025/0086206), in view of BAX et al. (U.S. Publication No. 2025/0005266) hereinafter Bax, and further in view of Gardner (U.S. Publication No. 2024/0296219). As to claim 4: Oyamada discloses all the limitations as set forth in the rejections of claim 3 above, but does not appear to expressly disclose wherein one or more of the annotations is rendered using one or more visual attributes that are selected based on the comparison. Gardner discloses: wherein one or more of the annotations is rendered using one or more visual attributes that are selected based on the comparison [Paragraph 0089 teaches reliability indicators include a corresponding accuracy value, and a graphic field containing color-coded graphics each corresponding to a sub-range within a spectrum of the percentage value, therefore, selecting visual attributes, e.g., color-coded graphics, based on the comparison]. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Oyamada, by rendering one or more of the annotations using one or more visual attributes that are selected based on the comparison, as taught by Gardner [Paragraph 0089], because the applications are directed to identification or corroboration of information, including entailment or veracity; rendering annotations with corresponding visual attributes enhances legibility and enables faster identification of information as required by the user, thereby improving the user’s experience. As to claim 8: Oyamada discloses all the limitations as set forth in the rejections of claim 1 above, but does not appear to expressly disclose wherein the one or more annotations expressing one or more of the predictions comprise: a first annotation that visually highlights one of the textual fragments that is corroborated by one of the documents in a first color; and a second annotation that visually highlights another of the textual fragments that is contradicted by one of the documents in a second color that is different than the first color. 
Gardner discloses: wherein the one or more annotations expressing one or more of the predictions comprise: a first annotation that visually highlights one of the textual fragments that is corroborated by one of the documents in a first color [Paragraph 0089 teaches that reliably accurate representations may be within a percentage range of, e.g., 95% or higher, and may have a green color-coded graphic]; and a second annotation that visually highlights another of the textual fragments that is contradicted by one of the documents in a second color that is different than the first color [Paragraph 0089 teaches that unreliable, inaccurate, or misleading citations or representations may be within a percentage range of, e.g., below 65%, and may have a red color-coded graphic].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Oyamada, by rendering a first annotation that visually highlights a textual fragment corroborated by one of the documents in a first color, and a second annotation that visually highlights another textual fragment contradicted by one of the documents in a second color different from the first color, as taught by Gardner [Paragraph 0089], because the applications are directed to identification or corroboration of information, including entailment or veracity; rendering annotations with corresponding visual attributes enhances legibility and facilitates easy identification of the information required by the user, thereby improving the user's experience.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Oyamada (U.S. Publication No. 2025/0086206), in view of Bax et al. (U.S. Publication No. 2025/0005266), hereinafter Bax, and further in view of Byron et al. (U.S. Publication No. 2016/0078018), hereinafter Byron.
As to claim 6: Oyamada as modified by Bax discloses: classifying textual fragments as suitable for entailment analysis [Paragraph 0046 teaches using an LLM to identify assertions present in the text; Paragraph 0051 teaches using natural language processing to identify statements; Paragraph 0076 teaches identifying whether a given sentence includes informational content (versus tangential content) using, e.g., named entity recognition (NER), keyword extraction, topic modeling, semantic parsing, transformer/ML models, etc.].

Neither Oyamada nor Bax appears to expressly disclose classifying textual fragments as suitable for textual entailment analysis based on an entailment score predicted for the textual fragment using a regression machine learning model that is trained to predict textual entailment analysis suitability scores for textual fragments.

Byron teaches: classifying textual fragments as suitable for textual entailment analysis based on an entailment score predicted for the textual fragment using a regression machine learning model that is trained to predict textual entailment analysis suitability scores for textual fragments [Paragraph 0015 teaches that annotated statements are processed with a machine learning algorithm to generate a verifiable statement classification model, which is referenced by a verifiable statement classification system to distinguish verifiable and non-verifiable statements contained within an input corpus of text; Paragraph 0048 teaches a verifiable statement classification system implemented to distinguish between assertions within an input corpus of text that state a claim appropriate for verification, as opposed to assertions that are subjective in nature, related to private information states that cannot be verified, or not conducive to checking; Paragraph 0063 teaches that the extracted features may correspond to sentiment, verbs, verb tense, nouns, proper nouns, magnitude, velocity, importance, quantified items, and quantitative comparison operators, therefore including scores; Paragraph 0066 teaches classifying the segmented statements as verifiable or not verifiable by using the statement classification model, hence a machine learning model].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Oyamada, by classifying textual fragments as suitable for textual entailment analysis based on an entailment score predicted for the textual fragment using a regression machine learning model that is trained to predict textual entailment analysis suitability scores for textual fragments, as taught by Byron [Paragraphs 0015, 0048, 0063, 0066], because the applications are directed to identification of information, including entailment or veracity; identifying fragments suitable for entailment analysis using scores or an algorithm different from the ones used by the aforementioned references is a simple substitution of one known element for another to obtain predictable results.

Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Oyamada (U.S. Publication No. 2025/0086206), in view of Bax et al. (U.S. Publication No. 2025/0005266), hereinafter Bax, and further in view of Stanley et al. (U.S. Publication No. 2024/0086448), hereinafter Stanley.

As to claim 10: Oyamada discloses: rendering the portion of the document that corroborates or contradicts the textual fragment underlying the given annotation [Paragraph 0096 teaches that when the user carries out an operation of selecting a document associated with a span through the input section, the output control section may acquire the selected document by using the link information thereof, and display the acquired document on the display device].
Oyamada does not appear to expressly disclose causing a pop-up window to be rendered at the client device, wherein the pop-up window conveys the portion of the document.

Stanley discloses: causing a pop-up window to be rendered at the client device, wherein the pop-up window conveys the portion of the document [Paragraph 0052 teaches receiving an input corresponding to a selected element corresponding to a particular search result of the set of search results, and displaying a corresponding document, for example, displaying the document or a portion of the document over the top of the display region in a pop-up display].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Oyamada, by causing a pop-up window to be rendered at the client device, wherein the pop-up window conveys the portion of the document, as taught by Stanley [Paragraph 0052], because the applications are directed to identification of information; rendering the document in a pop-up window is a simple substitution of one known element for another to obtain predictable results.

As to claim 11: Oyamada discloses: rendering all or a portion of the document that corroborates or contradicts the textual fragment underlying the given annotation [Paragraph 0096 teaches that when the user carries out an operation of selecting a document associated with a span through the input section, the output control section may acquire the selected document by using the link information thereof, and display the acquired document on the display device].

Oyamada does not appear to expressly disclose causing a new web browser tab to be rendered at the client device, wherein the new web browser tab conveys the portion of the document.
Stanley discloses: causing a new web browser tab to be rendered at the client device, wherein the new web browser tab conveys the portion of the document [Paragraph 0052 teaches receiving an input corresponding to a selected element corresponding to a particular search result of the set of search results, and displaying a corresponding document, for example, displaying the document in a new browser tab].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Oyamada, by causing a new web browser tab to be rendered at the client device, wherein the new web browser tab conveys the portion of the document, as taught by Stanley [Paragraph 0052], because the applications are directed to identification of information; rendering the document in a new web browser tab is a simple substitution of one known element for another to obtain predictable results.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Oyamada (U.S. Publication No. 2025/0086206), in view of Bax et al. (U.S. Publication No. 2025/0005266), hereinafter Bax, in view of Stanley et al. (U.S. Publication No. 2024/0086448), hereinafter Stanley, and further in view of Religa et al. (U.S. Publication No. 2023/0082729), hereinafter Religa.

As to claim 12: Oyamada discloses: rendering the portion of the document that corroborates or contradicts the textual fragment underlying the given annotation [Paragraph 0096 teaches that when the user carries out an operation of selecting a document associated with a span through the input section, the output control section may acquire the selected document by using the link information thereof, and display the acquired document on the display device].

Oyamada does not appear to expressly disclose that the new web browser tab is automatically scrolled to a location of the document that contains the portion.
Religa discloses: the new web browser tab is automatically scrolled to a location of the document that contains the portion [Paragraph 0028 teaches receiving a selected label of the plurality of labels, identifying the document control item corresponding to the selected label, and rendering a section of the document corresponding to the selected label using the document control item].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Oyamada, by automatically scrolling to the location of the document that contains the portion, as taught by Religa [Paragraph 0028], because the applications are directed to identification of information; automatically scrolling to the location of the document that contains the relevant or pertinent portion is a simple substitution of one known element for another to obtain predictable results.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAQUEL PEREZ-ARROYO, whose telephone number is (571) 272-8969. The examiner can normally be reached Monday - Friday, 8:00am - 5:30pm, Alt. Friday, EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RAQUEL PEREZ-ARROYO/
Primary Examiner, Art Unit 2169

Prosecution Timeline

Sep 17, 2024
Application Filed
Apr 02, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566786
NATURAL LANGUAGE PROCESSING WORKFLOW FOR RESPONDING TO CLIENT QUERIES
2y 5m to grant Granted Mar 03, 2026
Patent 12566726
ENABLING EXCLUSION OF ASSETS IN IMAGE BACKUPS
2y 5m to grant Granted Mar 03, 2026
Patent 12555109
DETERMINISTIC CONCURRENCY CONTROL FOR PRIVATE BLOCKCHAINS
2y 5m to grant Granted Feb 17, 2026
Patent 12547602
LOG ENTRY REPRESENTATION OF DATABASE CATALOG
2y 5m to grant Granted Feb 10, 2026
Patent 12517948
INFORMATION PROCESSING METHOD AND DEVICE FOR SORTING MUSIC IN A PLAYLIST
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
90%
With Interview (+32.3%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 296 resolved cases by this examiner. Grant probability derived from career allow rate.
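The projections above are simple descriptive statistics over the examiner's career data. A minimal sketch of how they could be derived from the numbers shown (the variable names and the additive interview adjustment are illustrative assumptions, not this tool's documented methodology):

```python
# Hypothetical derivation of the dashboard's headline metrics from the
# examiner's career data shown above (171 grants out of 296 resolved cases).
granted = 171
resolved = 296

# Career allow rate, used here as the baseline grant probability.
allow_rate = granted / resolved

# Observed allow-rate lift for resolved cases that held an examiner interview.
interview_lift = 0.323

# Assumed additive adjustment, capped at 100%; the tool's actual model may differ.
with_interview = min(allow_rate + interview_lift, 1.0)

print(f"Base grant probability: {allow_rate:.0%}")
print(f"With interview: {with_interview:.0%}")
```

With the figures above this prints 58% and 90%, matching the dashboard's rounded values.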
