Prosecution Insights
Last updated: April 19, 2026
Application No. 19/231,808

SYSTEMS AND METHODS FOR GENERATING AGGREGATED REVIEWS

Non-Final OA (§101, §103)
Filed: Jun 09, 2025
Examiner: GODBOLD, DAVID GARRISON
Art Unit: 3628
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Walmart Apollo LLC
OA Round: 1 (Non-Final)
Grant Probability: 22% (At Risk)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 22% (18 granted of 82 resolved; -30.0% vs TC avg)
Interview Lift: +33.3% among resolved cases with interview (strong)
Avg Prosecution: 2y 1m (fast prosecutor); 34 applications currently pending
Total Applications: 116 across all art units (career history)
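
The headline figures above are simple ratios of the reported career counts. A minimal Python sketch follows; the variable names are ours, and the reading of "interview lift" as the difference in allowance rate between interviewed and non-interviewed resolved cases is our assumption, not something the report states.

```python
# Reproducing the headline examiner metrics from the counts reported above.
# Only the input figures come from the report; names and the lift definition
# are assumptions for illustration.

granted, resolved = 18, 82               # "18 granted / 82 resolved"
allow_rate = granted / resolved           # ~0.22 -> "22% Career Allow Rate"

tc_delta = -0.30                          # "-30.0% vs TC avg" (percentage points, assumed)
implied_tc_avg = allow_rate - tc_delta    # ~0.52 implied Tech Center average

def interview_lift(with_rate: float, without_rate: float) -> float:
    """Assumed definition: allowance rate with interview minus without."""
    return with_rate - without_rate

print(f"allow rate: {allow_rate:.1%}")
print(f"implied TC average: {implied_tc_avg:.1%}")
```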

Statute-Specific Performance

§101: 46.2% (+6.2% vs TC avg)
§103: 29.0% (-11.0% vs TC avg)
§102: 6.1% (-33.9% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)
Tech Center average is an estimate. Based on career data from 82 resolved cases.
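
Assuming the "vs TC avg" figures are simple percentage-point differences (our reading, not stated in the report), the implied Tech Center baseline can be back-calculated for each statute; a short sketch with hypothetical variable names:

```python
# Back-calculating the implied Tech Center baseline from the table above,
# assuming "vs TC avg" is a percentage-point difference.
examiner_rate = {"101": 46.2, "103": 29.0, "102": 6.1, "112": 17.7}
delta_vs_tc   = {"101": 6.2, "103": -11.0, "102": -33.9, "112": -22.3}

for statute, rate in examiner_rate.items():
    implied_tc = rate - delta_vs_tc[statute]   # comes out to 40.0 for each statute
    print(f"§{statute}: examiner {rate:.1f}%, implied TC avg {implied_tc:.1f}%")
```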

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims
Claims 1-20 are currently pending and have been examined in this application. This communication is the first action on the merits.

Information Disclosure Statement
The information disclosure statement (IDS) submitted on June 9, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1
Claims 1-9 are directed to a system (i.e., a machine); claims 10-15 are directed to a method (i.e., a process); claims 16-20 are directed to a non-transitory computer-readable storage medium (i.e., a machine). Therefore, claims 1-20 all fall within one of the four statutory categories of invention.

Step 2A, Prong One
Independent claims 1, 10, and 16 substantially recite receiving review data associated with an item object, wherein the review data includes one or more review features including one or more target keywords; selecting a subset of the review data based on a first selection criteria configured to select a first subset of the review data; generating a target keyword score for each respective target keyword of the one or more target keywords based on a polarity of the target keywords; selecting a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores; generating one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords; generating an item object summary based on the one or more target keyword summaries using a summarization model; iteratively modifying the item object summary based on at least one or more characteristics of the item object summary, wherein: including evaluating the at least one or more characteristics of the item object summary and adjust the item object summary in accordance with a determination that the at least one or more characteristics of the item object summary does not meet a first criteria; and generating a set of instructions to cause the item object summary to be displayed. The limitations stated above are processes/functions that, under broadest reasonable interpretation, cover “certain methods of organizing human activity” (commercial interactions) of aggregating, summarizing, and providing reviews for an item. Therefore, the claim recites an abstract idea.

Step 2A, Prong Two
The judicial exception is not integrated into a practical application. Claims 1, 10, and 16 as a whole amount to: (i) merely invoking generic components as a tool to perform the abstract idea or “apply it” (or an equivalent), and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use.
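
For context, the limitations paraphrased in Step 2A, Prong One describe a multi-stage review-summarization pipeline. The sketch below is a hypothetical Python rendering of that sequence: all names are ours, the scoring and selection rules are placeholder heuristics, and the per-keyword, item-level, and self-critique steps are stubs standing in for the claimed summarization model and LLM-based self-critique module, not the Applicant's implementation.

```python
# Hypothetical sketch of the claimed pipeline: keyword scoring by polarity,
# selection within a predetermined score range, per-keyword summaries, an
# item object summary, and an iterative self-critique adjustment loop.
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    keywords: list[str]      # "target keywords" found in the review
    polarity: float          # -1.0 (negative) .. +1.0 (positive)

def keyword_scores(reviews: list[Review]) -> dict[str, float]:
    """Score each target keyword from the polarity of reviews mentioning it."""
    totals: dict[str, list[float]] = {}
    for r in reviews:
        for kw in r.keywords:
            totals.setdefault(kw, []).append(r.polarity)
    return {kw: sum(v) / len(v) for kw, v in totals.items()}

def select_keywords(scores: dict[str, float], lo: float, hi: float) -> list[str]:
    """Keep keywords whose score falls in a predetermined range [lo, hi]."""
    return [kw for kw, s in scores.items() if lo <= s <= hi]

def summarize_keyword(kw: str, reviews: list[Review]) -> str:
    """Stub for a per-keyword summary (a summarization model in practice)."""
    mentions = [r.text for r in reviews if kw in r.keywords]
    return f"{kw}: mentioned in {len(mentions)} reviews"

def summarize_item(keyword_summaries: list[str]) -> str:
    """Stub for the summarization model producing the item object summary."""
    return " | ".join(keyword_summaries)

def self_critique(summary: str, max_chars: int = 200, max_rounds: int = 3) -> str:
    """Iteratively adjust the summary until a first criteria is met (a
    placeholder length check standing in for an LLM critique)."""
    for _ in range(max_rounds):
        if len(summary) <= max_chars:                 # characteristic meets criteria
            break
        summary = summary[:max_chars].rsplit("|", 1)[0].strip()  # adjust and retry
    return summary

reviews = [
    Review("Great battery life", ["battery"], 0.9),
    Review("Battery drains fast", ["battery"], -0.7),
    Review("Crisp screen", ["screen"], 0.8),
]
scores = keyword_scores(reviews)
selected = select_keywords(scores, lo=-1.0, hi=1.0)
item_summary = self_critique(
    summarize_item([summarize_keyword(k, reviews) for k in selected])
)
print(item_summary)   # display instructions would be generated from this summary
```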
The claim recites the additional elements of: (i) a non-transitory memory having instructions stored thereon (claims 1, 16), (ii) a processor configured to read the instructions/one or more processors of a computing device (claims 1, 16), (iii) a database (claims 1, 10, 16), (iv) a self-critique module (claims 1, 10, 16), (v) one or more large language models (claims 1, 10, 16), and (vi) a user interface (claims 1, 10, 16). The additional elements of (i) a non-transitory memory having instructions stored thereon, (ii) a processor configured to read the instructions/one or more processors of a computing device, (iii) a database, (iv) a self-critique module, and (vi) a user interface are recited at a high level of generality (see [0042] of the Applicants PG Specification discussing the non-transitory memory having instructions stored thereon, [0040] discussing the processor configured to read the instructions/one or more processors of a computing device, [0034] discussing the database, [0054/0063] discussing the self-critique module, and [0050] discussing the user interface) such that, when viewed as whole/ordered combination, it amounts to no more than mere instruction to apply the judicial exception using generic computer components or “apply it” (See MPEP 2106.05(f)). The additional element of (v) one or more large language models are recited at a high level of generality (See [0065] of the Applicant’s PG Publication discussing the one or more large language models) such that when viewed as whole/ordered combination, do no more than generally link the use of the judicial exception to a particular technological environment or field of use (i.e. Large Language Modeling) (See MPEP 2106.05(h)). Accordingly, these additional elements, when viewed as a whole/ordered combination [See Figures 1, 2, and 4 showing all the (i) a non-transitory memory having instructions stored thereon, (ii) a processor configured to read the instructions/one or more processors of a computing device, (iii) a database, (iv) a self-critique module, (v) one or more large language models, and (vi) a user interface in combination], do not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Thus, the claim is directed to an abstract idea. Step 2B As discussed above with respect to Step 2A Prong Two, the additional elements amount to no more than: (i) “apply it” (or an equivalent), and (ii) generally link the use of a judicial exception to a particular technological environment or field of use, and are not a practical application of the abstract idea. The same analysis applies here in Step 2B, i.e., (i) merely invoking the generic components as a tool to perform the abstract idea or “apply it” (See MPEP 2106.05(f)); and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)), does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, the claims 1, 10, and 16 are ineligible. Dependent Claims 2-4, 6, 7, 11-13, 15, and 17-19 merely narrow the previously recited abstract idea limitations. 
For reasons described above with respect to claims 1, 10, and 16 these judicial exceptions are not meaningfully integrated into a practical application or significantly more than the abstract idea. Thus, claims 2-4, 6, 7, 11-13, 15, and 17-19 are also ineligible. Step 2A, Prong Two Dependent Claims 5, 14, and 20 further narrow the previously recited abstract idea limitations. Claims 5, 14, and 20 also recite the additional elements of a large language model, which is recited at a high-level of generality (See [0059/0065] of the Applicants PG Specification disclosing the large language model) such that when viewed as whole/ordered combination, the additional elements do no more than generally link the use of the judicial exception to a particular technological environment or field of use (i.e. LLMs) (See MPEP 2106.05(h)). Accordingly, the additional elements, when viewed individually and as a whole/ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claims are directed to an abstract idea. Step 2B As discussed above with respect to Step 2A Prong Two, the additional element amounts to no more than: generally linking the use of a judicial exception to a particular technological environment or field of use, and is not a practical application of the abstract idea. The same analysis applies here in Step 2B, i.e., (i) generally linking the use of a judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)), does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Therefore, the additional element of a large language model does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claims 5, 14, and 20 are ineligible. Step 2A, Prong Two Dependent Claim 8 further narrows the previously recited abstract idea limitations. Claim 8 also recites the additional elements of an untrained self-critique module, and a database. The additional element of an untrained self-critique module is recited at a high-level of generality (See [0090] of the Applicants PG Specification disclosing the untrained self-critique module) such that when viewed as whole/ordered combination, the additional element does no more than generally link the use of the judicial exception to a particular technological environment or field of use (i.e. machine learning modeling) (See MPEP 2106.05(h)). The additional element of a database is recited at a high level of generality (See [0035] of the Applicants PG Specification disclosing the database) such that, when viewed as whole/ordered combination, it amounts to no more than mere instruction to apply the judicial exception using generic computer components or “apply it” (See MPEP 2106.05(f)). Accordingly, the additional elements, when viewed individually and as a whole/ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claims are directed to an abstract idea. 
Step 2B As discussed above with respect to Step 2A Prong Two, the additional element amounts to no more than: (i) “apply it” (or an equivalent), and (ii) generally link the use of a judicial exception to a particular technological environment or field of use, and are not a practical application of the abstract idea. The same analysis applies here in Step 2B, i.e., (i) merely invoking the generic components as a tool to perform the abstract idea or “apply it” (See MPEP 2106.05(f)); and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)), does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Therefore, the additional elements of an untrained self-critique module, and a database do not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claim 8 is ineligible. Step 2A, Prong Two Dependent Claim 9 further narrows the previously recited abstract idea limitations. Claim 9 also recites the additional elements of a landing page which is recited at a high-level of generality (See [0031] of the Applicants PG Specification disclosing the landing page) such that, when viewed as whole/ordered combination, it amounts to no more than mere instruction to apply the judicial exception using generic computer components or “apply it” (See MPEP 2106.05(f)). Accordingly, the additional elements, when viewed individually and as a whole/ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claims are directed to an abstract idea. Step 2B As discussed above with respect to Step 2A Prong Two, the additional element amounts to no more than: (i) “apply it” (or an equivalent), and are not a practical application of the abstract idea. The same analysis applies here in Step 2B, i.e., (i) merely invoking the generic components as a tool to perform the abstract idea or “apply it” (See MPEP 2106.05(f)), does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Therefore, the additional element of a landing page does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claim 9 is ineligible. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 
103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-4, 7, 9-13, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Dong (US 20140351079) (hereafter Dong) in view of Norton (US 20220156464) (hereafter Norton) and further in view of Rahman (US 20240330661) (hereafter Rahman). In regards to claim 1, Dong discloses a system, comprising: a non-transitory memory having instructions stored thereon; and a processor configured to read the instructions to: receive review data associated with an item object in a database, wherein the review data includes one or more review features including one or more target keywords; (Paras. 6, 179, 183) (“accessing one or more commodity reviews; extracting one or more feature indicators (i.e. one or more target keywords) from the one or more commodity reviews, each feature indicator being associated with a feature of a commodity; extracting one or more sentiment indicators (i.e. one or more target keywords) from the one or more commodity reviews” “Any suitable computing device can be used to implement the consumer computing device or the components of the provider's computer system. … computing devices may include control circuitry, input/output ("I/O") circuitry, and a processor that communicates with a number of peripheral subsystems via a bus subsystem. These peripheral subsystems may include a storage subsystem, a memory subsystem, a user-interface subsystem, a user-interface output subsystem, and a network-interface subsystem. By processing instructions stored on one or more storage devices, the processor may perform the steps of the present method” “consumer-generated, opinionated product reviews are stored in a database on a provider's server.”) Dong discloses select a subset of the review data based on a first selection criteria configured to select a first subset of the review data; (Para. 58-59) (“The invention mines product experiences to implement a practical technique for turning user-generated product reviews into rich, feature-based, experiential product cases. The features of these cases relate to topics that are discussed by reviewers and their aggregate opinions. The 3-step approach is summarised in FIG. 5 for a given product, P (i.e. a first selection criteria configured to select a first subset of the review data): (1) use shallow NLP techniques to extract a set of candidate features from Reviews(P), the reviews of P (i.e. 
select a subset of the review data based on a first selection criteria configured to select a first subset of the review data);”) Dong discloses generate a target keyword score for each respective target keyword of the one or more target keywords based on a polarity of the target keywords; (Para. 60, 66) (“(2) each feature, Fi, (i.e. target keyword) is associated with a sentiment label (positive, negative, or neutral) (i.e. a polarity of the target keywords) based on the opinion expressed in review, Rk, for P” “The invention calculates how frequently each feature co-occurs with a sentiment word (i.e. a polarity of the target keyword) in the same sentence (i.e. generate a target keyword score for each respective target keyword of the one or more target keywords based on a polarity of the target keywords), and retains a single-noun only if its frequency is greater than some fixed threshold”) Dong discloses select a subset of the target keywords each having a respective target keyword score based on the target keyword scores; (Para. 58-59, 63-66, 91, 106, 141) (“The invention mines product experiences to implement a practical technique for turning user-generated product reviews into rich, feature-based, experiential product cases. The features of these cases relate to topics that are discussed by reviewers and their aggregate opinions. The 3-step approach is summarised in FIG. 5 for a given product, P: (1) use shallow NLP techniques to extract a set of candidate features from Reviews(P), the reviews of P;” “Considering two basic types of features--bi-gram features and single-noun features--and the invention uses a combination of shallow NLP and statistical methods to mine them. … For single-noun features the invention also extracts a candidate set (i.e. select a subset of the target keywords), this time nouns, from the reviews but validates them by eliminating nouns that are rarely associated with sentiment words. The reason is that such nouns are unlikely to refer to product features.” “Again, based on the representation of products as sets of mined features and associated sentiment, it is possible to use this information to create product summaries. For example, the advantages and disadvantages of a particular product can be highlighted (i.e. those features which are associated with mainly positive and negative sentiment in reviews (i.e. based on the target keyword scores)).” “The sentiment of a particular feature is indicated by colour, from red (strong negative sentiment) to green (strong positive sentiment)” “where Pos(Fi; P), Neg(Fi; P), and Neut(Fi; P) denote the number of times that feature Fi has positive, negative and neutral sentiment in the reviews for product P, respectively). Thus, each product can be represented as a product case, Case(P) (i.e. the target keyword scores), which aggregates product features, popularity and sentiment data as in Equation 3 below.”) Dong discloses generate one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords; (Para. 70, 89) (“The invention uses the feature-based product representations (i.e. generate one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords) to implement a content-based approach” “Given the representation of products as sets of features and associated sentiment (i.e. 
generate one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords) which are mined from product reviews, it is possible to leverage this information to provide explanations to users as to why particular products are being recommended. For example, an explanation for the top-recommended product may be: ‘this product is recommended as it has superior lens quality and battery life etc. (i.e. generate one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords)”) Dong discloses generate an item object summary based on the one or more target keyword summaries using a summarization model; (Para. 88, 91) (“The invention provides a system (i.e. a summarization model) that generates a ‘meta-review’ which corrals and parses multiple customer/user product reviews to create a unified review (i.e. generate an item object summary based on the one or more target keyword summaries using a summarization model). This meta-review gives the reader the consensus opinion from all the collated reviews about various product features discussed in reviews. The invention also explains the various choices that were made when the meta-review was being created.” “based on the representation of products as sets of mined features and associated sentiment, it is possible to use this information to create product summaries (i.e. generate an item object summary based on the one or more target keyword summaries).”) Dong discloses generate a set of instructions to cause the item object summary to be displayed on a user interface. (Para. 8, 96) (“The invention provides a recommendation to the consumer for the best or most suitable commodity appropriate to the needs of the consumer. The commodity review may be accessible to the consumer by means of a website interface. The commodity review stores the previous experiences of other consumers relating to the same or a similar commodity. By extracting the sentiment indicators and evaluating these sentiment indicators, this arrangement leverages the previous experiences of other consumers to provide a more nuanced and sophisticated recommendation to the consumer.” “After the commodity recommendation has been formed in step 7, the computer system delivers 8 the commodity recommendation for the second commodity C to the user using the website interface.”) As discussed above, Dong discloses select a subset of the target keywords each having a respective target keyword score based on the target keyword scores. Dong does not explicitly disclose, however Norton, in the same field of endeavor, discloses select a subset of the target keywords each having a respective target keyword score of Dong based on a predetermined range of the target keyword scores; (Para. 103, 108, 187-189) (“Sentiment-score thresholds may be relative or fixed with respect to sentiment scores. For example, a positive-sentiment-score threshold (i.e. select a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores) may include a certain number or percentage of highest sentiment scores (e.g., extracted sentences corresponding to the highest 150 sentiment scores). A negative-sentiment-score threshold (i.e. 
select a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores) may include a certain number or percentage of lowest sentiment scores (e.g., extracted sentences corresponding to the lowest 150 sentiment scores). … a positive-sentiment-score threshold, a negative-sentiment-score threshold, … may include sentiment scores above a threshold sentiment score (e.g., above +5), below a threshold sentiment score (e.g., above −5) (i.e. select a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores)” “determining the sentiment indicator for each extracted sentence from the set of extracted sentences by determining, for each extracted sentence, a sentiment score indicating a positive sentiment corresponding to the topic or a negative sentiment corresponding to the topic (i.e. a predetermined range of the target keyword scores). … further include providing the response summary for the set of textual responses (i.e. select a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores) … the acts 800 further include generating the response summary for the set of textual responses” That is, in order to create the textual responses, the invention must select a subset of topics (i.e. keywords) based on their scores (i.e. target keyword score) to be included in the modeling of the textual response.) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong with the machine learning based summarizations of Norton in order to improve the “efficiency and flexibility” of the system’s textual responses (Norton – Para. 41) Dong in view of Norton does not explicitly disclose, however Rahman, in the same field of endeavor, discloses iteratively modify the item object summary of Dong using a self-critique module based on at least one or more characteristics of the item object summary, wherein: the self-critique module includes one or more large language models configured to evaluate the at least one or more characteristics of the item object summary of Dong and adjust the item object summary of Dong in accordance with a determination that the at least one or more characteristics of the item object summary does not meet a first criteria; and (Para. 24, 47-48, 54-57) (“natural language application 120 can identify relevant context information from the source documents, prompt the language model to generate a response using the context information (i.e. the item object summary of Dong), and progressively correct the response” “At step 408, natural language application 120 prompts the language model to generate a response (i.e. the item object summary of Dong) using the prompt generated at step 406. At step 410, natural language application (i.e. self critique module) 120 detects and corrects an incorrect and/or incomplete response (i.e. a first criteria). In some embodiments, natural language application 120 detects conflicts (hallucinations) between one or more portions of the response and one or more portions of the context, and natural language application 120 corrects the hallucinations in the response, as discussed in greater detail below in conjunction with FIGS. 5-6. 
In some embodiments, natural language application 120 also checks whether each portion of the context relevant to the user request is within the response to identify any portions of context that are not within the response, indicating that the response is incomplete. In such cases, natural application 120 can prompt the language model to add the portion(s) of context that are relevant to the user request to the response.” “At step 510, natural language application 120 uses a textual entailment model (i.e. a self-critique module based on at least one or more characteristics of the item object summary) to compute an entailment score for each portion of the response and each augmented portion of the response based on the corresponding relevant portions of the context determined at step 506 (i.e. evaluate the at least one or more characteristics of the item object summary and adjust the item object summary in accordance with a determination that the at least one or more characteristics of the item object summary does not meet a first criteria). if a negated portion in the response is associated with a higher entailment score than the corresponding portion of the response is, then the response of the language model 204 conflicts with the portion of the context and is potentially hallucinatory, and vice versa if the corresponding portion of the response is associated with a higher entailment score than the negated portion of the response, which can be reflected in the entailment scores that are computed for the portion of the response and the negated portion of the response. At step 512, natural language application 120 computes a hallucination score for the response using the computed entailment scores for each portion of the response. … At step 514, natural language application 120 determines if the hallucination score is above a threshold. At step 514, natural language application 120 determines if the hallucination score is above a threshold.). … if the hallucination score is above the threshold, then method 400 continues to step 516, where natural language application 120 prompts the language model (i.e. the self-critique module includes one or more large language models) to modify the potentially hallucinatory portions of the response to better match the corresponding relevant portions of context, thereby generating a new response. Method 400 then returns to step 502 (i.e. iteratively modify the item object summary using a self-critique module)”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong in view of Norton with the LLM corrections of Rahman in order to improve the accuracy and completeness of model outputs (Rahman – Para. 8) In regards to claim 2, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 1. Dong discloses wherein the processor is configured to read the instructions to generate the target keyword score based a number of reviews that include each respective target keyword and a sentiment score of the respective target keyword. (Para. 67, 74) (“For each feature indicator extracted from the plurality of commodity reviews, the computer system determines 5 the popularity of this particular feature indicator. The popularity is determined by determining the number of commodity reviews from which this particular feature indicator is extracted (i.e. a number of reviews that include each respective target keyword). 
This number of commodity reviews from which this particular feature indicator is extracted is represented as a popularity indicator.” “For these features the invention calculates an overall sentiment score as shown in Equation 1 and their popularity as per Equation 2. Then each product case, Case(P), can be represented as shown in Equation 3. Note, Pos(Fi; P), Neg(Fi; P), and Neut(Fi; P) denote the number of times that feature Fi has positive, negative and neutral sentiment in the reviews for product P”) In regards to claim 3, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 2. Dong discloses wherein the processor is configured to read the instructions to generate the sentiment score of the respective target keyword based on a first number of words of one or more reviews that surround the respective target keyword with a positive connotation, and a second number of words of the one or more reviews that surround the respective target keyword with a negative connotation. (Para. 14) (“Preferably evaluating the one or more sentiment indicators comprises determining the number of positive sentiment indicators associated with a first feature indicator. Ideally evaluating the one or more sentiment indicators comprises determining the number of negative sentiment indicators associated with the first feature indicator.”) In regards to claim 4, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 1. Dong discloses wherein the target keyword scores comprises the highest target keyword scores and the lowest target keyword scores. (Para. 66, 91, 106) (“The invention calculates how frequently each feature co-occurs with a sentiment word in the same sentence, and retains a single-noun only if its frequency is greater than some fixed threshold (in this case 70%).” “Again, based on the representation of products as sets of mined features and associated sentiment, it is possible to use this information to create product summaries. For example, the advantages and disadvantages of a particular product can be highlighted (i.e. those features which are associated with mainly positive (i.e. highest target keyword scores) and negative sentiment in reviews (i.e. lowest target keyword scores)).” “The sentiment of a particular feature is indicated by colour, from red (strong negative sentiment) (i.e. lowest target keyword scores) to green (strong positive sentiment) (i.e. highest target keyword scores) … Both the feature columns and product rows are sorted by average sentiment.”) Dong does not explicitly disclose, however Norton, in the same field of endeavor, discloses wherein the predetermined range of the target keyword scores comprises a first number of the highest target keyword scores of Dong and a second number of the lowest target keyword scores of Dong. (Para. 102) (“Sentiment-score thresholds may be relative or fixed with respect to sentiment scores. For example, a positive-sentiment-score threshold may include a certain number or percentage of highest sentiment scores (i.e. a first number of the highest target keyword scores) (e.g., extracted sentences corresponding to the highest 150 sentiment scores). A negative-sentiment-score threshold may include a certain number or percentage of lowest sentiment scores (i.e. 
a second number of the lowest target keyword scores) (e.g., extracted sentences corresponding to the lowest 150 sentiment scores” “a sentiment score indicating a positive sentiment corresponding to the topic or a negative sentiment corresponding to the topic.”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong with the machine learning based summarizations of Norton in order to improve the “efficiency and flexibility” of the system’s textual responses. (Norton – Para. 41) In regards to claim 7, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 1. Dong in view of Norton does not explicitly disclose, however Rahman, in the same field of endeavor, discloses wherein the processor is configured to read the instructions to execute the self-critique module and, in response, adjust the item object summary. (Para. 57) (“At step 514, natural language application 120 determines if the hallucination score is above a threshold At step 514, natural language application 120 determines if the hallucination score is above a threshold.). … if the hallucination score is above the threshold, then method 400 continues to step 516, where natural language application 120 prompts the language model to modify the potentially hallucinatory portions of the response to better match the corresponding relevant portions of context, thereby generating a new response.”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong in view of Norton with the LLM corrections of Rahman in order to improve the accuracy and completeness of model outputs (Rahman – Para. 8) In regards to claim 9, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 1. Dong discloses wherein the processor is configured to read the instructions to display the user interface on a landing page associated with the item object that a user is viewing. (Para. 8) (“The commodity review may be accessible to the consumer by means of a website interface (i.e. to display the user interface on a landing page associated with the item object that a user is viewing).”) In regards to claim 10, Dong discloses a computer implemented method, comprising: receiving review data associated with an item object in a database, wherein the review data includes one or more review features including one or more target keywords; (Para. 6, 179, 183) (“accessing one or more commodity reviews; extracting one or more feature indicators (i.e. one or more target keywords) from the one or more commodity reviews, each feature indicator being associated with a feature of a commodity; extracting one or more sentiment indicators (i.e. one or more target keywords) from the one or more commodity reviews” “Any suitable computing device can be used to implement the consumer computing device or the components of the provider's computer system. … By processing instructions stored on one or more storage devices, the processor may perform the steps of the present method.” “consumer-generated, opinionated product reviews are stored in a database on a provider's server.”) Dong discloses selecting a subset of the review data based on a first selection criteria configured to select a first subset of the review data; (Para. 
58-59) (“The invention mines product experiences to implement a practical technique for turning user-generated product reviews into rich, feature-based, experiential product cases. The features of these cases relate to topics that are discussed by reviewers and their aggregate opinions. The 3-step approach is summarised in FIG. 5 for a given product, P (i.e. a first selection criteria configured to select a first subset of the review data): (1) use shallow NLP techniques to extract a set of candidate features from Reviews(P), the reviews of P (i.e. selecting a subset of the review data based on a first selection criteria configured to select a first subset of the review data);”) Dong discloses generating a target keyword score for each respective target keyword of the one or more target keywords based on a polarity of the target keywords; (Para. 60, 66) (“(2) each feature, Fi, (i.e. target keyword) is associated with a sentiment label (positive, negative, or neutral) (i.e. a polarity of the target keywords) based on the opinion expressed in review, Rk, for P” “The invention calculates how frequently each feature co-occurs with a sentiment word (i.e. a polarity of the target keyword) in the same sentence (i.e. generate a target keyword score for each respective target keyword of the one or more target keywords based on a polarity of the target keywords), and retains a single-noun only if its frequency is greater than some fixed threshold”) Dong discloses selecting a subset of the target keywords each having a respective target keyword score based on the target keyword scores; (Para. 58-59, 63-66, 91, 106, 141) (“The invention mines product experiences to implement a practical technique for turning user-generated product reviews into rich, feature-based, experiential product cases. The features of these cases relate to topics that are discussed by reviewers and their aggregate opinions. The 3-step approach is summarised in FIG. 5 for a given product, P: (1) use shallow NLP techniques to extract a set of candidate features from Reviews(P), the reviews of P;” “Considering two basic types of features--bi-gram features and single-noun features--and the invention uses a combination of shallow NLP and statistical methods to mine them. … For single-noun features the invention also extracts a candidate set (i.e. select a subset of the target keywords), this time nouns, from the reviews but validates them by eliminating nouns that are rarely associated with sentiment words. The reason is that such nouns are unlikely to refer to product features.” “Again, based on the representation of products as sets of mined features and associated sentiment, it is possible to use this information to create product summaries. For example, the advantages and disadvantages of a particular product can be highlighted (i.e. those features which are associated with mainly positive and negative sentiment in reviews (i.e. based on the target keyword scores)).” “The sentiment of a particular feature is indicated by colour, from red (strong negative sentiment) to green (strong positive sentiment)” “where Pos(Fi; P), Neg(Fi; P), and Neut(Fi; P) denote the number of times that feature Fi has positive, negative and neutral sentiment in the reviews for product P, respectively). Thus, each product can be represented as a product case, Case(P) (i.e. 
the target keyword scores), which aggregates product features, popularity and sentiment data as in Equation 3 below.”) Dong discloses generating one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords; (Para. 70, 89) (“The invention uses the feature-based product representations (i.e. generate one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords) to implement a content-based approach” “Given the representation of products as sets of features and associated sentiment (i.e. generating one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords) which are mined from product reviews, it is possible to leverage this information to provide explanations to users as to why particular products are being recommended. For example, an explanation for the top-recommended product may be: ‘this product is recommended as it has superior lens quality and battery life etc. (i.e. generating one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords)”) Dong discloses generating an item object summary based on the one or more target keyword summaries using a summarization model; (Para. 88, 91) (“The invention provides a system (i.e. a summarization model) that generates a ‘meta-review’ which corrals and parses multiple customer/user product reviews to create a unified review (i.e. generate an item object summary based on the one or more target keyword summaries using a summarization model). This meta-review gives the reader the consensus opinion from all the collated reviews about various product features discussed in reviews. The invention also explains the various choices that were made when the meta-review was being created.” “based on the representation of products as sets of mined features and associated sentiment, it is possible to use this information to create product summaries (i.e. generate an item object summary based on the one or more target keyword summaries).”) Dong discloses generating a set of instructions to cause the item object summary to be displayed on a user interface. (Para. 8, 96) (“The invention provides a recommendation to the consumer for the best or most suitable commodity appropriate to the needs of the consumer. The commodity review may be accessible to the consumer by means of a website interface. The commodity review stores the previous experiences of other consumers relating to the same or a similar commodity. By extracting the sentiment indicators and evaluating these sentiment indicators, this arrangement leverages the previous experiences of other consumers to provide a more nuanced and sophisticated recommendation to the consumer.” “After the commodity recommendation has been formed in step 7, the computer system delivers 8 the commodity recommendation for the second commodity C to the user using the website interface.”) As discussed above, Dong discloses selecting a subset of the target keywords each having a respective target keyword score based on the target keyword scores. Dong does not explicitly disclose, however Norton, in the same field of endeavor, discloses selecting a subset of the target keywords each having a respective target keyword score of Dong based on a predetermined range of the target keyword scores; (Para. 103, 108, 187-189) (“Sentiment-score thresholds may be relative or fixed with respect to sentiment scores. 
For example, a positive-sentiment-score threshold (i.e. select a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores) may include a certain number or percentage of highest sentiment scores (e.g., extracted sentences corresponding to the highest 150 sentiment scores). A negative-sentiment-score threshold (i.e. selecting a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores) may include a certain number or percentage of lowest sentiment scores (e.g., extracted sentences corresponding to the lowest 150 sentiment scores). … a positive-sentiment-score threshold, a negative-sentiment-score threshold, … may include sentiment scores above a threshold sentiment score (e.g., above +5), below a threshold sentiment score (e.g., above −5) (i.e. selecting a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores)” “determining the sentiment indicator for each extracted sentence from the set of extracted sentences by determining, for each extracted sentence, a sentiment score indicating a positive sentiment corresponding to the topic or a negative sentiment corresponding to the topic (i.e. a predetermined range of the target keyword scores). … further include providing the response summary for the set of textual responses (i.e. selecting a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores) … the acts 800 further include generating the response summary for the set of textual responses” That is, in order to create the textual responses, the invention must select a subset of topics (i.e. keywords) based on their scores (i.e. target keyword score of Dong) to be included in the modeling of the textual response.) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong with the machine learning based summarizations of Norton in order to improve the “efficiency and flexibility” of the system’s textual responses (Norton – Para. 41) Dong in view of Norton does not explicitly disclose, however Rahman, in the same field of endeavor, discloses iteratively modifying the item object summary of Dong using a self-critique module based on at least one or more characteristics of the item object summary, wherein:the self-critique module includes one or more large language models configured to evaluate the at least one or more characteristics of the item object summary of Dong and adjust the item object summary of Dong in accordance with a determination that the at least one or more characteristics of the item object summary of Dong does not meet a first criteria; and (Para. 24, 47-48, 54-57) (“natural language application 120 can identify relevant context information from the source documents, prompt the language model to generate a response using the context information (i.e. the item object summary of Dong), and progressively correct the response” “At step 408, natural language application 120 prompts the language model to generate a response (i.e. the item object summary of Dong) using the prompt generated at step 406. At step 410, natural language application (i.e. self critique module) 120 detects and corrects an incorrect and/or incomplete response (i.e. a first criteria). 
In some embodiments, natural language application 120 detects conflicts (hallucinations) between one or more portions of the response and one or more portions of the context, and natural language application 120 corrects the hallucinations in the response, as discussed in greater detail below in conjunction with FIGS. 5-6. In some embodiments, natural language application 120 also checks whether each portion of the context relevant to the user request is within the response to identify any portions of context that are not within the response, indicating that the response is incomplete. In such cases, natural application 120 can prompt the language model to add the portion(s) of context that are relevant to the user request to the response.” “At step 510, natural language application 120 uses a textual entailment model (i.e. a self-critique module based on at least one or more characteristics of the item object summary) to compute an entailment score for each portion of the response and each augmented portion of the response based on the corresponding relevant portions of the context determined at step 506 (i.e. evaluate the at least one or more characteristics of the item object summary and adjust the item object summary in accordance with a determination that the at least one or more characteristics of the item object summary does not meet a first criteria). if a negated portion in the response is associated with a higher entailment score than the corresponding portion of the response is, then the response of the language model 204 conflicts with the portion of the context and is potentially hallucinatory, and vice versa if the corresponding portion of the response is associated with a higher entailment score than the negated portion of the response, which can be reflected in the entailment scores that are computed for the portion of the response and the negated portion of the response. At step 512, natural language application 120 computes a hallucination score for the response using the computed entailment scores for each portion of the response. … At step 514, natural language application 120 determines if the hallucination score is above a threshold At step 514, natural language application 120 determines if the hallucination score is above a threshold.). … if the hallucination score is above the threshold, then method 400 continues to step 516, where natural language application 120 prompts the language model (i.e. the self-critique module includes one or more large language models) to modify the potentially hallucinatory portions of the response to better match the corresponding relevant portions of context, thereby generating a new response. Method 400 then returns to step 502 (i.e. iteratively modifying the item object summary using a self-critique module)”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong in view of Norton with the LLM corrections of Rahman in order to improve the accuracy and completeness of model outputs (Rahman – Para. 8) In regards to claim 11, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 10. The remainder of the limitations of this claim are rejected using the same rationale as claim 2. In regards to claim 12, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 11. 
The remainder of the limitations of this claim are rejected using the same rationale as claim 3. In regards to claim 13, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 10. The remainder of the limitations of this claim are rejected using the same rationale as claim 4. In regards to claim 16, Dong discloses a non-transitory computer-readable storage medium comprising executable instructions that, when executed by one or more processors of a computing device, cause the one or more processors to: receive review data associated with an item object in a database, wherein the review data includes one or more review features including one or more target keywords; (Paras. 6, 179, 183) (“accessing one or more commodity reviews; extracting one or more feature indicators (i.e. one or more target keywords) from the one or more commodity reviews, each feature indicator being associated with a feature of a commodity; extracting one or more sentiment indicators (i.e. one or more target keywords) from the one or more commodity reviews” “Any suitable computing device can be used to implement the consumer computing device or the components of the provider's computer system. … computing devices may include control circuitry, input/output ("I/O") circuitry, and a processor that communicates with a number of peripheral subsystems via a bus subsystem. These peripheral subsystems may include a storage subsystem, a memory subsystem, a user-interface subsystem, a user-interface output subsystem, and a network-interface subsystem. By processing instructions stored on one or more storage devices, the processor may perform the steps of the present method … storage device may be used, including an optical storage device, a magnetic storage device, or a solid-state storage device” “consumer-generated, opinionated product reviews are stored in a database on a provider's server.”) Dong discloses select a subset of the review data based on a first selection criteria configured to select a first subset of the review data; (Para. 58-59) (“The invention mines product experiences to implement a practical technique for turning user-generated product reviews into rich, feature-based, experiential product cases. The features of these cases relate to topics that are discussed by reviewers and their aggregate opinions. The 3-step approach is summarised in FIG. 5 for a given product, P (i.e. a first selection criteria configured to select a first subset of the review data): (1) use shallow NLP techniques to extract a set of candidate features from Reviews(P), the reviews of P (i.e. select a subset of the review data based on a first selection criteria configured to select a first subset of the review data);”) Dong discloses generate a target keyword score for each respective target keyword of the one or more target keywords based on a polarity of the target keywords; (Para. 60, 66) (“(2) each feature, Fi, (i.e. target keyword) is associated with a sentiment label (positive, negative, or neutral) (i.e. a polarity of the target keywords) based on the opinion expressed in review, Rk, for P” “The invention calculates how frequently each feature co-occurs with a sentiment word (i.e. a polarity of the target keyword) in the same sentence (i.e. 
generate a target keyword score for each respective target keyword of the one or more target keywords based on a polarity of the target keywords), and retains a single-noun only if its frequency is greater than some fixed threshold”) Dong discloses select a subset of the target keywords each having a respective target keyword score based on the target keyword scores; (Para. 58-59, 63-66, 91, 106, 141) (“The invention mines product experiences to implement a practical technique for turning user-generated product reviews into rich, feature-based, experiential product cases. The features of these cases relate to topics that are discussed by reviewers and their aggregate opinions. The 3-step approach is summarised in FIG. 5 for a given product, P: (1) use shallow NLP techniques to extract a set of candidate features from Reviews(P), the reviews of P;” “Considering two basic types of features--bi-gram features and single-noun features--and the invention uses a combination of shallow NLP and statistical methods to mine them. … For single-noun features the invention also extracts a candidate set (i.e. select a subset of the target keywords), this time nouns, from the reviews but validates them by eliminating nouns that are rarely associated with sentiment words. The reason is that such nouns are unlikely to refer to product features.” “Again, based on the representation of products as sets of mined features and associated sentiment, it is possible to use this information to create product summaries. For example, the advantages and disadvantages of a particular product can be highlighted (i.e. those features which are associated with mainly positive and negative sentiment in reviews (i.e. based on the target keyword scores)).” “The sentiment of a particular feature is indicated by colour, from red (strong negative sentiment) to green (strong positive sentiment)” “where Pos(Fi; P), Neg(Fi; P), and Neut(Fi; P) denote the number of times that feature Fi has positive, negative and neutral sentiment in the reviews for product P, respectively). Thus, each product can be represented as a product case, Case(P) (i.e. the target keyword scores), which aggregates product features, popularity and sentiment data as in Equation 3 below.”) Dong discloses generate one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords; (Para. 70, 89) (“The invention uses the feature-based product representations (i.e. generate one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords) to implement a content-based approach” “Given the representation of products as sets of features and associated sentiment (i.e. generate one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords) which are mined from product reviews, it is possible to leverage this information to provide explanations to users as to why particular products are being recommended. For example, an explanation for the top-recommended product may be: ‘this product is recommended as it has superior lens quality and battery life etc. (i.e. generate one or more target keyword summaries for each respective target keyword of the selected subset of the target keywords)”) Dong discloses generate an item object summary based on the one or more target keyword summaries using a summarization model; (Para. 88, 91) (“The invention provides a system (i.e. 
a summarization model) that generates a ‘meta-review’ which corrals and parses multiple customer/user product reviews to create a unified review (i.e. generate an item object summary based on the one or more target keyword summaries using a summarization model). This meta-review gives the reader the consensus opinion from all the collated reviews about various product features discussed in reviews. The invention also explains the various choices that were made when the meta-review was being created.” “based on the representation of products as sets of mined features and associated sentiment, it is possible to use this information to create product summaries (i.e. generate an item object summary based on the one or more target keyword summaries).”) Dong discloses generate a set of instructions to cause the item object summary to be displayed on a user interface. (Para. 8, 96) (“The invention provides a recommendation to the consumer for the best or most suitable commodity appropriate to the needs of the consumer. The commodity review may be accessible to the consumer by means of a website interface. The commodity review stores the previous experiences of other consumers relating to the same or a similar commodity. By extracting the sentiment indicators and evaluating these sentiment indicators, this arrangement leverages the previous experiences of other consumers to provide a more nuanced and sophisticated recommendation to the consumer.” “After the commodity recommendation has been formed in step 7, the computer system delivers 8 the commodity recommendation for the second commodity C to the user using the website interface.”) As discussed above, Dong discloses select a subset of the target keywords each having a respective target keyword score based on the target keyword scores. Dong does not explicitly disclose, however Norton, in the same field of endeavor, discloses select a subset of the target keywords each having a respective target keyword score of Dong based on a predetermined range of the target keyword scores; (Para. 103, 108, 187-189) (“Sentiment-score thresholds may be relative or fixed with respect to sentiment scores. For example, a positive-sentiment-score threshold (i.e. select a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores) may include a certain number or percentage of highest sentiment scores (e.g., extracted sentences corresponding to the highest 150 sentiment scores). A negative-sentiment-score threshold (i.e. select a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores) may include a certain number or percentage of lowest sentiment scores (e.g., extracted sentences corresponding to the lowest 150 sentiment scores). … a positive-sentiment-score threshold, a negative-sentiment-score threshold, … may include sentiment scores above a threshold sentiment score (e.g., above +5), below a threshold sentiment score (e.g., above −5) (i.e. select a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores)” “determining the sentiment indicator for each extracted sentence from the set of extracted sentences by determining, for each extracted sentence, a sentiment score indicating a positive sentiment corresponding to the topic or a negative sentiment corresponding to the topic (i.e. 
a predetermined range of the target keyword scores). … further include providing the response summary for the set of textual responses (i.e. select a subset of the target keywords each having a respective target keyword score based on a predetermined range of the target keyword scores) … the acts 800 further include generating the response summary for the set of textual responses” That is, in order to create the textual responses, the invention must select a subset of topics (i.e. keywords) based on their scores (i.e. target keyword score) to be included in the modeling of the textual response.) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong with the machine learning based summarizations of Norton in order to improve the “efficiency and flexibility” of the system’s textual responses (Norton – Para. 41) Dong in view of Norton does not explicitly disclose, however Rahman, in the same field of endeavor, discloses iteratively modify the item object summary of Dong using a self-critique module based on at least one or more characteristics of the item object summary of Dong, wherein: the self-critique module includes one or more large language models configured to evaluate the at least one or more characteristics of the item object summary of Dong and adjust the item object summary in accordance with a determination that the at least one or more characteristics of the item object summary of Dong does not meet a first criteria; and (Para. 24, 47-48, 54-57) (“natural language application 120 can identify relevant context information from the source documents, prompt the language model to generate a response using the context information (i.e. the item object summary of Dong), and progressively correct the response” “At step 408, natural language application 120 prompts the language model to generate a response (i.e. the item object summary of Dong) using the prompt generated at step 406. At step 410, natural language application (i.e. self critique module) 120 detects and corrects an incorrect and/or incomplete response (i.e. a first criteria). In some embodiments, natural language application 120 detects conflicts (hallucinations) between one or more portions of the response and one or more portions of the context, and natural language application 120 corrects the hallucinations in the response, as discussed in greater detail below in conjunction with FIGS. 5-6. In some embodiments, natural language application 120 also checks whether each portion of the context relevant to the user request is within the response to identify any portions of context that are not within the response, indicating that the response is incomplete. In such cases, natural application 120 can prompt the language model to add the portion(s) of context that are relevant to the user request to the response.” “At step 510, natural language application 120 uses a textual entailment model (i.e. a self-critique module based on at least one or more characteristics of the item object summary) to compute an entailment score for each portion of the response and each augmented portion of the response based on the corresponding relevant portions of the context determined at step 506 (i.e. 
evaluate the at least one or more characteristics of the item object summary and adjust the item object summary in accordance with a determination that the at least one or more characteristics of the item object summary does not meet a first criteria). If a negated portion in the response is associated with a higher entailment score than the corresponding portion of the response is, then the response of the language model 204 conflicts with the portion of the context and is potentially hallucinatory, and vice versa if the corresponding portion of the response is associated with a higher entailment score than the negated portion of the response, which can be reflected in the entailment scores that are computed for the portion of the response and the negated portion of the response. At step 512, natural language application 120 computes a hallucination score for the response using the computed entailment scores for each portion of the response. … At step 514, natural language application 120 determines if the hallucination score is above a threshold. … if the hallucination score is above the threshold, then method 400 continues to step 516, where natural language application 120 prompts the language model (i.e. the self-critique module includes one or more large language models) to modify the potentially hallucinatory portions of the response to better match the corresponding relevant portions of context, thereby generating a new response. Method 400 then returns to step 502 (i.e. iteratively modify the item object summary using a self-critique module)”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong in view of Norton with the LLM corrections of Rahman in order to improve the accuracy and completeness of model outputs (Rahman – Para. 8) In regards to claim 17, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 16. The remainder of the limitations of this claim are rejected using the same rationale as claim 2. In regards to claim 18, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 17. The remainder of the limitations of this claim are rejected using the same rationale as claim 3. In regards to claim 19, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 16. The remainder of the limitations of this claim are rejected using the same rationale as claim 4. Claims 5, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Dong in view of Norton and further in view of Rahman and even further in view of Liu (US 20250371061) (hereafter Liu). In regards to claim 5, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 1. Dong in view of Norton and further in view of Rahman does not explicitly disclose, however Liu, in the same field of endeavor, discloses wherein the processor is configured to read the instructions to input each respective target keyword of the selected subset of the target keywords to a large language model to generate the one or more target keyword summaries for each respective target keyword. (Para. 
20) (“an extractive method is applied to identify the most important and representative sentences from a text corpus and, in a summary generation operation, examples described herein utilize a LLM (e.g., abstractive summarization) to generate a topic summary based on filtered outputs from a previous extractive operation.”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong in view of Norton and further in view of Rahman with the summarizations of Liu in order to provide efficient and effective summarization. (Liu – Para. 18) In regards to claim 14, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 10. The remainder of the limitations of this claim are rejected using the same rationale as claim 5. In regards to claim 20, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 16. The remainder of the limitations of this claim are rejected using the same rationale as claim 5. Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Dong in view of Norton and further in view of Rahman and even further in view of Hakim11 (“Grammar Error Correction Using LLM (Large Language Models)”; April 16, 2024) (hereafter Hakim11). In regards to claim 6, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 1. Dong in view of Norton does not explicitly disclose, however Rahman, in the same field of endeavor, discloses wherein the processor is configured to: read the instructions to determine, based on the use of the self-critique module, that the at least one or more characteristics of the item object summary do not meet the first criteria, wherein the first criteria includes at least one error and adjust the item object summary to correct the at least one error (Para. 24, 47-48) (“natural language application 120 can identify relevant context information from the source documents, prompt the language model to generate a response using the context information (i.e. the item object summary of Dong), and progressively correct the response” “At step 408, natural language application 120 prompts the language model to generate a response (i.e. the item object summary of Dong) using the prompt generated at step 406. At step 410, natural language application (i.e. self critique module) 120 detects and corrects an incorrect and/or incomplete response (i.e. a first criteria includes an error). In some embodiments, natural language application 120 detects conflicts (hallucinations) between one or more portions of the response and one or more portions of the context, and natural language application 120 corrects the hallucinations in the response (i.e. adjust the item object summary to correct the at least one error)”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong in view of Norton with the LLM corrections of Rahman in order to improve the accuracy and completeness of model outputs (Rahman – Para. 8) As discussed above, Dong in view of Norton and further in view of Rahman discloses at least one error. Dong in view of Norton and further in view of Rahman does not explicitly disclose, however Hakim11, in the same field of endeavor, discloses the at least one error of Rahman is incorrect grammar, incorrect sentence structure (Pg. 
1) (“Grammar error correction (GEC) refers to the process of detecting and correcting grammatical errors in written text. This includes a wide range of mistakes (i.e. the at least one error of Rahman), such as verb tense errors (i.e. incorrect grammar), subject-verb agreement (i.e. incorrect grammar), incorrect word order (i.e. incorrect sentence structure), preposition mistakes (i.e. incorrect grammar), and punctuation errors (i.e. incorrect grammar).”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong in view of Norton and further in view of Rahman with the Grammar Error Correction of Hakim11 in order to improve the “readability, understanding, and professionalism of written communication.” (Hakim11 – Pg. 1) In regards to claim 15, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 10. The remainder of the limitations of this claim are rejected using the same rationale as claim 6. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Dong in view of Norton and further in view of Rahman and even further in view of Rolland (US 12393597) (hereafter Rolland). In regards to claim 8, Dong in view of Norton and further in view of Rahman discloses the limitations of claim 7. Dong in view of Norton does not explicitly disclose, however Rahman, in the same field of endeavor, discloses a training data set; (Para. 3) (“an LLM is implemented as a neural network that includes a large number (e.g., billions) of parameters and is trained on a large quantity of text data (i.e. a training data set).”) Dong in view of Norton does not explicitly disclose, however Rahman, in the same field of endeavor, discloses one or more parameters characterize the self-critique module. (Para. 3, 26) (“an LLM is implemented as a neural network that includes a large number (e.g., billions) of parameters (i.e. one or more parameters characterize the self-critique module) and is trained on a large quantity of text data.” “Language model 204 is a machine learning model trained to perform one or more natural language processing tasks (i.e. one or more parameters characterize the self-critique module)”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong in view of Norton with the LLM corrections of Rahman in order to improve the accuracy and completeness of model outputs (Rahman – Para. 8) Dong in view of Norton and further in view of Rahman does not explicitly disclose, however Rolland, in the same field of endeavor, discloses wherein the processor is configured to read the instructions to: receive a training data set of Rahman; input the training data set of Rahman into an untrained self-critique module and, in response, adjust one or more parameters of the untrained self-critique module; determine that the untrained self-critique module is trained based on the adjusted one or more parameters; and store the adjusted one or more parameters in a database, wherein the adjusted one or more parameters characterize the self-critique module of Rahman. (Col. 9, Ln. 4-29; Col. 10, Ln. 18-31; Col. 15, Ln. 57-67) (“Training a ML model generally involves inputting into an ML model (e.g. 
an untrained ML model) training data to be processed by the ML model, processing the training data using the ML model, collecting the output generated by the ML model (e.g. based on the inputted training data), and comparing the output to a desired set of target values. … if the value outputted by the ML model is excessively high, the parameters may be adjusted so as to lower the output value in future training iterations. An objective function is a way to quantitatively represent how close the output value is to the target value.” “ a trained ML model (i.e. the self-critique module of Rahman) may be fine-tuned, meaning that the values of the learned parameters may be adjusted slightly in order for the ML model to better model a specific task. Fine-tuning of a ML model typically involves further training the ML model on a number of data samples (which may be smaller in number/cardinality than those used to train the model initially) that closely target the specific task.” “The storage unit 214 may store data, for example, embeddings databases (i.e. store the adjusted one or more parameters in a database, wherein the adjusted one or more parameters characterize the self-critique module of Rahman) 250, among other data.”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the summarized reviews of Dong in view of Norton and further in view of Rahman with the model training of Rolland in order to improve accuracy and relevance of model outputs. (Rolland – Col. 1, Ln. 23-26) Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Piskala – US 2025/0095641 – discussing the use of machine learning and LLMs for text generation and corrections. Lok – US 2025/0166026 - discussing the use of machine learning and LLMs for text generation and corrections. Chen – US 10,373,067 - discussing the use of machine learning and LLMs for text generation and corrections. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID G GODBOLD whose telephone number is (571)272-5036. The examiner can normally be reached M-F 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shannon S Campbell can be reached at 571-272-5587. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DAVID G. GODBOLD/Examiner, Art Unit 3628
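As mapped above, the Dong and Norton combination amounts to a polarity-count score for each extracted feature keyword followed by a threshold (predetermined range) filter. The sketch below is a minimal illustration of that pipeline only; the names (Review, score_keywords, select_keywords) and the exact scoring formula are assumptions, not the disclosure of either reference.

```python
# Illustrative sketch only: aggregate per-keyword sentiment counts from reviews
# and keep keywords whose scores fall inside a predetermined range, in the
# spirit of Dong's feature/sentiment aggregation and Norton's score thresholds.
# All names and the scoring formula are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    # (keyword, polarity) pairs already extracted upstream; polarity is
    # "positive", "negative", or "neutral"
    keyword_polarities: list

def score_keywords(reviews):
    """Return a score per target keyword based on the polarity of its mentions."""
    counts = defaultdict(lambda: {"positive": 0, "negative": 0, "neutral": 0})
    for review in reviews:
        for keyword, polarity in review.keyword_polarities:
            counts[keyword][polarity] += 1
    scores = {}
    for keyword, c in counts.items():
        total = c["positive"] + c["negative"] + c["neutral"]
        # Simple polarity balance in [-1, 1]; the exact formula is an assumption.
        scores[keyword] = (c["positive"] - c["negative"]) / total if total else 0.0
    return scores

def select_keywords(scores, low=-1.0, high=-0.2):
    """Keep only keywords whose score falls inside a predetermined range."""
    return {k: s for k, s in scores.items() if low <= s <= high}

if __name__ == "__main__":
    reviews = [
        Review("Battery dies fast but screen is great",
               [("battery life", "negative"), ("screen", "positive")]),
        Review("Terrible battery life", [("battery life", "negative")]),
        Review("Screen looks sharp", [("screen", "positive")]),
    ]
    scores = score_keywords(reviews)
    print(scores)                   # e.g. {'battery life': -1.0, 'screen': 1.0}
    print(select_keywords(scores))  # keywords with mostly negative sentiment
```

The negative default range simply demonstrates picking out low-scoring keywords; a positive range, or Norton-style top-N cutoffs, would work the same way.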
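Rahman is relied on for the claimed self-critique loop: generate a summary, evaluate one or more of its characteristics against a first criteria, and prompt the model to adjust it until the criteria are met. The following is a minimal sketch of such a loop under stated assumptions; the llm callable and the evaluation checks are hypothetical stand-ins, not Rahman's entailment-based method or any real API.

```python
# Illustrative sketch only: an iterative "generate, evaluate, adjust" loop of the
# kind the claims and the cited correction workflow describe. The llm callable
# and evaluate_summary checks are hypothetical placeholders.
from typing import Callable

def evaluate_summary(summary: str, source_text: str) -> list:
    """Return a list of problems found; an empty list means the criteria are met.
    A real system might use an entailment model or an LLM critique here."""
    problems = []
    if len(summary.split()) > 60:
        problems.append("summary is too long")
    if not any(word in source_text.lower() for word in summary.lower().split()):
        problems.append("summary appears unsupported by the source reviews")
    return problems

def refine_summary(source_text: str,
                   llm: Callable[[str], str],
                   max_rounds: int = 3) -> str:
    summary = llm(f"Summarize these reviews:\n{source_text}")
    for _ in range(max_rounds):
        problems = evaluate_summary(summary, source_text)
        if not problems:          # first criteria met, stop iterating
            break
        # Ask the model to adjust the summary in light of the identified problems.
        summary = llm(
            f"Revise this summary to fix the following issues: {problems}\n"
            f"Summary: {summary}\nSource reviews: {source_text}"
        )
    return summary

if __name__ == "__main__":
    # Stub "LLM" so the sketch runs without any external service.
    fake_llm = lambda prompt: "Reviewers like the screen but report short battery life."
    print(refine_summary("screen great. battery short.", fake_llm))
```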

Prosecution Timeline

Jun 09, 2025
Application Filed
Mar 09, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530653
APPARATUS, METHOD, AND SYSTEM FOR GENERATING TRANSPORT VEHICLE DRIVING PLANS
2y 5m to grant · Granted Jan 20, 2026
Patent 12488304
DELIVERY SCHEDULING ADJUSTMENT OF A REPLACEMENT DEVICE BASED ON NETWORK BACKUP OF EXCHANGED DEVICE
2y 5m to grant · Granted Dec 02, 2025
Patent 12474175
SYSTEM AND METHOD FOR OPTIMIZING DELIVERY ROUTE BASED ON MOBILE DEVICE ANALYTICS
2y 5m to grant · Granted Nov 18, 2025
Patent 12475431
Secure Pharmaceuticals Delivery Container and Service
2y 5m to grant · Granted Nov 18, 2025
Patent 12462615
SYSTEM AND METHOD FOR COMPUTER VISION ASSISTED PARKING SPACE MONITORING WITH AUTOMATIC FEE PAYMENTS
2y 5m to grant · Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
22%
Grant Probability
55%
With Interview (+33.3%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 82 resolved cases by this examiner. Grant probability derived from career allow rate.
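The headline figures above are simple ratios over this examiner's resolved cases. One way they could be reproduced, purely as an illustration: the sketch below assumes the 18 granted / 82 resolved counts shown on this page and a made-up split of cases with and without an interview, since the page does not show that split.

```python
# Illustrative sketch only: how ratios like the career allow rate and an
# "interview lift" could be computed. The with/without-interview counts are
# hypothetical; only 18 granted of 82 resolved appears on the page.
def pct(numerator: int, denominator: int) -> float:
    return round(100 * numerator / denominator, 1) if denominator else 0.0

resolved, granted = 82, 18
career_allow_rate = pct(granted, resolved)  # roughly 22.0

# Hypothetical split, for illustration of the lift calculation only.
with_interview = {"granted": 11, "resolved": 20}
without_interview = {"granted": 7, "resolved": 62}

rate_with = pct(with_interview["granted"], with_interview["resolved"])
rate_without = pct(without_interview["granted"], without_interview["resolved"])
interview_lift = round(rate_with - rate_without, 1)  # percentage points

print(career_allow_rate, rate_with, rate_without, interview_lift)
```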
