Prosecution Insights
Last updated: April 19, 2026
Application No. 18/613,923

MACHINE LEARNING TECHNIQUES FOR QUESTION RESOLUTION

Final Rejection (§101, §103)
Filed: Mar 22, 2024
Examiner: ROSTAMI, MOHAMMAD S
Art Unit: 2154
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Optum Inc.
OA Round: 4 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 5-6
To Grant: 3y 10m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 67%, above average (425 granted / 635 resolved; +11.9% vs TC avg)
Interview Lift: +26.3% for resolved cases with an interview (strong)
Typical Timeline: 3y 10m avg prosecution; 37 currently pending
Career History: 672 total applications across all art units

Statute-Specific Performance

§101: 21.3% (-18.7% vs TC avg)
§103: 54.9% (+14.9% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Tech Center averages are estimates; based on career data from 635 resolved cases.
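As a sanity check on the table above, each "vs TC avg" delta is simply the examiner's per-statute allow rate minus the Tech Center average. The TC averages below are back-computed from the displayed figures, not official USPTO numbers:

```python
# Per-statute allow rates from the table above (percent).
examiner_rate = {"101": 21.3, "103": 54.9, "102": 9.7, "112": 4.4}
# Assumed Tech Center averages, back-computed from the displayed deltas.
tc_average = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

# Delta = examiner rate minus Tech Center average, rounded to one decimal.
deltas = {s: round(examiner_rate[s] - tc_average[s], 1) for s in examiner_rate}
print(deltas)  # {'101': -18.7, '103': 14.9, '102': -30.3, '112': -35.6}
```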

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-11 and 13-20 are pending, of which claims 1, 13, and 20 are in independent form. Claims 1-11 and 13-20 are rejected under 35 U.S.C. 101 (abstract idea). Claims 1-11 and 13-20 are rejected under 35 U.S.C. 103.

Response to Arguments

Applicant's arguments with respect to claims 1-11 and 13-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Regarding applicant's remarks on the 35 U.S.C. 101 (abstract idea) rejection, the examiner specifies that the newly added amendments do not overcome the rejection.

With respect to Step 2A, Prong One, the claims recite: receiving an evidence package; generating predictions using a model; aggregating predictions; selecting passages; routing passages based on answer type; applying sub-classification models; and producing a response. In plain language, this is collecting information, analyzing it with algorithms/ML models, organizing it, and outputting an answer. These steps fall into recognized abstract idea groupings: mental processes/mathematical concepts (generating predictions; weighted aggregate predictions; determining steps; routing based on answer types, i.e., algorithmic decision making) and organizing and analyzing information (receiving passages; selecting passages; classifying by answer type; producing a response, i.e., classic information processing). Nothing in the claims requires a new ML model, a new hardware structure, an improved communication technique, or any other specific technological improvement. The claims recite an abstract idea: analyzing, classifying, and processing information using mathematical/ML models.
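For orientation, the pipeline the examiner summarizes (ensemble predictions, weighted aggregation, passage selection, answer-type routing, sub-classification) can be sketched as follows. This is an illustrative reconstruction only; every name is hypothetical and nothing here is the applicant's actual implementation:

```python
# Illustrative-only sketch of the claimed question-resolution pipeline.
from typing import Callable

Scorer = Callable[[str, str], float]        # (question, passage) -> score
SubModel = Callable[[str, list[str]], str]  # (question, passages) -> answer

def resolve_question(question: str, answer_type: str, passages: list[str],
                     ensemble: list[Scorer], weights: list[float],
                     sub_models: dict[str, SubModel]) -> str:
    scored = []
    for passage in passages:
        # One evidence prediction per ensemble member for this passage...
        preds = [model(question, passage) for model in ensemble]
        # ...combined into a weighted aggregate prediction.
        aggregate = sum(w * p for w, p in zip(weights, preds))
        scored.append((aggregate, passage))
    # Determine the set of input passages (here: the top-scoring half).
    scored.sort(reverse=True)
    selected = [p for _, p in scored[: max(1, len(scored) // 2)]]
    # Route to the sub-classification model for this answer type.
    return sub_models[answer_type](question, selected)

# Toy scorers and a toy free-form sub-model, purely for demonstration.
models = [lambda q, p: float(len(p)), lambda q, p: 1.0]
answer = resolve_question("Which passage answers?", "free-form",
                          ["short", "a much longer passage"],
                          models, [0.1, 0.9],
                          {"free-form": lambda q, sel: sel[0]})
print(answer)  # -> "a much longer passage"
```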
With respect to Step 2A, Prong Two: the claims recite generic computer components performing their routine functions. The claims do not recite any technical improvement to radio technology, synchronization servers, communication protocols, data compression, user interfaces, or device operation. The recited components perform their ordinary, expected functions, which is considered insufficient. Therefore the claims do not integrate the abstract idea into a practical application. Furthermore, the examiner specifies that "wherein the input question is associated with an answer type that is one of a multiple-choice answer type, large-limited-set answer type, or a free-form answer type" is considered insignificant extra-solution activity.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-11 and 13-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims recite question resolution using machine learning.

With respect to Step 1 of the patent subject matter eligibility analysis, the claims are directed to a process, machine, manufacture, or composition of matter. Independent claim 1 is directed to a method, which is a process. Independent claim 13 is directed to a computing system comprising one or more memory…and processor, which is one of the four statutory subject matter categories. Independent claim 20 is directed to a non-transitory computer-readable storage medium, which is also one of the four statutory categories. All other claims depend from claims 1, 13, and 20. As such, claims 1-11 and 13-20 are directed to a statutory category.
Regarding claims 1, 13, and 20: With respect to Step 2A, Prong One (Judicial Exception), the claims recite an abstract idea, law of nature, or natural phenomenon. Specifically, the following limitations recite mathematical concepts and/or mental processes and/or certain methods of organizing human activity. The claims recite: receiving an evidence package; generating predictions using a model; aggregating predictions; selecting passages; routing passages based on answer type; applying sub-classification models; and producing a response. In plain language, this is collecting information, analyzing it with algorithms/ML models, organizing it, and outputting an answer. These steps fall into recognized abstract idea groupings: mental processes/mathematical concepts (generating predictions; weighted aggregate predictions; determining steps; routing based on answer types, i.e., algorithmic decision making) and organizing and analyzing information (receiving passages; selecting passages; classifying by answer type; producing a response, i.e., classic information processing). The claims recite an abstract idea: analyzing, classifying, and processing information using mathematical/ML models. No step provides a technical improvement to the computing system itself (e.g., an improved caching algorithm, improved database indexing, improved memory efficiency, an improved cache eviction strategy, or an improved computing architecture). All the steps are generic and conventional. Thus, the claims recite an abstract idea (mental process, mathematical concept, and/or organizing and analyzing information).

With respect to Step 2A, Prong Two (Practical Application), the claims do not recite additional elements that integrate the judicial exception into a practical application. The following limitations are considered "additional elements," and an explanation is given below as to why they do not integrate the judicial exception into a practical application.
The additional elements recited are: one or more processors; a retrieval ensemble model; a machine learning aggregation model; and a sub-classification model. These components merely use generic/conventional computer components and ML models as tools to execute the abstract idea. The limitations fail to improve hardware (no new hardware, data structure, training technique, or inference technique; no improvements to memory structure, computer structure, network performance, etc.). There are also no improvements to computer functionality and no specific technical solution to a computer-centric problem (the claims merely automate tasks humans perform conceptually: generating questions and answers). The claims fail to provide a particular technological solution; the computer is merely used as a tool, which is at most an abstract improvement to information presentation, not a technical improvement. There is no recitation of a new data structure that changes computer operation, improved communication, an unconventional indexing/conversion technique, or a specific hardware solution. The claims also do not show how the models are technically improved, how computing is improved, how performance is enhanced at the system level, or how latency, storage, or bandwidth is reduced. The claims merely use generic ML/computing as a tool. Instead, the claims recite conventional and generic computer functions performed in a routine manner, which does not amount to a practical application.

With respect to Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitations are directed to a computer-readable storage medium, computer, memory, and processor, recited at a very high level of generality and without imposing meaningful limitations on the scope of the claims. Nothing in the claims provides unconventional, technically novel, non-routine, system-level improvements.
There is no special architecture, new algorithm, or technical constraint. Such generic, high-level, and nominal involvement of a computer or computer-based elements for carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent-eligible, as noted at pg. 74624 of Federal Register Vol. 79, No. 241, citing Alice, which in turn cites Mayo. See, e.g., Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2359-60, 110 USPQ2d 1976, 1984 (2014). See also OIP Techs. v. Amazon.com, 788 F.3d 1359, 1364, 115 USPQ2d 1090, 1093-94 (Fed. Cir. 2015) ("Just as Diehr could not save the claims in Alice, which were directed to 'implement[ing] the abstract idea of intermediated settlement on a generic computer', it cannot save OIP's claims directed to implementing the abstract idea of price optimization on a generic computer.") (citations omitted). See also Affinity Labs of Texas LLC v. DirecTV LLC, 838 F.3d 1253, 1257-1258 (Fed. Cir. 2016) (mere recitation of a GUI does not make a claim patent-eligible); Intellectual Ventures I LLC v. Capital One Bank, 792 F.3d 1363, 1370 (Fed. Cir. 2015) ("the interactive interface limitation is a generic computer element"). The additional elements are broadly applied to the abstract idea at a high level of generality ("similar to how the recitation of the computer in the claims in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer," as explained in MPEP § 2106.05(f)) and they operate in a well-understood, routine, and conventional manner. MPEP § 2106.05(d)(II) sets forth the following: The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity:
• Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec ...; TLI Communications LLC v. AV Auto. LLC ...; OIP Techs., Inc. v. Amazon.com, Inc. ...; buySAFE, Inc. v. Google, Inc. ...;
• Performing repetitive calculations, Flook ...; Bancorp Services v. Sun Life ...;
• Electronic recordkeeping, Alice Corp. ...; Ultramercial ...;
• Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc. ...;
• Electronically scanning or extracting data from a physical document, Content Extraction and Transmission, LLC v. Wells Fargo Bank ...; and
• A web browser's back and forward button functionality, Internet Patents Corp. v. Active Network, Inc. ....

Courts have held computer-implemented processes not to be significantly more than an abstract idea (and thus ineligible) where the claim as a whole amounts to nothing more than generic computer functions merely used to implement an abstract idea, such as an idea that could be done by a human analog (i.e., by hand or by merely thinking). In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide conventional computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application, nor does the ordered combination amount to significantly more than the abstract idea itself.
The dependent claims have been fully considered as well; however, similar to the findings for the claims above, these claims are similarly directed to the "Mental Processes" grouping of abstract ideas set forth in the 2019 PEG, without integrating it into a practical application and with, at most, a general purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claims) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea.

Regarding claims 2 and 14: The claims recite that the retrieval ensemble model comprises a plurality of classification models and a machine learning fusion model. Using multiple models and a fusion model is a routine ML technique; this merely combines known analytical tools to process information. It does not improve computer architecture, memory, or processing capabilities; it is simply analyzing and aggregating data. There is no improvement to computer functionality, data structures, or processing architecture, and the limitation does not change the nature of the abstract idea. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claims 3 and 15: The claims recite that the classification models include a term-based retrieval model and one or more large language models.
This merely recites types of known information retrieval and language processing models; the claims simply identify tools for performing data analysis. This does not improve how computers retrieve or process data; the claim is simply centered on evaluating and presenting information. There is no improvement to computer functionality, data structures, or processing architecture, and the limitation does not change the nature of the abstract idea. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claims 4 and 16: The claims recite that the ML fusion model is trained to generate the weighted aggregate predictions based on correspondence. Training a model to weight predictions is a mathematical/statistical algorithm; the claims simply represent algorithmic optimization of data analysis. This does not improve system performance at the hardware or architectural level; the claim is directed to abstract data processing. There is no improvement to computer functionality, data structures, or processing architecture, and the limitation does not change the nature of the abstract idea. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claims 5 and 17: The claims recite that the classification models and the ML fusion model are jointly trained using an annotated training set. Joint training using labeled data is conventional ML practice; the claims simply represent routine model development techniques. This does not improve computer functionality; the claim is directed to abstract learning processing. There is no improvement to computer functionality, data structures, or processing architecture.
This does not change the nature of the abstract idea or add a technical improvement. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claims 6 and 18: The claims recite generating temporal features for evidence predictions and using them to generate the response. This amounts to extracting and using temporal features in data analysis; the claims simply add another type of information to be processed. This does not improve storage, transmission, or computation mechanisms; the claim is directed to analyzing and presenting information. There is no improvement to computer functionality, data structures, or processing architecture, and the limitation does not change the nature of the abstract idea. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claims 7 and 19: The claims recite that the evidence predictions include relevance rank values. Assigning relevance ranks is a method of ordering information; the claims simply constitute mathematical scoring and comparison. This does not improve computer operations; the claim is directed to organizing and evaluating information. There is no improvement to computer functionality, data structures, or processing architecture, and the limitation does not change the nature of the abstract idea. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claim 8: The claim recites that the question response includes a question resolution and a selected input passage.
Selecting passages based on a resolution is simply information retrieval; the claim merely presents selected content to users. This does not improve data structures or system performance; the claim is directed to presenting analyzed information. There is no improvement to computer functionality, data structures, or processing architecture, and the limitation does not change the nature of the abstract idea. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claim 9: The claim recites generating retrieval and aggregation metrics and initiating training based on those metrics. Computing metrics and retraining models are mathematical evaluations; the claim merely represents optimization of an abstract process. This does not provide a technical improvement; the claim is directed to data analysis and learning. There is no improvement to computer functionality, data structures, or processing architecture, and the limitation does not change the nature of the abstract idea. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claim 10: The claim recites identifying a failure scenario and generating synthetic training data for targeted training. Detecting errors and generating synthetic data are analytical techniques; the claim merely represents abstract problem analysis and data generation. This does not improve computer hardware or architecture. There is no improvement to computer functionality, data structures, or processing architecture, and this does not change the nature of the abstract idea.
It does not add a technical improvement to the computer itself. There is no practical application and no inventive step; the claims are still considered abstract.

Regarding claim 11: The claim recites a multi-model architecture including LLMs and a routing module. This simply recites a high-level software architecture; the claim merely uses known neural network and routing techniques. This does not provide a technical improvement; the claim is directed to routing and processing information. There is no improvement to computer functionality, data structures, or processing architecture, and the limitation does not change the nature of the abstract idea. There is no practical application and no inventive step; the claims are still considered abstract.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 7-9, 13, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lau, Wai Ho et al. (US 20240289364 A1) [Lau] in view of Nguyen, Tuan et al. (US 20250005293 A1) [Nguyen], further in view of Weiser, Samantha D. et al. (US 20250375711 A1) [Weiser].
Regarding claims 1, 13, and 20: Lau discloses a computer-implemented method comprising: receiving, by one or more processors, a plurality of evidence passages from a document set corresponding to an input question (question answering system using machine learning [abstract], ¶ [0004]-[0007], [0065]-[0068]); generating, by the one or more processors using a retrieval ensemble model and based at least in part on the input question, a plurality of evidence predictions for an evidence passage of the plurality of evidence passages ("An ensemble of machine learning models 404 can ingest the vectorized questions from the vectorization service 402 and predict a domain for each vectorized question. The ensemble of machine learning models 404 can include instantiations of models of one or more types. For example, the machine learning models 404 can include a support vector machine (SVM) model 406, a multinomial logistic regression model 408, a neural network 410, or a random forest 412. The depicted models are not intended to be limiting. According to various embodiments, an ensemble of machine learning models 404 may employ additional or fewer machine learning models 404, or machine learning models 404 of a different type." ¶ [0076]); generating, by the one or more processors and using the retrieval ensemble model, a weighted aggregate prediction for the evidence passage based on the plurality of evidence predictions (ensemble of machine learning models, ¶ [0076]; the examiner specifies that ensemble learning typically refers to bagging (bootstrap aggregation)); determining, by the one or more processors and based at least in part on the weighted aggregate prediction, a set of input passages from the plurality of evidence passages ("the voting service 414 can determine a weight based on a confidence level received from the machine learning models 404, and sum the confidence levels to determine an overall domain prediction (e.g., select the domain associated with the highest summed confidence level). In some embodiments, the voting service 414 may adjust one or more weights according to a previous performance of a machine learning model 404. For example, a confidence level for a machine learning model 404 strongly correlated with a correct outcome can be weighted upwardly, relative to a machine learning model 404 less correlated with the correct [outcome]." ¶ [0088]; "For example, outliers can be discounted or removed, confidence levels predicted by individual machine learning models 404 can be adjusted according to a non-linear function, or another confidence interval (e.g., aggregate confidence interval) can be defined." ¶ [0101]; also see question/answer confidence score and weighted prediction, ¶ [0068], [0070], [0079], [0080]); and providing, by the one or more processors, the question response (question answering system using machine learning [abstract], ¶ [0004]-[0007], [0065]-[0068]).
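The confidence-weighted voting Lau's ¶ [0088] describes (each model emits a prediction with a confidence; per-model weights scale the confidences, which are summed per candidate, and the highest sum wins) can be sketched in a few lines. Names, weights, and domains below are illustrative assumptions, not Lau's code:

```python
# Minimal sketch of confidence-weighted ensemble voting (per Lau ¶ [0088]).
from collections import defaultdict

def vote(predictions: list[tuple[str, float]], model_weights: list[float]) -> str:
    # Sum weight-scaled confidences per predicted domain.
    totals: defaultdict[str, float] = defaultdict(float)
    for (domain, confidence), weight in zip(predictions, model_weights):
        totals[domain] += weight * confidence
    # Select the domain with the highest summed confidence.
    return max(totals, key=totals.get)

# e.g. outputs of an SVM, logistic regression, neural net, and random forest:
preds = [("billing", 0.6), ("clinical", 0.7), ("billing", 0.5), ("billing", 0.4)]
print(vote(preds, [1.0, 1.0, 1.2, 0.8]))  # -> "billing"
```

A model with a track record of correct predictions can simply be given a larger weight, which is the upward/downward adjustment ¶ [0088] describes.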
However, Lau does not explicitly facilitate routing, by the one or more processors, the set of input passages to a sub-classification model of a machine learning aggregation model to receive a question response for the input question. Nguyen discloses routing, by the one or more processors, the set of input passages to a sub-classification model of a machine learning aggregation model ("In various implementations, NL based output system 120 can cause the LLM engine 141 to process, using an LLM stored in the LLM(s) database 141A, NL based input to generate a stream of LLM output that may be provided by NL based output engine 150. The LLM can include, for example, any LLM that is stored in the LLM(s) database 141A, such as PaLM, BARD, BERT, LaMDA, Meena, GPT, and/or any other LLM, such as any other LLM that is encoder-only based, decoder-only based, sequence-to-sequence based and that optionally includes an attention mechanism or other memory." ¶ [0041]) to receive a question response for the input question ("In some implementations, the VLM may be applied in response to an individual providing NL input, and in some cases the VLM may be prompted based on the individual's NL input. As one example, suppose the individual provides the NL input, 'I need a good recipe for dinner tonight.' This NL input may be processed, e.g., using an LLM, to generate one or more follow-up questions (also referred to herein as 'synthetic follow-up queries'), answers to which may be necessary or helpful for responding to the individual's NL input. For instance, the individual's NL input may be processed based on the LLM to generate a synthetic follow-up query of 'What food is available?'" ¶ [0020]-[0021]).
It would have been obvious to one of ordinary skill in the art at the time of filing of the present invention to combine the teachings of the cited references, because Nguyen's system would have allowed Lau to facilitate routing, by the one or more processors, the set of input passages to a sub-classification model of a machine learning aggregation model to receive a question response for the input question. The motivation to combine is apparent in the Lau reference, because there is a desire to improve implementations that leverage large language models (LLMs) and vision language models (VLMs).

However, neither Lau nor Nguyen explicitly facilitates "wherein the input question is associated with an answer type that is one of a multiple-choice answer type, large-limited-set answer type, or a free-form answer type; based on the answer type, … sub-classification model of a machine learning aggregation model to receive a question response for the input question, wherein the machine learning aggregation model comprises a different sub-classification model for at least two of the multiple-choice answer type, the large-limited-set answer type, or the free-form answer type." Weiser discloses wherein the input question is associated with an answer type that is one of a multiple-choice answer type, large-limited-set answer type, or a free-form answer type ("The format of the answer 114 includes short-form responses, multiple-choice answers, and/or ordinal ranking formats that indicate the correct answer and/or other incorrect alternatives." ¶ [0046], [0071], [0132], [0175]. "At operation 1204, an agent such as a question writer agent generates an initial set of questions based on the user-inputted request. In some embodiments, the question writer agent is an LLM. For instance, a request such as 'Seinfeld' is combined with a predefined system prompt such as 'generate X number of questions for the topic (topic)' and included pre-loaded query context (e.g., the pre-loaded query context in FIG. 9). The question writer agent generates one or more types of questions and corresponding answer(s), such as multiple-choice questions, open-ended questions, and/or true/false questions based on the pre-loaded query context." ¶ [0179]-[0180], [0205]); and, based on the answer type, … sub-classification model of a machine learning aggregation model to receive a question response for the input question, wherein the machine learning aggregation model comprises a different sub-classification model for at least two of the multiple-choice answer type, the large-limited-set answer type, or the free-form answer type ("In operation 402, the game platform provides (a) a game application having a client-side user interface and a backend host configured to control communications between the client-side user interface and a generative AI model and (b) a plurality of tangible game elements (e.g., cards) associated with the game application. Each tangible game element, in some embodiments, is provided with at least one identifier that represents a topic or category (i.e., a value of a game parameter type) of a question-answer set (i.e., game content)." ¶ [0056]. "At operation 1204, an agent such as a question writer agent generates an initial set of questions based on the user-inputted request. In some embodiments, the question writer agent is an LLM. For instance, a request such as 'Seinfeld' is combined with a predefined system prompt such as 'generate X number of questions for the topic (topic)' and included pre-loaded query context (e.g., the pre-loaded query context in FIG. 9). The question writer agent generates one or more types of questions and corresponding answer(s), such as multiple-choice questions, open-ended questions, and/or true/false questions based on the pre-loaded query context." ¶ [0179]-[0180]. "Each model operates independently but is managed by a consensus module that determines the overall validity of the content by aggregating the results from the various validation models. Using the validation framework, a larger amount of content (e.g., trivia questions) can be generated over a shorter period of time." ¶ [0036]. Also see ¶ [0148], [0152]).

It would have been obvious to one of ordinary skill in the art at the time of filing of the present invention to combine the teachings of the cited references, because Weiser's system would have allowed Lau and Nguyen to facilitate the answer-type limitations quoted above. The motivation to combine is apparent in the Lau and Nguyen references, because there is a desire to produce more accurate and relevant responses without additional refinement from the user.
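The answer-type routing limitation at issue, a machine learning aggregation model holding a different sub-classification model for at least two answer types, amounts to a dispatch on the question's answer type. A hedged sketch, with all handler names being hypothetical stand-ins rather than anything from the application or the references:

```python
# Hypothetical sketch of answer-type routing to sub-classification models.
from typing import Callable

def rank_choices(question: str, passages: list[str]) -> str:
    return "choice-A"  # stand-in for a choice-ranking sub-classifier

def generate_free_form(question: str, passages: list[str]) -> str:
    return passages[0]  # stand-in for a generative sub-model

SUB_MODELS: dict[str, Callable[[str, list[str]], str]] = {
    "multiple-choice": rank_choices,
    "large-limited-set": rank_choices,   # two types may share a sub-model
    "free-form": generate_free_form,     # at least two types must differ
}

def route(question: str, answer_type: str, passages: list[str]) -> str:
    # Dispatch on the answer type associated with the input question.
    return SUB_MODELS[answer_type](question, passages)
```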
Regarding claims 7 and 19, the combination of Lau, Nguyen and Weiser discloses, wherein the plurality of evidence predictions for the evidence passage comprises a plurality of relevance rank values that each reflect a relevance of the evidence passage to the input question relative to the plurality of evidence passages (Lau: a machine learning model can determine an association score, or ranked order, or another indication of a confidence of a match between a question and an answer ¶ [0065]. Also see ¶ [0070], [0089], [0100]). Regarding claim 8, the combination of Lau, Nguyen and Weiser discloses, wherein the question response comprises a question resolution and a selected input passage from the set of input passages that corresponds to the question resolution (Lau: question answering system using machine leaning [abstract], ¶ [0004]-[0007], [0065]-[0068]). Regarding claim 9, the combination of Lau, Nguyen and Weiser discloses, further comprising: generating, using a retrieval scoring sub-module, a retrieval metric for the question response based on the selected input passage; generating, using an aggregation scoring sub-module, an aggregation metric for the question response based on the question resolution (Lau: ensemble of machine learning models ¶ [0076], examiner specifies that ensemble learning typically refers to bagging (bootstrap aggregation). For example, outliers can be discounted or removed, confidence levels predicted by individual machine learning models 404 can be adjusted according to a non-linear function, or another confidence interval (e.g., aggregate confidence interval) can be defined ¶ [0101]); and initiating one or more active training operations for the retrieval ensemble model and the machine learning aggregation model based on the retrieval metric and the aggregation metric (Lau: The data processing system can employ various machine learning models trained to predict a question domain for the vectored questions. 
The data processing system can arbitrate a prediction of the various machine learning models (e.g., via voting) ¶ [0004]-[0007]. The vectorization service 402 can be trained based on known relationships between questions and domains, such as based on the questions and domains of the answer set 124. According to some embodiments, the vectorization service 402 can be trained based on answers of the answer set 124 corresponding to the respective domains, or based on other (e.g., public or private) data ¶ [0075]). Regarding claim 12, (Canceled). Claims 2-5 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Lau in view of Nguyen in view of Weiser in view of McElvain; Gayle et al. (US 20190340172 A1) [McElvain]. Regarding claims 2 and 14, the combination of Lau, Nguyen and Weiser teaches all the limitations of claim 1. However, none of Lau, Nguyen, or Weiser explicitly teaches wherein the retrieval ensemble model comprises a plurality of classification models and a machine learning fusion model. McElvain discloses, wherein the retrieval ensemble model comprises a plurality of classification models and a machine learning fusion model (Candidate ranker 123 may be configured to apply an ensemble classification model based on the extracted features to rank the candidate question-answer pairs. In aspects, each question submitted may generate a question-answer pair for every candidate answer in the search results. For each feature of the extracted features, each question-answer pair may be scored. Each feature score of each question-answer pair may be fed into the ensemble classification model, and the ensemble classification model may generate a score that may represent the probability that the candidate answer in the candidate question-answer pair is a correct answer for the question ¶ [0078]-[0079], [0093], [0094]. Examiner further specifies that aggregation is interpreted as fusion). 
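Lau's arbitration among model predictions via voting (¶ [0004]-[0007]), together with the weighted adjustment of per-model confidence levels noted above, can be illustrated with a short sketch. The model names, weights, and labels below are hypothetical, chosen only to show the technique:

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Arbitrate among model predictions by weighted voting.

    predictions: list of (model_name, predicted_label, confidence)
    weights:     per-model weights, e.g. derived from validation accuracy
                 (unknown models default to weight 1.0)
    """
    tally = defaultdict(float)
    for model, label, confidence in predictions:
        tally[label] += weights.get(model, 1.0) * confidence
    # Return the label with the highest aggregate weighted confidence.
    return max(tally, key=tally.get)

# Hypothetical ensemble output for one input question.
preds = [
    ("model_a", "clinical", 0.9),
    ("model_b", "billing", 0.6),
    ("model_c", "clinical", 0.7),
]
weights = {"model_a": 1.0, "model_b": 1.5, "model_c": 0.8}
# clinical: 1.0*0.9 + 0.8*0.7 = 1.46; billing: 1.5*0.6 = 0.90
```

With these invented numbers the arbitrated prediction is "clinical", since its weighted tally (1.46) exceeds that of "billing" (0.90).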
It would have been obvious to one of ordinary skill in the art at the time of filing of the present invention to combine the teachings of the cited references because McElvain’s system would have allowed Lau, Nguyen and Weiser to facilitate wherein the retrieval ensemble model comprises a plurality of classification models and a machine learning fusion model. The motivation to combine is apparent from the Lau, Nguyen, and Weiser references, because there is a desire to improve data searching, and more particularly to generate and identify context-specific answers to a query. Regarding claims 3 and 15, the combination of Lau, Nguyen, Weiser and McElvain discloses, wherein the plurality of classification models comprises a term-based retrieval model and one or more different large language models (large language models ¶ [0102], [0107]-[0112]). Regarding claims 4 and 16, the combination of Lau, Nguyen, Weiser, and McElvain discloses wherein the machine learning fusion model is previously trained to generate the weighted aggregate prediction from the plurality of evidence predictions based on a correspondence between the plurality of classification models and the input question (McElvain: Candidate ranker 123 may be configured to apply an ensemble classification model based on the extracted features to rank the candidate question-answer pairs. In aspects, each question submitted may generate a question-answer pair for every candidate answer in the search results. For each feature of the extracted features, each question-answer pair may be scored. Each feature score of each question-answer pair may be fed into the ensemble classification model, and the ensemble classification model may generate a score that may represent the probability that the candidate answer in the candidate question-answer pair is a correct answer for the question ¶ [0078]-[0079], [0093], [0094]. Examiner further specifies that aggregation is interpreted as fusion). 
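McElvain's fusion step, in which per-feature scores for each question-answer pair are combined into a single probability used to rank candidate answers, can be sketched as follows. The logistic combination and the fixed weights are assumptions for illustration; an actual fusion model would learn its parameters from training data.

```python
import math

def fuse_scores(feature_scores, weights, bias=0.0):
    """Combine per-feature scores for one question-answer pair into a
    probability via a logistic model (a stand-in for a learned fusion model)."""
    z = bias + sum(w * s for w, s in zip(weights, feature_scores))
    return 1.0 / (1.0 + math.exp(-z))

def rank_candidates(candidates, weights):
    """Rank candidate answers by fused probability, highest first.

    candidates: list of (answer_text, [feature_score, ...])
    """
    scored = [(ans, fuse_scores(fs, weights)) for ans, fs in candidates]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Hypothetical per-feature scores for two question-answer pairs.
candidates = [
    ("answer A", [0.2, 0.1]),
    ("answer B", [0.9, 0.8]),
]
weights = [1.0, 1.0]
```

Here each candidate's feature scores are fused into one probability, and the pair with the higher fused score ("answer B" under these invented inputs) is ranked first.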
Regarding claims 5 and 17, the combination of Lau, Nguyen, Weiser and McElvain discloses, wherein the plurality of classification models (McElvain: ensemble classification model ¶ [0078]-[0079], [0093], [0094]) and the machine learning fusion model are jointly trained using a subset of an annotated training set (Weiser: Machine-learning algorithms may include ensemble methods such as bagging meta-estimator, forest of randomized trees, AdaBoost, gradient tree boosting, and/or voting classifier methods. Machine-learning algorithms may include neural net algorithms, including convolutional neural net processes ¶ [0163]). Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lau in view of Nguyen in view of Weiser in view of Molloy; Ian Michael et al. (US 20210287141 A1) [Molloy]. Regarding claims 6 and 18, the combination of Lau, Nguyen, and Weiser teaches all the limitations of claims 1 and 13. However, none of Lau, Nguyen, or Weiser explicitly teaches further comprising: generating a set of temporal features comprising a temporal data feature for each of the plurality of evidence predictions; and generating the question response based on the set of input passages, the input question, and the set of temporal features. Molloy discloses, further comprising: generating a set of temporal features comprising a temporal data feature for each of the plurality of evidence predictions; and generating the question response based on the set of input passages, the input question, and the set of temporal features (The QA pipeline receives an input question, parses the question to extract the major features of the question, uses the extracted features to formulate queries, and then applies those queries to the corpus of data. …. Other reasoning algorithms may look at temporal or spatial features in the language, while others may evaluate the source of the portion of the corpus of data and evaluate its veracity ¶ [0107]). 
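The temporal data feature recited in claims 6 and 18 (one per evidence prediction) can be illustrated with a minimal recency-scoring sketch. The field names and the half-life constant are hypothetical, chosen only to show one plausible form such a feature could take:

```python
from datetime import date

def temporal_feature(passage_date, today, half_life_days=365.0):
    """Exponential recency score in (0, 1]; newer passages score higher."""
    age_days = (today - passage_date).days
    return 0.5 ** (age_days / half_life_days)

def add_temporal_features(evidence, today):
    """Attach a temporal data feature to each evidence prediction."""
    return [
        {**e, "temporal": temporal_feature(e["date"], today)}
        for e in evidence
    ]

# Hypothetical evidence predictions with publication dates.
evidence = [
    {"passage": "p1", "score": 0.8, "date": date(2025, 1, 1)},
    {"passage": "p2", "score": 0.8, "date": date(2024, 1, 1)},
]
```

A downstream response generator could then weigh two equally scored passages differently based on their temporal features, favoring the more recent evidence.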
It would have been obvious to one of ordinary skill in the art at the time of filing of the present invention to combine the teachings of the cited references because Molloy’s system would have allowed Lau, Nguyen, and Weiser to facilitate further comprising: generating a set of temporal features comprising a temporal data feature for each of the plurality of evidence predictions; and generating the question response based on the set of input passages, the input question, and the set of temporal features. The motivation to combine is apparent from the Lau, Nguyen, and Weiser references, because there is a desire for an improved data processing apparatus and method, and more specifically for mechanisms for training diverse and robust ensembles of artificial intelligence computer models. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Lau in view of Nguyen in view of Weiser in view of Sethi; Pooja et al. (US 20220374605 A1) [Sethi]. Regarding claim 10, the combination of Lau, Nguyen and Weiser teaches all the limitations of claim 9. However, none of Lau, Nguyen, or Weiser explicitly teaches further comprising: identifying a failure question scenario based on the input question, the retrieval metric, and the aggregation metric; responsive to the failure question scenario, generating, using a synthetic data generation model, a plurality of synthetic training passages from the set of input passages; and initiating one or more targeted training operations based on the plurality of synthetic training passages. 
Sethi discloses, further comprising: identifying a failure question scenario based on the input question, the retrieval metric, and the aggregation metric; responsive to the failure question scenario, generating, using a synthetic data generation model, a plurality of synthetic training passages from the set of input passages; and initiating one or more targeted training operations based on the plurality of synthetic training passages (In particular embodiments, the assistant system may efficiently identify errors from the natural-language understanding (NLU) models used by the assistant system…. The selected traffic data may be manually annotated and used to evaluate the NLU models to identify the most important failure cases. The failure cases may be further used to automatically generate new (e.g., synthetic) training data, which may be used to retrain the NLU models to optimize them ¶ [0008]. Also see ¶ [0122], [0134]). It would have been obvious to one of ordinary skill in the art at the time of filing of the present invention to combine the teachings of the cited references because Sethi’s system would have allowed Lau, Nguyen and Weiser to facilitate further comprising: identifying a failure question scenario based on the input question, the retrieval metric, and the aggregation metric; responsive to the failure question scenario, generating, using a synthetic data generation model, a plurality of synthetic training passages from the set of input passages; and initiating one or more targeted training operations based on the plurality of synthetic training passages. The motivation to combine is apparent from the Lau, Nguyen, and Weiser references, because there is a desire for improved databases and file management within network environments, and in particular for hardware and software for smart assistant systems. Claim 11 is rejected under 35 U.S.C. 
103 as being unpatentable over Lau in view of Nguyen in view of Weiser in view of Parham; David William et al. (US 20240248963 A1) [Parham]. Regarding claim 11, the combination of Lau, Nguyen and Weiser teaches all the limitations of claim 9. However, none of Lau, Nguyen, or Weiser explicitly teaches wherein the sub-classification model is one of one or more sub-classification models defined by the machine learning aggregation model and the machine learning aggregation model comprises a branched, multi-model architecture that defines (i) the one or more sub-classification models comprising one of an encoder-based large language model, a decoder-based large language model, or a generative pre-trained transformer model and (ii) a routing module configured to route an input to one of the one or more sub-classification models. Parham discloses, wherein the sub-classification model is one of one or more sub-classification models defined by the machine learning aggregation model and the machine learning aggregation model comprises a branched, multi-model architecture that defines (i) the one or more sub-classification models comprising one of an encoder-based large language model, a decoder-based large language model, or a generative pre-trained transformer model and (ii) a routing module configured to route an input to one of the one or more sub-classification models (The system may use one or more models (e.g., ensemble, time-aggregate, multi-modal, natural language processing models, machine learning models, large language models) to associate an organization with one or more ESG issues (e.g., climate) ¶ [0133], [0162]. 
Document-specific materiality scores are generated by parsing the source document, classifying elements according to the ESG issue being discussed (e.g., greenhouse gas emissions, biodiversity, employee health and safety), and finally generating an aggregate score for the source document based on the relative strength of the overall discussion of each issue in the document ¶ [0036], [0037]). It would have been obvious to one of ordinary skill in the art at the time of filing of the present invention to combine the teachings of the cited references because Parham’s system would have allowed Lau, Nguyen and Weiser to facilitate wherein the sub-classification model is one of one or more sub-classification models defined by the machine learning aggregation model and the machine learning aggregation model comprises a branched, multi-model architecture that defines (i) the one or more sub-classification models comprising one of an encoder-based large language model, a decoder-based large language model, or a generative pre-trained transformer model and (ii) a routing module configured to route an input to one of the one or more sub-classification models. The motivation to combine is apparent from the Lau, Nguyen, and Weiser references, because there is a desire to provide decision makers with effective tools to identify, evaluate, quantify, and monitor various aspects of complex issues derived indirectly from large data sources. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S ROSTAMI whose telephone number is (571)270-1980. The examiner can normally be reached Mon-Fri from 9 a.m. to 5 p.m. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney, can be reached at (571)270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 1/29/2026 /MOHAMMAD S ROSTAMI/Primary Examiner, Art Unit 2154

Prosecution Timeline

Mar 22, 2024
Application Filed
Nov 02, 2024
Non-Final Rejection — §101, §103
Feb 06, 2025
Response Filed
Feb 06, 2025
Examiner Interview (Telephonic)
Mar 03, 2025
Examiner Interview Summary
May 01, 2025
Final Rejection — §101, §103
Jun 06, 2025
Applicant Interview (Telephonic)
Jun 27, 2025
Examiner Interview Summary
Jul 02, 2025
Response after Non-Final Action
Jul 18, 2025
Request for Continued Examination
Jul 21, 2025
Response after Non-Final Action
Jul 23, 2025
Non-Final Rejection — §101, §103
Sep 24, 2025
Applicant Interview (Telephonic)
Sep 29, 2025
Examiner Interview Summary
Oct 20, 2025
Response Filed
Jan 29, 2026
Final Rejection — §101, §103
Mar 20, 2026
Applicant Interview (Telephonic)
Mar 21, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596705
CHANGE CONTROL AND VERSION MANAGEMENT OF DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12579127
DETECTING LABELS OF A DATA CATALOG INCORRECTLY ASSIGNED TO DATA SET FIELDS
2y 5m to grant Granted Mar 17, 2026
Patent 12561392
RELATIVE FUZZINESS FOR FAST REDUCTION OF FALSE POSITIVES AND FALSE NEGATIVES IN COMPUTATIONAL TEXT SEARCHES
2y 5m to grant Granted Feb 24, 2026
Patent 12561360
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12561312
DISTRIBUTED STREAM-BASED ACID TRANSACTIONS
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
67%
Grant Probability
93%
With Interview (+26.3%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 635 resolved cases by this examiner. Grant probability derived from career allow rate.
