Prosecution Insights
Last updated: April 19, 2026
Application No. 19/365,469

Method and System for Processing Artificial Intelligence User Requests

Non-Final OA: §101, §103
Filed: Oct 22, 2025
Examiner: AYERS, MICHAEL W
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Vijay Madisetti
OA Round: 1 (Non-Final)
Grant Probability: 70% — Favorable
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% — above average (200 granted / 287 resolved; +14.7% vs TC avg)
Interview Lift: +56.2% — strong (resolved cases with interview vs. without)
Typical Timeline: 3y 4m avg prosecution; 37 currently pending
Career History: 324 total applications across all art units
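The headline figures above follow directly from the raw counts. A quick sketch in Python reproduces the arithmetic; note that the implied Tech Center average is back-derived here from the reported +14.7% delta and is not an independently reported number:

```python
# Reproduce the examiner-statistics arithmetic from the counts above.
granted = 200     # career grants
resolved = 287    # career resolved cases

# Career allow rate: grants as a share of resolved cases.
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 69.7%, displayed as 70%

# "+14.7% vs TC avg" implies a Tech Center average allow rate of about
# 55% (back-derived as an illustration, not a reported figure).
implied_tc_avg = allow_rate - 14.7
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # 55.0%
```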

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 2.9% (-37.1% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)
Deltas shown relative to a Tech Center average estimate • Based on career data from 287 resolved cases

Office Action

§101, §103
DETAILED ACTION

This Office action is in response to claims filed 22 October 2025. Claims 1-30 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 10 is objected to because of the following informalities: in lines 2-3, “comprised by” and “comprises by” should read “comprising”. Appropriate correction is required.

Claim 14 is objected to because of the following informalities: in line 12, “configured generate” should read “configured to generate”. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

The limitations using the word “means” are present in claims 27 and 30.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are: “experiential system”, “analytical system”, “central executive code”, “integration and validation unit”, “interface controller”, and “learning engine” in claims 14 and 18-25.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof, recited as a “system-on-chip” architecture comprising hardware that executes the software elements at issue.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.

Regarding claim 1, in step 1 of the 101 analysis set forth in MPEP 2106, the claim recites a method that makes routing decisions between experiential systems and analytical systems based on input for AI user requests. A method is one of the four statutory categories of invention.

In step 2A, prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components:

i. “performing a routing analysis on the user input on a basis of a plurality of characteristics” (a person can mentally perform routing analysis by simply evaluating characteristics to mentally analyze them (MPEP 2106.04(a)));

ii. “making a routing decision based on the routing analysis to send the user input to one or both of: an experiential system; and an analytical system” (a person can mentally make a routing decision by simply evaluating the routing analysis and making a judgment of where to route user input (MPEP 2106.04(a)));

iii. “generating a final result by performing a result validation procedure on the one or more outputs” (a person can mentally generate results by evaluating validation criteria and making a judgment that an output is valid (MPEP 2106.04(a))).
If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.

In step 2A, prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:

iv. “a method for processing artificial intelligence (AI) user requests using a system that comprises both an experiential system and an analytical system” (mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)));

v. “receiving a user input from a user device” (insignificant extra-solution activity of mere data gathering (MPEP 2106.05(g)));

vi. “routing the user input to at least one of the experiential system and the analytical system responsive to the routing decision” (insignificant extra-solution activity of mere data output (MPEP 2106.05(g)));

vii. “receiving one or more outputs from at least one of the experiential system and the analytical system” (insignificant extra-solution activity of mere data gathering (MPEP 2106.05(g)));

viii. “transmitting the final result to the user device” (insignificant extra-solution activity of mere data output (MPEP 2106.05(g))).

Since the claim does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.

In step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined through reanalysis of the following limitations considered in step 2A, prong 2, that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception:

iv.
“a method for processing artificial intelligence (AI) user requests using a system that comprises both an experiential system and an analytical system” (mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)));

v. “receiving a user input from a user device” (well-understood, routine, and conventional activity of receiving data over a network (MPEP 2106.05(d)(II)));

vi. “routing the user input to at least one of the experiential system and the analytical system responsive to the routing decision” (well-understood, routine, and conventional activity of transmitting data over a network (MPEP 2106.05(d)(II)));

vii. “receiving one or more outputs from at least one of the experiential system and the analytical system” (well-understood, routine, and conventional activity of receiving data over a network (MPEP 2106.05(d)(II)));

viii. “transmitting the final result to the user device” (well-understood, routine, and conventional activity of transmitting data over a network (MPEP 2106.05(d)(II))).

Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
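For orientation, the claim 1 control flow walked through in the rejection above (routing analysis, routing decision, routing, output collection, validation, transmission) can be sketched as pseudocode-style Python. This is a minimal illustration only; every name and routing heuristic below is hypothetical and comes neither from the application nor the cited references:

```python
# Hypothetical sketch of the claim 1 control flow: route a user request to
# an experiential and/or analytical subsystem, validate the outputs, and
# return a final result. All names are illustrative stand-ins.
from typing import Callable

def process_request(
    user_input: str,
    experiential: Callable[[str], str],
    analytical: Callable[[str], str],
    classify: Callable[[str], dict],
) -> str:
    # (i) routing analysis on a plurality of characteristics
    characteristics = classify(user_input)

    # (ii) routing decision: send to one or both subsystems
    targets = []
    if characteristics.get("open_ended"):
        targets.append(experiential)
    if characteristics.get("quantitative"):
        targets.append(analytical)
    if not targets:
        targets = [experiential, analytical]

    # route the input and collect one or more outputs
    outputs = [system(user_input) for system in targets]

    # (iii) result validation: a trivial consistency check that merges
    # agreeing outputs (a stand-in for the claimed validation procedure)
    final = outputs[0] if len(set(outputs)) == 1 else " / ".join(outputs)
    return final
```

With stub subsystems, an input classified as quantitative-only is routed to the analytical stub alone, mirroring the “one or both of” routing limitation at issue.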
Regarding claim 2, the additional element “the plurality of characteristics comprises at least two of: task type classification; complexity assessment; domain identification; temporal analysis; risk evaluation; and resource estimation” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

Regarding claim 3, the additional element “the routing analysis is optimized by at least one of: reinforcement learning; supervised learning; or a combination of reinforcement learning and supervised learning” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).
Regarding claim 4, the additional element “the routing analysis is performed in at least one of: a performance mode configured to prioritize response time; an accuracy mode configured to prioritize accuracy; an efficiency mode configured to minimize at least one of computational cost and energy; a safety mode configured to maximize validation rigor; and a balanced mode configured to optimize weighted combination of multiple objectives” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

Regarding claim 5, the additional element “the experiential system is configured to operate without explicit modeling of physical laws, mathematical constraints, or causal mechanisms” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).
Regarding claim 6, the additional element “the analytical system comprises two or more computational implementations of scientific principles directed to: physics; chemistry; biology; economics; and engineering” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

Regarding claim 7, the additional element “the analytical system is operable to generate an output comprising at least one of: an uncertainty quantification; a sensitivity analysis; a validation certificate; a documenting constraint satisfaction; and traceability information” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).
Regarding claim 8, the additional element “the result validation procedure comprises implementing one or more consistency checking algorithms directed to: numerical consistency; logical consistency; semantic consistency; and physical consistency” does not render the claim patent eligible because under step 2A prong 1, it recites a judicial exception (mental process) (a person can mentally validate output by simply evaluating an output and making a judgment that the output is consistent either numerically, logically, semantically, or physically (MPEP 2106.04(a))).

Regarding claim 9, the additional element “the result validation procedure is operable to compute one or more confidence metrics comprising at least one of: model agreement; constraint satisfaction margins; historical accuracy; uncertainty quantification; and validation results” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).
Regarding claim 10, the additional element “the result validation procedure is operable to merge a first output received from the experiential system comprised by the one or more outputs and a second output received from the analytical system comprises by the one or more outputs” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

Regarding claim 11, the additional element “the experiential system comprises a large language model” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

Regarding claim 12, the additional element “executing a feedback procedure comprising at least one of: adjusting one or more parameters for performing the routing analysis; updating the experiential system; and updating the analytical system” does not render the claim patent eligible because under step 2A prong 1, it recites a judicial exception (mental process) (a person can mentally adjust parameters by simply making an evaluation of parameters for performing the routing analysis, and making a judgment of adjusted parameters (MPEP 2106.04(a))).
Regarding claim 13, the additional element “the feedback procedure is performed responsive to at least one of: outcomes; performance metrics; and user feedback associated with the final result” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

Regarding claims 14-30, they comprise limitations similar to those of claims 1-13, and are therefore rejected under similar rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 9-18, and 22-30 are rejected under 35 U.S.C. 103 as being unpatentable over HOWARD, Pub. No. US 2021/0334679 A1 (hereafter HOWARD), in view of PENFIELD et al., Patent No. US 11,893,048 B1 (hereafter PENFIELD).
Regarding claim 1, HOWARD teaches the invention substantially as claimed, including:

A method for processing artificial intelligence (AI) user requests using a system that comprises both an experiential system and an analytical system, comprising: receiving a user input from a user device ([0065] An exemplary block diagram of a system 100 according to the present techniques is shown in FIG. 1. System 100 may include, for example, three layers, Input Data Layer 102…Input Data Layer 102 may include data-capturing points from data channels 108 associated with types of data: video, image, text, audio, etc., as well as meta world data 110 and objective data 112. The data channels layer may include several stages of data retrieval and manipulation, such as: identification of input points and types for each data channel, retrieval of data and data preprocessing, and data sampling techniques and storage);

performing a routing analysis on the user input on a basis of a plurality of characteristics; making a routing decision based on the routing analysis ([0057] Embodiments may provide an intelligent adaptive system that combines input data types, processing history and objectives, research knowledge and situational context to determine what is the most appropriate mathematical model. [0066] Model selector 114 identify a set of methods and operations from model repository 116 to apply on the input data in relation to intelligence inferring and pattern determination.
Such mechanisms may include the stages such as a Critic-Selector Mechanism, which may be based on combining input data types from data channels 108, meta world data 110, such as processing history, and objective data 112, including research knowledge and situational context to determine what is the most appropriate Artificial Intelligence (AI) model for existing data and how the system should manage the processing resources, be it models or computing infrastructure (i.e., determining the most appropriate mathematical model represents making a “routing decision” based on analysis of the input data and a plurality of characteristics including data types, processing history and objectives, etc.))

to send the user input to one or both of: an experiential system ([0121] It is to be noted that any type of machine learning model may be utilized by Selector Component 448 for selection of models, as well as generation of models. For example, as shown in FIG. 8a, embodiments may utilize…deep learning models 819, such as random, recurrent, and recursive neural network models (RNNs) 820 (i.e., at least recurrent neural network models represent systems that implement experiential models, according to [0173] of the specification));

and an analytical system ([0121] It is to be noted that any type of machine learning model may be utilized by Selector Component 448 for selection of models, as well as generation of models. For example, as shown in FIG. 8a, embodiments may utilize…Bayesian learning models 811, such as sparse Bayes models 812, naive Bayes models 813, and expectation maximization models 814 (i.e., at least Bayesian learning models represent systems that implement mathematical analytical models, according to [0181] and [0182] of the specification.
HOWARD also discusses selection of other analytical systems, including linear regression models));

routing the user input to at least one of the experiential system and the analytical system responsive to the routing decision ([0108] Solution Processor 456. Solution Processor 456 may receive the scheduled tasks or process modules 457 and runs them, if needed in parallel, on the appropriate computing infrastructure (i.e., tasks representing the input are sent to, or “routed” to, the selected models executing on the appropriate computing infrastructure));

receiving one or more outputs from at least one of the experiential system and the analytical system ([0067] Output Data Layer 106 may include the results of running the resulting model or ensemble of models on the automatically selected computing infrastructure)…

transmitting the final result to the user device ([0202] Agent layer 2502 may include digital hardware and software 2526 to provide system input and output to users.).

While HOWARD discusses receiving results from selected models and transmitting output to users, HOWARD does not explicitly teach: generating a final result by performing a result validation procedure on the one or more outputs.

However, in analogous art that similarly discusses receiving results from machine learning models, PENFIELD teaches: receiving one or more outputs from at least one of the experiential system and the analytical system; generating a final result by performing a result validation procedure on the one or more outputs ([Column 18, Line 66-Column 19, Line 8] The output 1235 produced by each pre-trained machine learning model can include individual inferences for each field. Multiple outputs, or inferences may be produced or output 1235 by each network, whereby some are relevant and others may not be. Verification and validation checks can also be applied to either select from candidate outputs or verify and validate the accuracy or relevance of these outputs.
After all validation checks are passed, the indexes (or model inferences) from each field are combined and returned and saved in the database 1240).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined PENFIELD’s teaching of validating outputs from machine learning models with HOWARD’s teaching of generating outputs from experiential or analytical machine learning models, to realize, with a reasonable expectation of success, a system that generates outputs from experiential or analytical machine learning models, as in HOWARD, and validates the outputs before combining them, as in PENFIELD. A person having ordinary skill would have been motivated to make this combination to ensure accuracy and relevance of outputs (PENFIELD [Column 18, Line 66-Column 19, Line 8]).

Regarding claim 2, HOWARD further teaches: the plurality of characteristics comprises at least two of: task type classification; complexity assessment; domain identification; temporal analysis; risk evaluation; and resource estimation ([0070] The critic-selector mechanism may process the problem description, recognize the problem type (i.e., “task type classification”), and then activate the selector component. The selector may start up several sets of resources (models or combination of models), which were learned from experience as the most probable viable approaches (i.e., “estimating” probable “resources” for a type of problem or task) for the given situation at hand).

Regarding claim 3, HOWARD further teaches: the routing analysis is optimized by at least one of: reinforcement learning; supervised learning; or a combination of reinforcement learning and supervised learning ([0118] Orchestrator Perspective.
From a more abstract, higher level point of view, system 400 may be seen as an orchestrator-centered system 800 managing all possible types of models, which may be organized in a graph, and which can be used for selecting processing paths, as illustrated in FIGS. 8a-c. Orchestrator 800 may use any approach from logic and planning, supervised to unsupervised learning, reinforcement learning, search algorithms, or any combination of those (i.e., orchestrator utilizes at least reinforcement learning to search for, and select, a type of model)).

Regarding claim 4, HOWARD further teaches: the routing analysis is performed in at least one of:

a performance mode configured to prioritize response time ([0186] Embodiments may utilize a high volume of data and may have large data upload and retrieval performance requirements (i.e., performance requirements “prioritize” performance));

an accuracy mode configured to prioritize accuracy ([0013] The at least one machine learning model relevant to the problem may be further obtained by determining, at the computer system, a combination of the selected and generated models that produces higher accuracy results than the selected and generated models);

an efficiency mode configured to minimize at least one of computational cost and energy ([0175] Embodiments may provide a customer-centric energy system providing improved energy efficiency, cost minimization and reduced CO2 emissions);

a safety mode configured to maximize validation rigor; and a balanced mode configured to optimize weighted combination of multiple objectives ([0175] Embodiments may provide a customer-centric energy system providing improved energy efficiency, cost minimization and reduced CO2 emissions (i.e., a system that factors in multiple objectives “weighs” those objectives at least evenly)).
Regarding claim 5, HOWARD further teaches: the experiential system is configured to operate without explicit modeling of physical laws, mathematical constraints, or causal mechanisms ([0056] Embodiments of the present systems and methods may provide machine learning techniques that may address such shortcomings and provide improved performance and results. For example, embodiments may address issues in the context of, for example, natural language processing (NLP), in a multidisciplinary approach that aims to bridge the gap between statistical NLP and the many other disciplines necessary for understanding human language such as linguistics, commonsense reasoning, and affective computing. Embodiments may leverage both symbolic and subsymbolic methods that use models such as semantic networks and conceptual dependency representations to encode meaning, as well as use deep neural networks and multiple kernel learning to infer syntactic patterns from data (i.e., machine learning techniques that generate output using syntactic pattern matching for natural language processing represent operations of an experiential reasoning system, according to [0110] of the specification, that performs commonsense reasoning which does not require explicit modeling of physical laws, mathematical constraints, or causal mechanisms)).

Regarding claim 9, PENFIELD further teaches: the result validation procedure is operable to compute one or more confidence metrics comprising at least one of: model agreement; constraint satisfaction margins; historical accuracy; uncertainty quantification; and validation results ([Column 18, Line 66-Column 19, Line 9] The output 1235 produced by each pre-trained machine learning model can include individual inferences for each field. Multiple outputs, or inferences may be produced or output 1235 by each network, whereby some are relevant and others may not be.
Verification and validation checks can also be applied to either select from candidate outputs or verify and validate the accuracy or relevance of these outputs. After all validation checks are passed, the indexes (or model inferences) from each field are combined and returned and saved in the database 1240 (i.e., passing a validation check represents a “validation result” indicative of a level of “confidence” that the results are valid)). Regarding claim 10, PENFIELD further teaches: the result validation procedure is operable to merge a first output received from the [first] system comprised by the one or more outputs and a second output received from the [second] system comprises by the one or more outputs ([Column 18, Line 66-Column 19, Line 9] The output 1235 produced by each pre-trained machine learning model (i.e., at least a first, experiential model, and second, analytical model as described by HOWARD) can include individual inferences for each field. Multiple outputs, or inferences may be produced or output 1235 by each network, whereby some are relevant and others may not be. Verification and validation checks can also be applied to either select from candidate outputs or verify and validate the accuracy or relevance of these outputs. After all validation checks are passed, the indexes (or model inferences) from each field are combined and returned and saved in the database 1240 (i.e., results from first and second models are validated and combined)). 
Regarding claim 11, HOWARD further teaches: the experiential system comprises a large language model ([0056] Embodiments may address issues in the context of, for example, natural language processing (NLP), in a multidisciplinary approach that aims to bridge the gap between statistical NLP and the many other disciplines necessary for understanding human language such as linguistics, commonsense reasoning, and affective computing (i.e., machine learning models that perform natural language processing represent “large language models”)).

Regarding claim 12, HOWARD further teaches: executing a feedback procedure comprising at least one of: adjusting one or more parameters for performing the routing analysis; updating the experiential system; and updating the analytical system ([0081] In embodiments, depending on the complexity of the model and the number of features the algorithm needs to search, the evaluation function can become more elaborate. If there are multiple features for which we want to optimize, a multi-parameter evaluation function can be used, for example a combination of multiple heuristic functions. Then, based on the feedback from all the heuristic functions, a decision can be made concerning how the set of model architectures can be improved).

Regarding claim 13, HOWARD further teaches: the feedback procedure is performed responsive to at least one of: outcomes; performance metrics; and user feedback associated with the final result ([0081] In embodiments, depending on the complexity of the model and the number of features the algorithm needs to search, the evaluation function can become more elaborate. If there are multiple features for which we want to optimize, a multi-parameter evaluation function can be used, for example a combination of multiple heuristic functions. Then, based on the feedback from all the heuristic functions, a decision can be made concerning how the set of model architectures can be improved (i.e., heuristic function feedback represents “outcomes” of the heuristic functions)).

Regarding claims 14-18, and 22-30, they comprise limitations similar to those of claims 1-5, and 9-13, and are therefore rejected for similar rationale.

Claims 6, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over HOWARD, in view of PENFIELD, as applied to claims 1, and 14 above, and in further view of HYLAND et al. Pub. No.: US 2024/0256796 A1 (hereafter HYLAND).

Regarding claim 6, PENFIELD further teaches: the analytical system comprises two or more computational implementations of scientific principles directed to: physics; chemistry; biology; economics; and engineering ([Column 10, Line 64-Column 11, Line 2] The custom dataset may be one curated specifically to train a machine learning network to identify specific information. For example, when training a machine learning network such as an NLP model to determine chemical names, CAS numbers, weightings and other information related to chemical ingredients (i.e., machine learning model determines various outputs related to chemistry or chemical engineering principles)).
While HOWARD and PENFIELD discuss an analytical machine learning model implementing scientific principles related at least to chemistry or chemical engineering, HYLAND further teaches: the analytical system comprises two or more computational implementations of scientific principles directed to: physics; chemistry; biology; economics; and engineering ([0062] training the model using in-domain text comprises performing MLM training; concurrently training the model using labeled general domain task training data, wherein training the model using labeled task training data comprises performing both NLG training and NLU training; and using the trained model to perform a language task within the target domain. [0070] the target domain comprises a domain selected from the list consisting of medical, radiology, biomedical, law, finance, mathematics, chemistry, physics, and engineering (i.e., machine learning models are trained, and therefore implement scientific principles directed at least to physics, chemistry, biomedical (biology), finance (economics) and engineering)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined HYLAND’s teaching of training machine learning models according to various scientific principles, with HOWARD and PENFIELD’s teaching of operating analytical machine learning models according to scientific principles, to realize, with a reasonable expectation of success, a system that operates analytical machine learning models according to scientific principles, as in HOWARD and PENFIELD, which include physics, chemistry, biomedical, finance, and engineering principles, as in HYLAND. A person having ordinary skill would have been motivated to make this combination to enable machine learning models to operate on a wider array of scientific principles leading to more accurate outputs.
Regarding claim 19, it comprises limitations similar to those of claim 6, and is therefore rejected for similar rationale.

Claims 7, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over HOWARD, in view of PENFIELD, as applied to claims 1, and 14 above, and in further view of LIN et al. Pub. No.: US 2024/0029132 A1 (hereafter LIN).

Regarding claim 7, while HOWARD and PENFIELD discuss machine learning models that generate output, they do not explicitly teach: the analytical system is operable to generate an output comprising at least one of: an uncertainty quantification; a sensitivity analysis; a validation certificate; a documenting constraint satisfaction; and traceability information. However, in analogous art that similarly teaches machine learning models generating output, LIN teaches: the analytical system is operable to generate an output comprising at least one of: an uncertainty quantification; a sensitivity analysis; a validation certificate; a documenting constraint satisfaction; and traceability information ([0045] In some embodiments, the probability output by the machine-learned item availability model 316 includes a confidence score. The confidence score may be an error or uncertainty score of the output availability probability and may be calculated using any standard statistical error measurement).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined LIN’s teaching of a machine learning model outputting a confidence score indicating an uncertainty quantification, with the combination of HOWARD and PENFIELD’s teaching of an analytical machine learning model generating an output, to realize, with a reasonable expectation of success, a system where an analytical machine learning model generates an output, as in HOWARD and PENFIELD, having an indication of an uncertainty quantification, as in LIN. A person having ordinary skill would have been motivated to make this combination to give an indication of how certain an output is to be accurate for use in making better decisions based on this output.

Regarding claim 20, it comprises limitations similar to those of claim 7, and is therefore rejected for similar rationale.

Claims 8, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over HOWARD, in view of PENFIELD, as applied to claims 1, and 14 above, and in further view of WEINBERGER Patent No.: US 11,227,187 B1 (hereafter WEINBERGER).

Regarding claim 8, while HOWARD and PENFIELD discuss a trained machine learning model generating outputs, they do not explicitly teach: the result validation procedure comprises implementing one or more consistency checking algorithms directed to: numerical consistency; logical consistency; semantic consistency; and physical consistency. However, in analogous art that similarly teaches a trained machine learning model generating outputs which are validated, WEINBERGER teaches: the result validation procedure comprises implementing one or more consistency checking algorithms directed to: numerical consistency; logical consistency; semantic consistency; and physical consistency ([Column 20, Lines 7-19] Where the data 475-1, 475-2, 485 is to be used in the generation of a trained machine learning model, the data 475-1, 475-2, 485 may be retrieved and subjected to one or more hash functions or other validation functions. Where the hashes or other values generated based on outputs of the validation functions are consistent with the hashes generated upon authenticating the data 475-1, 475-2, 485, the validity of the data 475-1, 475-2, 485 is confirmed, and a machine learning model may be trained based on the data 475-1, 475-2, 485. If the hashes or other values are not consistent, however, then the validity of the data 475-1, 475-2, 485 is in question, and the data 475-1, 475-2, 485 may not be used to train a machine learning model (i.e., validation directed to hash value consistency represents “numerical consistency”)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined WEINBERGER’s teaching of validating machine learning model results based on consistency of numerical hashes, with the combination of HOWARD and PENFIELD’s teaching of generating validated results using machine learning models, to realize, with a reasonable expectation of success, a system that generates validated results using machine learning models, as in HOWARD and PENFIELD, by determining hash value consistency, as in WEINBERGER. A person having ordinary skill would have been motivated to make this combination to better ensure that a machine learning model outputs accurate and desirable results.

Regarding claim 21, it comprises limitations similar to those of claim 8, and is therefore rejected for similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W AYERS whose telephone number is (571)272-6420. The examiner can normally be reached M-F 8:30-5 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL W AYERS/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Oct 22, 2025
Application Filed
Feb 26, 2026
Non-Final Rejection — §101, §103
Apr 16, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547446
Computing Device Control of a Job Execution Environment Based on Performance Regret of Thread Lifecycle Policies
2y 5m to grant · Granted Feb 10, 2026
Patent 12498950
SIGNAL PROCESSING DEVICE AND DISPLAY APPARATUS FOR VEHICLE USING SHARED MEMORY TO TRANSMIT ETHERNET AND CONTROLLER AREA NETWORK DATA BETWEEN VIRTUAL MACHINES
2y 5m to grant · Granted Dec 16, 2025
Patent 12493497
DETECTION AND HANDLING OF EXCESSIVE RESOURCE USAGE IN A DISTRIBUTED COMPUTING ENVIRONMENT
2y 5m to grant · Granted Dec 09, 2025
Patent 12461768
CONFIGURING METRIC COLLECTION BASED ON APPLICATION INFORMATION
2y 5m to grant · Granted Nov 04, 2025
Patent 12423149
LOCK-FREE WORK-STEALING THREAD SCHEDULER
2y 5m to grant · Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+56.2%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 287 resolved cases by this examiner. Grant probability derived from career allow rate.
