Prosecution Insights
Last updated: April 18, 2026
Application No. 17/230,072

DETECT UN-INFERABLE DATA

Non-Final OA: §101, §103

Filed: Apr 14, 2021
Examiner: MULLINAX, CLINT LEE
Art Unit: 2123
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)
Grant Probability: 48% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 4m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 48% (59 granted / 123 resolved; -7.0% vs TC avg)
Interview Lift: +38.3% for resolved cases with interview
Avg Prosecution: 4y 4m (typical timeline); 26 applications currently pending
Total Applications: 149 across all art units (career history)

Statute-Specific Performance

§101: 22.8% (-17.2% vs TC avg)
§103: 53.2% (+13.2% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)

TC averages are estimates; based on career data from 123 resolved cases.

Office Action

Rejections under 35 U.S.C. §§ 101 and 103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/27/2026 has been entered.

Status of Claims

This action is responsive to the amendment filed on 03/27/2026. Claims 1-4, 6-11, 13-18, and 20-23 are pending. Claims 1-4, 6-11, 13-18, and 20 have been amended. Claims 5, 12, and 19 have been canceled. Claims 21-23 have been added.

Response to Arguments

Applicant’s arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 101 have been considered but are not persuasive. Applicant argues that the amended claims of training and testing machine learning models “cannot be executed in the human mind” and improve the operations of machine learning models, that “the system is fundamentally improved to actively prevent the output of erroneous false positive predictions”, and that “the claims as a whole integrate the limitations into a practical application and amount to significantly more than an abstract idea”; therefore, applicant concludes, the claims overcome the 101 rejections. The examiner respectfully disagrees. The additional elements and their use in the claims do not overcome the previous 101 abstract idea rejection.
The model testing and training aspects of the claims remain recited at a high level and amount to adding the words “apply it” (or an equivalent) to the judicial exception, and the “machine learning” amendments are deemed recited at a high level and amount to generally linking the use of the judicial exception to a particular technological environment or field of use. The remaining amended operations of output comparison and dataset manipulation have been deemed a mental process. Applicant is encouraged to provide clarifying amendments directed to the machine learning model architecture/operations and/or the training process of the models, so that the claims are not merely a “black-box” recitation of machine learning model operations, to assist in overcoming the 101 rejections. See the 35 U.S.C. 101 section for the full, updated analysis of the claim limitations necessitated by applicant's amendments. Applicant’s arguments with respect to the rejection of claims 1, 8, and 15 under 35 U.S.C. 103 have been considered but are not persuasive. Applicant argues that no cited reference teaches the amended claim language of claims 1, 8, and 15, since “LIU forces a decision and picks a winner based on maximum confidence, the system outputs an inferable result. LIU actively teaches away from reporting a conflict as ‘un-inferable’”, and Christiansen’s “inconclusive output is based on uncertainty (a low probability resulting from soft voting), not a strong prediction conflict”; thus, “LIU resolves conflicts by forcing an inference, and CHRISTIANSEN reports inconclusive results based on low confidence rather than conflicting high confidence”. The examiner respectfully disagrees due to the breadth of the claim language. Liu, page 965, section “Hybrid Classifier…”, teaches “Our hybrid model consists of three levels of agreement and three levels of disagreement between the two classifiers based on thresholds”.
When the models have the same confidence in disagreeing outputs (strong prediction conflicts), “vi ≠ pi, the hybrid model uses a priori knowledge to assign class labels”. Further, “For the disagreement context, Ai ≠ Di, because the final class label is assigned as the higher confidence level out of ARTMAP and DT (Equation 1), the final combined confidence level will be assigned with the higher level one (max(vi, pi))”. Further still, pages 968-969 teach determining that low-accuracy (un-inferable…not predictable) results are undependable based on disagreements of the classifiers, since “Low level agreement expresses the low prediction accuracy (un-inferable…not predictable) probably caused either by low quality training samples or by spectral similarity of different land cover types…[and] disagreement leads to misclassifications”. Christiansen is cited in the alternative: sections Methods: “Dataset” and “Model Building” teach ensembles using soft voting that give an inconclusive output (un-inferable…non-predictable) if there is evidence of both malignant and benign model predictions with associated “confidence scores” (strong prediction conflicts), since “Using the prediction from the ensemble, tumors were classified as benign or malignant (Ovry-Dx1) or as benign, inconclusive or malignant (Ovry-Dx2), by setting thresholds on the predicted probability of malignancy”. Here, it is determined that the models detect evidence for opposing predictions and output a resulting “inconclusive”; thus, the rejection is maintained as reading on the claimed language. See the 35 U.S.C. 103 section for the full mapping of claim limitations necessitated by applicant's amendments. Applicant’s arguments with respect to the rejection of claims 1, 8, and 15 under 35 U.S.C. 103 have been considered but are not persuasive.
Applicant argues that no cited reference teaches the amended claim language of claims 1, 8, and 15, since Christiansen’s predicted probabilities are “not on a respective confidence threshold on a probability curve corresponding to the particular machine learning model that generated a given prediction”, and the taught “95% confidence intervals for ‘sensitivity, specificity, accuracy, and area under the receiver-operator-characteristics (ROC) curve’” are merely test-set performance statistics used to evaluate model performance, not model-specific confidence thresholds used to determine whether an individual model prediction is “strong”. The examiner respectfully disagrees due to the breadth of the claim language. Christiansen, sections Model Building, Results, and Table 2, teaches each model (machine learning model) and the types of data they were trained on (important features) for determining which training set type gives the best results (ranking/identifying a set of distinct features based on the ranking) of predicted probabilities with associated confidence scores on confidence intervals measured on population curves, and further the probabilities are mapped into ROC curves. Applicant is encouraged to amend the claim language so that the claims cannot be read so broadly. See the 35 U.S.C. 103 section for the full mapping of claim limitations necessitated by applicant's amendments.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 6-11, 13-18, and 20-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 8, and 15 are respectively drawn to a system, method, and non-transitory computer readable storage medium, hence each falls under one of four categories of statutory subject matter (Step 1). Nonetheless, the claims are directed to a judicially recognized exception of an abstract idea without significantly more. Claims 1, 8, and 15 recite the following, or analogous, limitations “identifying a plurality of…models to binary classify a set of data, wherein each one of the plurality of…models produces one of a plurality of predictions corresponding to one of a plurality of targets; detecting one or more strong prediction conflicts between the plurality of predictions in response to testing the set of data against each of the plurality of…models, wherein the one or more strong prediction conflicts correspond to one or more models in the plurality of…models resulting in one or more different strong predictions based on the set of data, wherein a strong prediction exceeds a respective confidence threshold on a probability curve corresponding to the…model that generated the strong prediction, comprising: ranking the set of important features corresponding to the K subset of models; identifying a set of distinct features based on the ranking; for each of the set of distinct features: selecting one of the set of distinct features; removing a portion of the training data corresponding to the selected distinct feature…and selecting one of the K subset of models based on the testing; and designating the selected K subset of models as one of a set of S models; and utilizing the set of S models during the testing of the set of data to detect the one or more strong prediction conflicts; and reporting an un-inferable result of the testing in response to detecting the one or more strong prediction conflicts, wherein the un-inferable result is an outcome that is not predictable”. 
These limitations, as claimed, under their broadest reasonable interpretation, can be evaluated in the human mind, except for the recitation of generic computer components (using artificial intelligence/machine learning, a computer including one or more microprocessors, and a non-transitory computer readable storage medium) (Step 2A). Other than reciting “one or more processors; a memory coupled to at least one of the processors”, “computer readable storage medium”, “machine learning”, and “removing a portion of the training data corresponding to the selected distinct feature; testing the each of the K subset of models on a subset of the training data that excludes the removed portion of the training data” to perform the exceptions, nothing in the claims precludes the steps from practically being performed in the human mind. For example, a human expert can, mentally or with the aid of pen and paper: identify a plurality of…models to binary classify a set of data, wherein each one of the plurality of…models produces one of a plurality of predictions corresponding to one of a plurality of targets (e.g., by thinking of/writing out computations that output labels on input data associated with predetermined labels); detect one or more strong prediction conflicts between the plurality of predictions in response to testing the set of data against each of the plurality of…models, wherein the one or more strong prediction conflicts correspond to one or more models in the plurality of…models resulting in one or more different strong predictions based on the set of data, wherein a strong prediction exceeds a respective confidence threshold on a probability curve corresponding to the…model that generated the strong prediction (e.g., by thinking of/writing out that one or more computation outputs are not the same and computing a confidence in the outputs that are not the same, wherein differing outputs correspond to high confidence values over a predetermined value plotted on a probability curve); rank the set of important features corresponding to the K subset of models; identify a set of distinct features based on the ranking; for each of the set of distinct features: select one of the set of distinct features; remove a portion of the training data corresponding to the selected distinct feature…and select one of the K subset of models based on the testing; designate the selected K subset of models as one of a set of S models; and utilize the set of S models during the testing of the set of data to detect the one or more strong prediction conflicts (e.g., by thinking of/writing out an order of the computations based on parameter values, reducing inputs based on the parameter values, computing outputs based on the inputs to determine the best computations based on errors, and determining differing outputs); and report an un-inferable result of the testing in response to detecting the one or more strong prediction conflicts, wherein the un-inferable result is an outcome that is not predictable (e.g., by thinking of/writing out that the collection of outputs is inconclusive in view of the one or more outputs not being the same, with corresponding confidence values). Thus, the claims recite a mental process (Step 2A, Prong 1).
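For orientation, the conflict-detection logic the claim language recites (per-model confidence thresholds, disagreeing “strong” predictions, and an un-inferable output) could be sketched roughly as follows. This is an illustrative reading only; the function name and the (target, confidence, threshold) shape are hypothetical and not drawn from the application or the cited art.

```python
# Hypothetical sketch of the recited conflict detection; names and data
# shapes are illustrative only, not from the application.
def evaluate_predictions(predictions):
    """predictions: list of (target, confidence, threshold) tuples, one per
    model; each threshold is that model's own cutoff on its probability curve."""
    # A "strong" prediction exceeds its model's respective threshold.
    strong_targets = {t for t, conf, thr in predictions if conf > thr}
    if len(strong_targets) > 1:
        # Different strong predictions -> strong prediction conflict.
        return "un-inferable"
    if strong_targets:
        return strong_targets.pop()
    return "no strong prediction"
```

On this reading, two models that are each confident but disagree yield “un-inferable” rather than a forced winner.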
Claims 1, 8, and 15 include additional elements: “one or more processors; a memory coupled to at least one of the processors”, “computer readable storage medium”, “machine learning”, and “testing the each of the K subset of models on a subset of the training data that excludes the removed portion of the training data”. However, these elements are recited at a high level of generality and amount to adding the words “apply it” (or an equivalent) to the judicial exception (i.e., “testing the each of the K subset of models on a subset of the training data that excludes the removed portion of the training data”), or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (i.e., “one or more processors; a memory coupled to at least one of the processors” and “computer readable storage medium”) (see MPEP 2106.05(f)), and generally linking the use of the judicial exception to a particular technological environment or field of use (i.e., “machine learning”) (see MPEP 2106.05(h)). Hence, the additional limitations, individually or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (Step 2A, Prong 2). The additional elements in the claims do not amount to significantly more than the abstract idea. Furthermore, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of using “one or more processors; a memory coupled to at least one of the processors”, “computer readable storage medium”, “machine learning”, and “removing a portion of the training data corresponding to the selected distinct feature; testing the each of the K subset of models on a subset of the training data that excludes the removed portion of the training data” to perform the steps of the independent claims amount to no more than adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer or merely using a computer as a tool to perform an abstract idea, and generally linking the use of the judicial exception to a particular technological environment or field of use; these cannot provide an inventive concept (Step 2B). As such, claims 1, 8, and 15 are not patent eligible.

Dependent claims 2-4, 6-7, 9-11, 13-14, 16-18, and 20-23 are also ineligible for the same reasons given with respect to claims 1, 8, and 15. The dependent claims describe additional mental processes and mathematical concepts that can likewise be performed mentally or with the aid of pen and paper: wherein the plurality of…models comprises a first model and a second model, the method further comprising: generating, by the first model, a strong first prediction corresponding to a first one of the plurality of targets; generating, from the second model, a strong second prediction corresponding to a second one of the plurality of targets; and generating the un-inferable result in response to determining that the first target is different from the second target (claims 2, 9, and 16) (e.g., by thinking of/writing out two computations that output differing, confident labels associated with predetermined labels from the same inputs, the results being inconclusive); wherein the strong first prediction is based on a first mean plus two standard deviations confidence threshold on a first probability curve corresponding to the first model, and wherein the strong second prediction is based on a second mean plus two standard deviations confidence threshold on a second probability curve corresponding to the second model (claims 3, 10, and 17) (e.g., by thinking of/writing out statistical mathematical calculations for determining the confidence of the computation outputs); computing, for each of the plurality of…models, one of a plurality of model evaluation measures that measure a performance of one of the plurality of…models; and selecting a K subset of models from the plurality of…models based on their corresponding model evaluation measures, wherein the K subset of models comprises a set of important features (claims 4, 11, and 18) (e.g., by thinking of/writing out error calculations of the computation label outputs compared to the predetermined labels, and choosing computations based on the error calculations, the computations further including parameter variables); determining a confidence threshold for each one of the S models in the set of S models; and utilizing the confidence threshold to determine whether one or more of the plurality of predictions is a strong prediction (claims 6, 13, and 20) (e.g., by thinking of/writing out an uncertainty value against which to compare the computation outputs in order to determine confidence in the outputs); determining that the plurality of predictions comprise a plurality of strong first predictions that each correspond to a first one of the plurality of targets; determining that the plurality of predictions comprise a single strong second prediction that corresponds to a second one of the plurality of targets; and reporting the un-inferable result in response to determining that the first target is different from the second target (claims 7 and 14) (e.g., by thinking of/writing out which outputs are above the uncertainty value, which correspond to predetermined labels, and determining that two outputs differ); and determining that there are not any strong prediction conflicts between the plurality of predictions; and generating an output result as inferable based on a score data test (claims 21-23) (e.g., by thinking of/writing out that the outputs are in agreement with equal corresponding confidence values, and giving a final output value). Again, the dependent claims continue to cover performance of the limitations in the mind as inherited from the independent claims (Step 2A, Prong 1).
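The “mean plus two standard deviations” threshold recited in claims 3, 10, and 17 is a standard statistical computation. A minimal sketch, with hypothetical confidence values and an illustrative function name (neither taken from the application), might look like:

```python
# Illustrative "mean plus two standard deviations" threshold; the score
# values below are made up for demonstration.
from statistics import mean, stdev

def strong_threshold(confidences):
    # Threshold set on this model's own distribution of confidence scores.
    return mean(confidences) + 2 * stdev(confidences)

historical_scores = [0.62, 0.70, 0.66, 0.74, 0.68]
threshold = strong_threshold(historical_scores)  # ~0.769 for these values
is_strong = 0.92 > threshold                     # a 0.92 prediction is "strong"
```

Each model gets its own threshold from its own score distribution, which is what makes the recited threshold “respective” to the model that generated the prediction.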
The recitation of “machine learning” in dependent claims 2, 4, 9, 16, and 18, and the recitation in claims 4, 11, and 18 of “building the plurality of…models based on a set of training data”, are again recited at a high level and amount to adding the words “apply it” (or an equivalent) to the judicial exception (i.e., “building the plurality of…models based on a set of training data”) (see MPEP 2106.05(f)), or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)), and generally linking the use of the judicial exception to a particular technological environment or field of use (i.e., “machine learning”) (see MPEP 2106.05(h)); they do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (Step 2A, Prong 2). The additional elements in the claims do not amount to significantly more than the abstract idea. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements used to perform the steps of the dependent claims amount to no more than mere instructions to apply the exception using generic computer components, adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer or merely using a computer as a tool to perform an abstract idea, and generally linking the use of the judicial exception to a particular technological environment or field of use; these cannot provide an inventive concept (Step 2B).
As such, the additional elements of dependent claims 2-4, 6-7, 9-11, 13-14, 16-18, and 20-23, individually or in combination, do not amount to significantly more than the abstract idea, do not provide an inventive concept, and do not impose a meaningful limit that integrates the judicial exception into a practical application; therefore, the dependent claims are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4, 6-11, 13-18, and 20-23 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (“Uncertainty and Confidence in Land Cover Classification Using a Hybrid Classifier Approach”, 2004), hereinafter Liu, in view of Christiansen et al. (“Ultrasound image analysis using deep neural networks for discriminating between benign and malignant ovarian tumors: comparison with expert subjective assessment”, 2020), hereinafter Christiansen.

Regarding claims 1, 8, and 15, Liu teaches a computer-implemented method; an information handling system comprising: one or more processors; a memory coupled to at least one of the processors; a set of computer program instructions stored in the memory and executed by at least one of the processors in order to perform actions of; and a computer program product stored in a computer readable storage medium, comprising computer program code that, when executed by an information handling system, causes the information handling system to perform actions comprising (section “Dataset” teaches neural networks processing digital pixels from a dataset, which are known to be executed on a computer, wherein the computer is known to include one or more memories communicatively coupled to one or more processors for executing programs to perform the embodiments of the disclosure): identifying a plurality of machine learning models to binary classify a set of data, wherein each one of the plurality of machine learning models produces one of a plurality of predictions corresponding to one of a plurality of targets (page 965, section “Hybrid Classifier…” teaches “hybrid classifier that
combines the results of Decision Tree and fuzzy ARTMAP [neural] network (identifying a plurality of machine learning models)” that generates output labels, wherein “Let Di denote the class label at pixel i using DTs, and Ai denote the class label at pixel i using ARTMAP (each one of the plurality of machine learning models produces one of a plurality of predictions corresponding to one of a plurality of targets)”; and page 966, section Dataset teaches “The final training data and associated class labels (targets) were compiled”); detecting one or more strong prediction conflicts between the plurality of predictions in response to testing the set of data against each of the plurality of machine learning models (page 965, section “Hybrid Classifier…” and Fig. 2 teach based on the dataset outputs (in response to testing the set of data against each of the plurality of machine learning models), “Disagreement results when the following condition holds true: Ai ≠ Di (detecting one or more…conflicts between the plurality of predictions). Our hybrid model consists of three levels of agreement and three levels of disagreement between the two classifiers based on thresholds (detecting one or more…conflicts)”. When the models have the same confidence in disagreeing outputs (strong prediction conflicts), “vi ≠ pi, the hybrid model uses a priori knowledge to assign class labels”.
Further, “For the disagreement context, Ai ≠ Di, because the final class label is assigned as the higher confidence level out of ARTMAP and DT (Equation 1), the final combined confidence level will be assigned with the higher level one (max(vi, pi))”), wherein the one or more strong prediction conflicts correspond to one or more models in the plurality of machine learning models resulting in one or more different strong predictions based on the set of data (page 965, section “Hybrid Classifier…” and Table 4 teach “Our hybrid model consists of three levels of agreement and three levels of disagreement between the two classifiers based on thresholds”. When the models have the same confidence in disagreeing outputs (strong prediction conflicts) “vi ≠ pi, the hybrid model uses a priori knowledge to assign class labels”. Further, “For the disagreement context, Ai ≠ Di (different), because the final class label is assigned as the higher confidence level out of ARTMAP and DT (Equation 1), the final combined confidence level will be assigned with the higher level one (max(vi, pi))” or based on model accuracy after determining the prediction confidences are equal. Further still, pages 968-969 teach determining low level accuracy results being undependable based on disagreements of the classifiers, since “Low level agreement expresses the low prediction accuracy probably caused either by low quality training samples or by spectral similarity of different land cover types…[and] disagreement leads to misclassifications”), and reporting an un-inferable result of the testing in response to detecting the one or more strong prediction conflicts, wherein the un-inferable result is an outcome that is not predictable (page 965, section “Hybrid Classifier…” teaches “Our hybrid model consists of three levels of agreement and three levels of disagreement between the two classifiers based on thresholds”. 
When the models have the same confidence in disagreeing outputs (strong prediction conflicts) “vi ≠ pi, the hybrid model uses a priori knowledge to assign class labels”. Further, “For the disagreement context, Ai ≠ Di, because the final class label is assigned as the higher confidence level out of ARTMAP and DT (Equation 1), the final combined confidence level will be assigned with the higher level one (max(vi, pi))”. Further still, pages 968-969 teach determining low level accuracy (un-inferable…not predictable) results being undependable based on disagreements of the classifiers, since “Low level agreement expresses the low prediction accuracy (un-inferable…not predictable) probably caused either by low quality training samples or by spectral similarity of different land cover types…[and] disagreement leads to misclassifications”). However, Liu does not explicitly teach wherein a strong prediction exceeds a respective confidence threshold on a probability curve corresponding to the machine learning model that generated the strong prediction, comprising: ranking the set of important features corresponding to the K subset of models; identifying a set of distinct features based on the ranking; for each of the set of distinct features: selecting one of the set of distinct features; removing a portion of the training data corresponding to the selected distinct feature; testing the each of the K subset of models on a subset of the training data that excludes the removed portion of the training data; and selecting one of the K subset of models based on the testing; and designating the selected K subset of models as one of a set of S models; and utilizing the set of S models during the testing of the set of data to detect the one or more strong prediction conflicts. 
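One possible reading of Liu's cited disagreement handling (agreement keeps the shared label; disagreement takes the label from the higher-confidence classifier, max(vi, pi); equal confidence falls back to a priori knowledge) can be rendered as a short sketch. The function name and the a priori fallback parameter are illustrative only; this is the examiner's-citation paraphrase as Python, not Liu's actual implementation.

```python
# Illustrative reading of the quoted passage, not Liu's actual code.
# a_label/v_conf follow the ARTMAP notation (Ai, vi); d_label/p_conf
# follow the decision-tree notation (Di, pi).
def hybrid_label(a_label, v_conf, d_label, p_conf, a_priori=None):
    if a_label == d_label:
        # Agreement context: keep the label, carry the higher confidence.
        return a_label, max(v_conf, p_conf)
    if v_conf != p_conf:
        # Disagreement: the higher-confidence classifier's label wins
        # (final confidence is max(vi, pi) per Equation 1).
        return (a_label, v_conf) if v_conf > p_conf else (d_label, p_conf)
    # Equal confidence in disagreeing labels: fall back to a priori
    # knowledge rather than declining to answer.
    return a_priori, v_conf
```

On this reading the hybrid model always assigns a final label; whether that forced inference reads on the claimed un-inferable reporting is the dispute addressed in the Response to Arguments above.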
Christiansen teaches wherein a strong prediction exceeds a respective confidence threshold on a probability curve corresponding to the machine learning model that generated the strong prediction, comprising: ranking the set of important features corresponding to the K subset of models; identifying a set of distinct features based on the ranking (sections Model Building, Results, and Table 2 teach each model (machine learning model) and the types of data they were trained on (important features) for determining which training set type gives the best results (ranking/identifying a set of distinct features based on the ranking) of predicted probabilities with associated confidence scores on confidence intervals measured on population curves, and further the probabilities are mapped into ROC curves); for each of the set of distinct features: selecting one of the set of distinct features; removing a portion of the training data corresponding to the selected distinct feature; testing the each of the K subset of models on a subset of the training data that excludes the removed portion of the training data (section Results and Table 2 teach excluding a certain type of data from the training data and determining prediction statistics (testing)); and selecting one of the K subset of models based on the testing; and designating the selected K subset of models as one of a set of S models (sections Results and Table 2 teach the ensemble of models performing the best); and utilizing the set of S models during the testing of the set of data to detect the one or more strong prediction conflicts (sections Methods: “Dataset” and “Model Building”-“Training process” teach ensemble models including binary classifiers using soft voting giving an inconclusive output (un-inferable) if there is evidence of malignant and benign data model predictions and associated “confidence scores” (strong prediction conflicts), since “Using the prediction from the ensemble, tumors were classified
as benign or malignant (Ovry-Dx1) or as benign, inconclusive or malignant (Ovry-Dx2), by setting thresholds on the predicted probability of malignancy”). Further, Liu at least implies a computer-implemented method; an information handling system comprising: one or more processors; a memory coupled to at least one of the processors; a set of computer program instructions stored in the memory and executed by at least one of the processors in order to perform actions of; and a computer program product stored in a computer readable storage medium, comprising computer program code that, when executed by an information handling system, causes the information handling system to perform actions comprising, binary classify, and reporting an un-inferable result of the testing in response to detecting the one or more strong prediction conflicts (see mappings above); however Christiansen teaches a computer-implemented method; an information handling system comprising: one or more processors; a memory coupled to at least one of the processors; a set of computer program instructions stored in the memory and executed by at least one of the processors in order to perform actions of; and a computer program product stored in a computer readable storage medium, comprising computer program code that, when executed by an information handling system, causes the information handling system to perform actions comprising (page 162 teaches “An advantage of our model is that it is simple to use, as any center could upload a set of deidentified images directly from the workstation or hospital computer, to a cloud platform hosting the model, without the need to first assess subjectively the images or provide additional patient data”; wherein the computer is known to include one or more memories communicatively coupled to one or more processors for executing programs to perform the embodiments of the disclosure), binary classify (page 158 teaches utilizing binary classifiers), and reporting an 
un-inferable result of the testing in response to detecting the one or more strong prediction conflicts, wherein the un-inferable result is an outcome that is not predictable (sections Methods: “Dataset” and “Model Building” teach ensembles using soft voting that give an inconclusive output (un-inferable…non-predictable) if there is evidence of both malignant and benign data model predictions and associated “confidence scores” (strong prediction conflicts), since “Using the prediction from the ensemble, tumors were classified as benign or malignant (Ovry-Dx1) or as benign, inconclusive or malignant (Ovry-Dx2), by setting thresholds on the predicted probability of malignancy”).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement Christiansen’s teachings of ensemble soft voting outputs being inconclusive with differing classification types present in the ensemble into Liu’s teaching of determining disagreement among ensemble models and its effect on prediction accuracies in order to “improve performance over the individual models” (Christiansen, sections Methods: “Dataset” and “Model Building”).

Regarding claims 2, 9, and 16, the combination of Liu and Christiansen teaches all the claim limitations of claims 1, 8, and 15 above; and further teaches wherein the plurality of machine learning models comprises a first model and a second model, the method further comprising: generating, by the first model, a strong first prediction corresponding to a first one of the plurality of targets; generating, from the second model, a strong second prediction corresponding to a second one of the plurality of targets (Liu, page 965, section “Hybrid Classifier…”, page 966, and Table 4 teach “Disagreement results when the following condition holds true: Ai ≠ Di. 
Our hybrid model consists of three levels of agreement and three levels of disagreement between the two classifiers based on thresholds”; and both models produce a confident output (“when vi = pi”) (first/second model generates a strong prediction)); and generating the un-inferable result in response to determining that the first target is different from the second target (Liu, page 965, section “Hybrid Classifier…” and Table 4 teach “Our hybrid model consists of three levels of agreement and three levels of disagreement between the two classifiers based on thresholds…For the disagreement context, Ai ≠ Di (different), because the final class label is assigned as the higher confidence level out of ARTMAP and DT (Equation 1), the final combined confidence level will be assigned with the higher level one (max(vi, pi))” or based on model accuracy. Further, pages 968-969 teach determining low level accuracy (un-inferable) results being undependable based on disagreements of the classifiers, since “Low level agreement expresses the low prediction accuracy (un-inferable) probably caused either by low quality training samples or by spectral similarity of different land cover types…[and] disagreement leads to misclassifications”). Liu at least implies generating the un-inferable result in response to determining that the first target is different from the second target (see mappings above); however Christiansen teaches generating the un-inferable result in response to determining that the first target is different from the second target (sections Methods: “Dataset” and “Model Building” teach ensembles using soft voting that give an inconclusive output (un-inferable) if there is evidence of both malignant and benign data model predictions (different), since “Using the prediction from the ensemble, tumors were classified as benign or malignant (Ovry-Dx1) or as benign, inconclusive or malignant (Ovry-Dx2), by setting thresholds on the predicted probability of malignancy”). 
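The conflict logic mapped to claims 2, 9, and 16 (two models each emitting a strong prediction, but for different targets) can be sketched as follows. This is an illustrative reading of the claim language, not the applicant's or the references' actual implementation; the function name and the 0.9 threshold are hypothetical.

```python
def detect_conflict(pred_a, conf_a, pred_b, conf_b, threshold=0.9):
    """Report 'un-inferable' when two models are each confident yet disagree.

    pred_*: predicted target label; conf_*: model confidence in [0, 1].
    The fixed 0.9 cut-off stands in for the per-model confidence
    threshold recited in the claims.
    """
    strong_a = conf_a >= threshold
    strong_b = conf_b >= threshold
    if strong_a and strong_b and pred_a != pred_b:
        return "un-inferable"  # strong prediction conflict (claims 2, 9, 16)
    # No conflict: report the more confident model's prediction,
    # loosely mirroring Liu's max(vi, pi) disagreement resolution.
    return pred_a if conf_a >= conf_b else pred_b
```

For example, `detect_conflict("malignant", 0.95, "benign", 0.93)` yields "un-inferable", whereas the same disagreement with one weak model yields the confident model's label.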
Liu and Christiansen are combinable for the same rationale as set forth above with respect to claims 1, 8, and 15.

Regarding claims 3, 10, and 17, the combination of Liu and Christiansen teaches all the claim limitations of claims 2, 9, and 16 above; and further teaches wherein the strong first prediction is based on a first mean plus two standard deviations confidence threshold on a first probability curve corresponding to the first model, and wherein the strong second prediction is based on a second mean plus two standard deviations confidence threshold on a second probability curve corresponding to the second model (Christiansen, sections Method: “Statistical analysis” and Results teach “To compare the performance of the DNN models to that of SA in discriminating between benign and malignant tumors in the test set, the sensitivity, specificity, accuracy and area under the receiver-operating-characteristics (ROC) curve (AUC), with their 95% CI (first mean plus two standard deviations confidence threshold), were calculated”; thus, determining the strength of each model's predictions). Liu and Christiansen are combinable for the same rationale as set forth above with respect to claims 1, 8, and 15. 
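The mean-plus-two-standard-deviations threshold recited in claims 3, 10, and 17 can be sketched in a few lines. This is a minimal illustration of the claim language only (the examiner maps it to Christiansen's 95% CIs, which is a different statistical object); the function names are hypothetical.

```python
import statistics

def strong_threshold(probabilities):
    """Mean + 2*stddev over a model's probability curve (claims 3, 10, 17).

    For roughly normal scores, only about the top 2.5% of predictions
    clear this bar and count as 'strong'.
    """
    mu = statistics.mean(probabilities)
    sigma = statistics.pstdev(probabilities)
    return mu + 2 * sigma

def is_strong(prob, probabilities):
    """True if a single prediction exceeds its model's own threshold."""
    return prob > strong_threshold(probabilities)
```

Each model gets its own threshold computed from its own probability curve, which is why the claims recite a "first" and "second" threshold rather than a shared one.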
Regarding claims 4, 11, and 18, the combination of Liu and Christiansen teaches all the claim limitations of claims 1, 8, and 15 above; and further teaches building the plurality of machine learning models based on a set of training data; computing, for each of the plurality of machine learning models, one of a plurality of model evaluation measures that measure a performance of one of the plurality of machine learning models (Liu, section “Dataset” teaches “Both decision trees and fuzzy ARTMAP classifiers (machine learning models) were first trained on 80 percent of data and 20 percent of data was used to test the accuracy of classification (performance)”); and selecting a K subset of models from the plurality of machine learning models based on their corresponding model evaluating measures, wherein the K subset of models comprises a set of important features (Christiansen, sections Method: “Statistical analysis” and Results teach comparing the performance of each of multiple different models (based on their corresponding model evaluating measures), and determining the best performing models (selecting a K subset of models). Further, section Results and Table 2 teach each model and the types of data they were trained on (the K subset of models comprises a set of important features)). Liu and Christiansen are combinable for the same rationale as set forth above with respect to claims 1, 8, and 15. 
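The model-selection step of claims 4, 11, and 18 (train candidates, compute an evaluation measure for each, keep the best K) can be sketched as follows. The fit/score interface and the function name are assumptions for illustration, echoing Liu's 80/20 train/test split; they are not the applicant's implementation.

```python
def select_top_k(models, X_train, y_train, X_test, y_test, k=2):
    """Train each candidate model, score it on held-out data, keep the top k.

    `models` maps a name to any object exposing fit(X, y) and
    score(X, y) -> float (a hypothetical interface). The score is the
    'model evaluation measure' of the claims.
    """
    scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)                  # build on training data
        scores[name] = model.score(X_test, y_test)   # evaluation measure
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k], scores                        # the K subset of models
```

Under this reading, the "K subset" is simply the top of the ranking by evaluation measure; the claims additionally tie each selected model to the features it was trained on.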
Regarding claims 6, 13, and 20, the combination of Liu and Christiansen teaches all the claim limitations of claims 1, 8, and 15 above; and further teaches determining a confidence threshold for each one of the S models in the set of S models; and utilizing the confidence threshold to determine whether one or more of the plurality of predictions is a strong prediction (Christiansen, sections Method: “Statistical analysis” and Results teach “To compare the performance of the DNN models to that of SA (to determine whether one or more of the plurality of predictions is a strong prediction) in discriminating between benign and malignant tumors in the test set, the sensitivity, specificity, accuracy and area under the receiver-operating-characteristics (ROC) curve (AUC), with their 95% CI (determining/using the confidence threshold), were calculated”; thus, determining the strength of each model's predictions). Liu and Christiansen are combinable for the same rationale as set forth above with respect to claims 1, 8, and 15. 
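Christiansen's thresholded soft-voting scheme, which the examiner repeatedly cites for the "inconclusive" (un-inferable) output, can be sketched as follows. The 0.3/0.7 cut-offs are illustrative placeholders, not the thresholds actually reported in the paper.

```python
def soft_vote(probabilities, low=0.3, high=0.7):
    """Average member probabilities (soft voting), then apply dual
    thresholds to get Christiansen's three-way Ovry-Dx2-style output.

    Ensemble probabilities between the two cut-offs fall into the
    'inconclusive' band that the rejection maps to 'un-inferable'.
    """
    p = sum(probabilities) / len(probabilities)  # ensemble probability
    if p >= high:
        return "malignant"
    if p <= low:
        return "benign"
    return "inconclusive"  # the un-inferable band
```

When ensemble members strongly disagree (e.g. probabilities 0.9 and 0.1), the average lands between the cut-offs and the output is "inconclusive", which is the behavior the rejection equates with reporting a strong prediction conflict.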
Regarding claims 7 and 14, the combination of Liu and Christiansen teaches all the claim limitations of claims 1 and 8 above; and further teaches determining that the plurality of predictions comprise a plurality of strong first predictions that each correspond to a first one of the plurality of targets (Liu, pages 965-966 teach “For example, at five voters level, approximately 65 percent of pixels are assigned a class for which all five voters (trained ARTMAP networks) agree (plurality of predictions), and the prediction accuracy of these pixels is approximately 71 percent (plurality of strong first predictions)”, and then comparing the final output class to the decision tree’s output); determining that the plurality of predictions comprise a single strong second prediction that corresponds to a second one of the plurality of targets (Liu, page 965, section “Hybrid Classifier…”, page 966, and Table 4 teach “Disagreement results when the following condition holds true: Ai ≠ Di. Our hybrid model consists of three levels of agreement and three levels of disagreement between the two classifiers based on thresholds”; and both models produce a confident output (“when vi = pi”) including the decision tree output (single strong second prediction)); and reporting the un-inferable result in response to determining that the first target is different from the second target (Liu, page 965, section “Hybrid Classifier…” and Table 4 teach “Our hybrid model consists of three levels of agreement and three levels of disagreement between the two classifiers based on thresholds…For the disagreement context, Ai ≠ Di (different), because the final class label is assigned as the higher confidence level out of ARTMAP and DT (Equation 1), the final combined confidence level will be assigned with the higher level one (max(vi, pi))” or based on model accuracy. 
Further, pages 968-969 teach determining low level accuracy (un-inferable) results being undependable based on disagreements of the classifiers, since “Low level agreement expresses the low prediction accuracy (un-inferable) probably caused either by low quality training samples or by spectral similarity of different land cover types…[and] disagreement leads to misclassifications”). Liu at least implies reporting the un-inferable result in response to determining that the first target is different from the second target (see mappings above); however Christiansen teaches reporting the un-inferable result in response to determining that the first target is different from the second target (sections Methods: “Dataset” and “Model Building” teach ensembles using soft voting that give an inconclusive output (un-inferable) if there is evidence of both malignant and benign data model predictions (different), since “Using the prediction from the ensemble, tumors were classified as benign or malignant (Ovry-Dx1) or as benign, inconclusive or malignant (Ovry-Dx2), by setting thresholds on the predicted probability of malignancy”). Liu and Christiansen are combinable for the same rationale as set forth above with respect to claims 1, 8, and 15.

Regarding claims 21-23, the combination of Liu and Christiansen teaches all the claim limitations of claims 1, 8, and 15 above; and further teaches determining that there are not any strong prediction conflicts between the plurality of predictions; and generating an output result as inferable based on a score data test (Liu, page 965, section “Hybrid Classifier…” and Fig. 2 teach “agreement between two classifiers results when the following condition holds true: Ai = Di” and the output is the agreed upon predicted “class label”).

Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
Crabtree et al. (US Pub. 2022/0012814) teaches utilizing machine learning models and making informed, optimized decisions on a “probability curve”.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLINT MULLINAX whose telephone number is 571-272-3241. The examiner can normally be reached Mon - Fri 8:00-4:30 PT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached at 571-270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.M./Examiner, Art Unit 2123
/ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123

Prosecution Timeline

Apr 14, 2021
Application Filed
Jun 10, 2025
Non-Final Rejection — §101, §103
Aug 19, 2025
Interview Requested
Sep 03, 2025
Examiner Interview Summary
Sep 03, 2025
Applicant Interview (Telephonic)
Sep 12, 2025
Response Filed
Dec 24, 2025
Final Rejection — §101, §103
Jan 26, 2026
Interview Requested
Feb 04, 2026
Applicant Interview (Telephonic)
Feb 04, 2026
Examiner Interview Summary
Feb 24, 2026
Response after Non-Final Action
Mar 27, 2026
Request for Continued Examination
Apr 01, 2026
Response after Non-Final Action
Apr 03, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561620
Machine Learning-Based URL Categorization System With Noise Elimination
2y 5m to grant Granted Feb 24, 2026
Patent 12554962
CONFIGURABLE PROCESSOR ELEMENT ARRAYS FOR IMPLEMENTING CONVOLUTIONAL NEURAL NETWORKS
2y 5m to grant Granted Feb 17, 2026
Patent 12547887
SYSTEM FOR DETECTING ELECTRIC SIGNALS
2y 5m to grant Granted Feb 10, 2026
Patent 12518169
SYSTEMS AND METHODS FOR SAMPLE GENERATION FOR IDENTIFYING MANUFACTURING DEFECTS
2y 5m to grant Granted Jan 06, 2026
Patent 12493771
DEEP LEARNING MODEL FOR ENERGY FORECASTING
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
86%
With Interview (+38.3%)
4y 4m
Median Time to Grant
High
PTA Risk
Based on 123 resolved cases by this examiner. Grant probability derived from career allow rate.
