Prosecution Insights
Last updated: April 19, 2026
Application No. 17/827,363

Systems and Methods for Semi-Supervised Active Learning

Final Rejection: §101, §103

Filed: May 27, 2022
Examiner: DIEP, DUY T
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Terramera Inc.
OA Round: 2 (Final)

Grant Probability: 25% (At Risk)
OA Rounds: 3-4
To Grant: 4y 2m
With Interview: 30%

Examiner Intelligence

Career Allow Rate: 25% (grants only 25% of cases: 5 granted / 20 resolved; -30.0% vs TC avg)
Interview Lift: +5.5% (moderate lift, roughly +6%, comparing resolved cases with vs. without an interview)
Avg Prosecution (typical timeline): 4y 2m
Career History: 59 total applications across all art units, 39 currently pending

Statute-Specific Performance

§101: 34.1% (-5.9% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 2.3% (-37.7% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Compared against a Tech Center average estimate • Based on career data from 20 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The arguments filed mm/dd/2025 have been entered. Claims 1-6, 8-10, 12, 14-16, 18-20, 22, 23, 25, 26 remain pending in the application.

Applicant's arguments with respect to the rejections of claims 1-6, 8-10, 12, 14-16, 18-20, 22, 23, 25, 26 under 35 U.S.C. § 101, filed 12/16/2025, have been considered and are not persuasive. Therefore, the previous rejections as set forth in the previous Office action are maintained.

The applicant argues that claim 1 is patent-eligible under 35 U.S.C. § 101 because, like USPTO Subject Matter Eligibility Example 39, the claim is not directed to a judicial exception at Step 2A, Prong I. Specifically, the applicant asserts that claim 1 recites a processor-implemented method for creating training data and training a machine learning model, analogous to the computer-implemented neural network training method of Example 39, and does not recite any mathematical relationships, formulas, or calculations, any mental process that can practically be performed in the human mind, or any method of organizing human activity. The applicant further contends that the examiner improperly characterized the claim as a mental process, argues that AI-based claim limitations cannot practically be performed in the human mind and should not be grouped as mental processes, and therefore asserts that claim 1 and its dependent claims comply with § 101.

The examiner respectfully disagrees. Applicant's argument is not persuasive. As previously explained, claim 1 recites a series of steps (selecting samples, generating labels and predictions, associating confidence measures, comparing confidence measures to thresholds, modifying thresholds, and generating subsequent labels) that fall within the judicial exception of mental processes under Step 2A, Prong I. These limitations recite evaluations, comparisons, and labeling determinations that can practically be performed in the human mind or with pen and paper. Merely reciting that certain steps are performed by "a machine learning model," and reciting "training the one or more parameters of the machine learning model over a labeled training dataset," does not remove the claim from this grouping where the claim fails to recite any specific model architecture, technical configuration, or algorithmic implementation that would preclude mental performance.

Applicant's reliance on Example 39 is misplaced. Unlike Example 39, which recites specific technical operations (e.g., defined image transformations that cannot practically be performed in the human mind, distinct multi-stage training sets, and concrete retraining steps that improve neural network performance), claim 1 recites its steps at a high level of generality and uses the machine learning model as a generic tool to execute the abstract idea. The claimed thresholds and confidence measures are numerical values that a human mind is capable of determining and comparing. Similarly, a human mind is capable of performing the prediction, labeling, and selection steps. The claim does not specify how the prediction confidence measures or thresholds are determined, nor does it identify a technological improvement to the functioning of the computer or the machine learning model. Accordingly, the claim does not integrate the judicial exception into a practical application under Step 2A, Prong II.
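The dispute above turns on a short sequence of operations: comparing a prediction's confidence measure to one or more thresholds to yield a labeling determination, modifying a threshold, and generating further labels against the modified threshold. As a purely illustrative sketch (not taken from the application or the cited references; the classifier interface, helper names, and threshold values are assumptions), that sequence can be written as:

```python
# Illustrative only: a generic confidence-thresholded labeling pass of the kind
# the claim language describes. Names and threshold values are hypothetical.
from typing import Callable, List, Tuple

def confidence_labeling_round(
    predict: Callable[[object], Tuple[str, float]],  # assumed model API: (label, confidence)
    unlabeled: List[object],
    threshold: float,
) -> List[Tuple[object, str]]:
    """Keep the model's label for every sample whose confidence clears the threshold."""
    labeled = []
    for sample in unlabeled:
        pred_label, confidence = predict(sample)
        # "comparing the confidence measure to one or more thresholds
        #  to yield a labeling determination"
        if confidence >= threshold:
            labeled.append((sample, pred_label))
    return labeled

# First pass: generate first labels against an initial threshold.
# first_labels = confidence_labeling_round(predict, pool, threshold=0.90)
#
# "modifying at least one of the one or more thresholds to yield a modified threshold",
# for example by relaxing it before a later pass:
# modified_threshold = 0.90 - 0.05
#
# Later pass: "generating a second label for a second sample based on the modified threshold".
# second_labels = confidence_labeling_round(predict, remaining_pool, threshold=modified_threshold)
```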
Further, the additional elements, whether considered individually or as an ordered combination, do not amount to significantly more than the abstract idea under Step 2B. The recited machine learning model, labeling techniques, and training steps amount to no more than generic computer implementation and instructions to apply the abstract idea, without an unconventional arrangement or a technical improvement. Therefore, claim 1 remains not patent eligible under 35 U.S.C. § 101, and the rejection is maintained.

Applicant's arguments with respect to the rejections of claims 1-6, 8-10, 12, 14-16, 18-20, 22, 23, 25, 26 under 35 U.S.C. § 103, filed 12/16/2025, have been considered and are not persuasive. Therefore, the previous rejections as set forth in the previous Office action are maintained.

The applicant argues that the cited combination of Wang in view of Huszar fails to teach claim 1 because Wang allegedly does not disclose the fifth limitation of "selecting and performing at least one of a plurality of labeling techniques based on the labeling determination to yield the first label," and because Wang allegedly discloses only a single confidence threshold rather than modifying at least one threshold prior to generating second labels, as required by the seventh and eighth limitations. The applicant further argues that Huszar discloses only a single human-labeling technique rather than a selectable plurality of labeling techniques, and that Wang's discussion of using a "different threshold" merely reflects re-running a confidence model rather than explicitly modifying a threshold before generating second labels. The applicant concludes that none of the cited references, alone or in combination, discloses or suggests the claimed sequence of selecting among multiple labeling techniques and modifying a threshold prior to generating second labels.

The examiner respectfully disagrees.

First, with respect to the 5th limitation, the applicant argues that the cited references fail to disclose the limitation as claimed, asserting that Huszar discloses only a single labeling technique and does not explicitly enumerate multiple techniques. This argument improperly narrows the claim. Under the broadest reasonable interpretation, a plurality of labeling techniques requires only more than one technique and does not require explicit enumeration or naming of specific techniques within a single reference. As set forth in the rejection, Wang discloses automated confidence-based labeling of unknown data using a machine learning model and confidence thresholds, while Huszar discloses an active learning system in which labeling is selectively performed by a human rater via a labeling UI. These two teachings constitute distinct labeling techniques. When Wang is combined with Huszar, a plurality of labeling techniques is provided, from which at least one technique may be selected based on the labeling determination. A person of ordinary skill in the art would have been motivated to combine these teachings to reduce labeling cost and improve training efficiency, as expressly taught by Huszar. Accordingly, the combination teaches the 5th limitation.

Second, the applicant further argues that Wang uses a single, unmodified threshold and therefore does not disclose the claim limitation "modifying at least one of the one or more thresholds to yield a modified threshold." This argument is not persuasive.
Wang expressly discloses that confidence scores are compared to a threshold to determine which predicted labels are deemed confident, and further states that "a different threshold could be used" at paragraph 109, as recited in the previous Office action. Under the broadest reasonable interpretation, using a threshold value different from a previously used threshold value constitutes modifying the previous threshold to yield a modified threshold. The claim does not require a particular mathematical operation, algorithm, or mechanism for modifying the threshold, nor does it exclude human or system selection of a different threshold value. Thus, Wang teaches the 7th limitation.

Third, the applicant further contends that Wang's later labels result solely from updated confidence scores rather than a modified threshold and therefore do not satisfy the 8th limitation. This argument improperly conflates confidence scores with the threshold comparison. Wang separately discloses recomputing updated confidence scores and applying those scores to a threshold to select and label the updated subset of unknown data extracted from the remaining unknown data, at paragraph 106 ("The remaining unknown data (528) is the portion of the unknown data (508) that remains after the subset of unknown data (518) has been extracted from the unknown data (508) ... Thus, the updated prediction model (524) may be executed on the remaining unknown data (528) to output the updated predicted labels (530)") and paragraph 109 ("Again, the updated confidence scores (532) is applied to the threshold described above. However, a different threshold could be used. The updated subset of unknown data (534) with their updated confident labels (536) is extracted from the remaining unknown data (528)."). This updated subset corresponds to the second subset of unknown data previously described by Wang in paragraph 36, as recited in the Office action, and corresponds to the claimed second sample with a second label based on the modified threshold. The remaining unknown data is merely an intermediate pool and is distinct from the subsequently extracted subset with confident labels. Under the broadest reasonable interpretation, when the system uses a threshold value different from a previously applied threshold, the previously applied threshold has been modified, and the resulting confident labels for the updated subset of unknown data are generated based on the comparison with that different threshold. Accordingly, Wang teaches generating a second label for a second sample based on the modified threshold, as claimed. Therefore, the rejections of the claims will be maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6, 8-10, 12, 14-16, 18-20, 22, 23, 25, 26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, Step 1: Claim 1 recites a method, one of the four statutory categories of patentable subject matter.

Step 2A, Prong I: Claim 1 further recites the limitations of:

"selecting a first sample from an unlabeled training dataset" This limitation recites a mental process. A person can mentally select a sample from an unlabeled training dataset,
for example, selecting an image from an album of images.

"generating a first label for the first sample" This limitation recites a mental process. A person can mentally generate a label for the sample, such as mentally generating an animal label for the selected image.

"generating, ..., a first prediction based on the first sample and one or more parameters of the machine learning model, the first prediction associated with a confidence measure" This limitation recites a mental process. A person can mentally generate a prediction based on the sample and the parameters of the machine learning model, with the prediction associated with a confidence measure, such as predicting whether the selected image belongs to the animal category based on the content of the image and the parameters; the person can also mentally associate a confidence measure with the prediction.

"comparing the confidence measure to one or more thresholds to yield a labeling determination" This limitation recites a mental process. A person can mentally compare the confidence measure to a threshold to determine whether the person can label the image. The comparison can be performed mentally, such as comparing two values to determine which is greater.

"modifying at least one of the one or more thresholds to yield a modified threshold" This limitation recites a mental process. A person can mentally or manually modify the threshold, such as by changing the value of the threshold.

"generating a second label for a second sample based on the modified threshold" This limitation recites a mental process. A person can mentally generate a second label for another sample based on the modified threshold; for example, the person may select another picture from a picture album, determine a label corresponding to the selected image (such as a reptile category), and compare the determination with a modified threshold to confirm the determination.

Step 2A, Prong II: Claim 1 further recites the limitations of:

"... by the machine learning model ..." This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The claim recites generating a prediction by the machine learning model without reciting the configuration or the technical steps by which the machine learning model outputs a prediction based on the model's parameters.

"selecting and performing at least one of a plurality of labeling techniques based on the labeling determination to yield the first label" This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The claim recites selecting and performing labeling techniques without reciting what labeling technique is performed or utilized, or that a new labeling technique is introduced to perform the method.
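To make a "plurality of labeling techniques" concrete, one common arrangement (sketched hypothetically below, in the spirit of dependent claims 2-5; none of the names, threshold values, or the exact dispatch logic is taken from the application) applies automatic pseudo-labeling when confidence clears a semi-supervised threshold and defers to a human oracle when confidence falls below a lower active-learning threshold:

```python
# Hypothetical dispatch between two labeling techniques based on a labeling
# determination; names and threshold values are assumptions, not the claims.
from typing import Callable, Optional, Tuple

def select_and_label(
    sample: object,
    prediction: Tuple[str, float],            # (predicted_label, confidence)
    query_oracle: Callable[[object], str],    # e.g., a human annotator via a labeling UI
    ssl_threshold: float = 0.90,              # semi-supervised learning threshold
    al_threshold: float = 0.30,               # active learning threshold (lower than ssl_threshold)
) -> Optional[Tuple[str, str]]:
    pred_label, confidence = prediction
    if confidence > ssl_threshold:
        # Confident enough to keep the model's own prediction as a pseudo-label.
        return ("pseudo-label", pred_label)
    if confidence < al_threshold:
        # Too uncertain: query the oracle for a ground-truth label.
        return ("oracle", query_oracle(sample))
    return None  # mid-confidence samples are deferred to a later round
```

The ordering assumed here, with the active-learning threshold below the semi-supervised threshold, mirrors the relationship recited in claim 5.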
“training the one or more parameters of the machine learning model over a labeled training dataset, the labeled training dataset comprising the sample and the label” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The claim recites training the parameters of the machine learning model over a labeled training data set without reciting how the training is performed or if any specific configuration of a new inventive machine learning model is utilized or an inventive method for training the machine learning model is introduced. Step 2B: When considered individually or in combination, the additional limitations and elements of claim 1 does not amount to significantly more than the judicial exception for the same reasons discussed above as to why the additional limitations do not integrate the abstract idea into a practical application. The additional elements of outlined in Step 2A performing functions as designed simply accomplishes execution of the abstract ideas. The additional element “... by the machine learning model ...” recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f). The claim recites generating a prediction by the machine learning model without reciting the configuration or the technical steps of the machine learning model to output a prediction based on the model’s parameters. The additional element “selecting and performing at least one of a plurality of labeling techniques based on the labeling determination to yield the first label” recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f). The claim recites selecting and performing labeling techniques without reciting what labeling technique is performed or utilized or a new labeling technique is introduced to perform the method. The additional element “training the one or more parameters of the machine learning model over a labeled training dataset, the labeled training dataset comprising the sample and the label” recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f). The claim recites training the parameters of the machine learning model over a labeled training data set without reciting how the training is performed or if any specific configuration of a new inventive machine learning model is utilized or an inventive method for training the machine learning model is introduced. In conclusions from above for the elements considered as a mental process, and elements reciting additional element of instruction to apply an exception as identified in MPEP 2106.05(f) are carried over and do not provide significantly more than the abstract idea. Looking at the limitations in combination and the claims as a whole does not change this conclusion and the claim is ineligible. Therefore, additional limitations of claim 1 do not amount to significantly more than the judicial exception. Thus, claim 1 recites abstract ideas with additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. 
Therefore, claim 1 is not patent eligible. Regarding claim 2 depends on claim 1 thus the rejection of claim 1 is incorporated. Claim 2 recites the limitation: “The method according to claim 1 wherein the plurality of labeling techniques comprise semi-supervised learning and active learning” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The limitation recites what labeling techniques may be utilized without reciting how these labeling techniques is performed or configured as a new inventive method. Thus, claim 2 recites additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 2 is not patent eligible. Regarding claim 3 depends on claim 2 thus the rejection of claim 2 is incorporated. Claim 3 recites the limitation: “comparing the confidence measure to one or more thresholds comprises determining whether the confidence measure is greater than a semi-supervised learning threshold” The limitation recites a mental process. A person can mentally compare the confidence measure to one or more thresholds to determine if the measure is greater than the threshold. The comparison between two values can be performed mentally by a person’s mind. “selecting and performing at least one of a plurality of labeling techniques comprises, in response to determining that the confidence measure is greater than a semi-supervised learning threshold, assigning a pseudo-label to the first sample based on the prediction” The limitation recites a mental process. A person can mentally determine a confidence in their selection based on a threshold, then further label their selection such as selecting and labeling an image with a pseudo label based on the prediction. Thus, claim 3 recites abstract ideas rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 3 is not patent eligible. Regarding claim 4 depends on claim 3 thus the rejection of claim 3 is incorporated. Claim 4 recites the limitation: “The method according to claim 3 wherein selecting and performing at least one of a plurality of labeling techniques comprises, in response to determining that the confidence measure is less than or equal to the semi-supervised learning threshold, querying an oracle for a ground-truth label for the first sample” The limitation recites a mental process. A person can mentally determine if the confidence measure is less than or equal to a threshold, then querying an oracle for a ground truth label such as by requesting the correct verified label for a piece of data from an expert. Thus, claim 4 recites abstract ideas rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 4 is not patent eligible. Regarding claim 5 depends on claim 3 thus the rejection of claim 3 is incorporated. 
Claim 5 recites the limitation: “comparing the confidence measure to one or more thresholds comprises determining whether the confidence measure is less than an active learning threshold, the active learning threshold less than the semi-supervised learning threshold” The limitation recites a mental process. A person can mentally compare the confidence measure to one or more thresholds to determine if the measure is less than or equal to the threshold. The comparison between two values can be performed mentally by a person’s mind. “selecting and performing at least one of a plurality of labeling techniques comprises, in response to determining that the confidence measure is less than an active learning threshold, assigning a pseudo-label to the first sample based on the prediction” The limitation recites a mental process. A person can mentally determine a confidence in their selection based on a threshold, then further label their selection such as selecting and labeling an image with a pseudo label based on the prediction. Thus, claim 5 recites abstract ideas rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 5 is not patent eligible. Regarding claim 6 depends on claim 3 thus the rejection of claim 3 is incorporated. Claim 6 recites the limitation: “assigning the pseudo-label comprises generating the pseudo-label for the first sample based on the prediction” The limitation recites a mental process. A person can mentally assign a pseudo-label for a sample based on the prediction. “wherein the confidence measure comprises a measure of uncertainty in the first prediction; determining whether the confidence measure is greater than a semi-supervised learning threshold comprises determining whether the measure of uncertainty is less than the semi-supervised learning threshold” The limitation recites a mental process. A person can mentally compare value and determine which value is greater or less than another value. Thus, claim 6 recites abstract ideas rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 6 is not patent eligible. Regarding claim 8 depends on claim 1 thus the rejection of claim 1 is incorporated. Claim 8 recites the limitation: “The method according to claim 1 wherein selecting the first sample comprises random sampling” The limitation recites a mental process. A person can mentally select a sample randomly. Thus, claim 8 recites abstract ideas rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 8 is not patent eligible. Regarding claim 9 depends on claim 1 thus the rejection of claim 1 is incorporated. Claim 9 recites the limitation: “The method according to claim 1 wherein modifying the at least one of the one or more thresholds comprises modifying the at least one of the one or more thresholds based on a model uncertainty measure associated with the machine learning model” The limitation recites a mental process. A person can mentally or manually modify a threshold value based on a measure of uncertainty associated with a machine learning model. 
Thus, claim 9 recites abstract ideas rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 9 is not patent eligible. Regarding claim 10 depends on claim 9 thus the rejection of claim 9 is incorporated. Claim 10 recites the limitation: “the model uncertainty measure comprises a measure of expected calibration error in the machine learning model” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The limitation recites the model uncertainty measure comprises a measure of expected calibration error in the machine learning model without reciting how the measure of expected calibration error is obtained or utilized for the training of the machine learning model. “wherein the model uncertainty measure is based on a number of times the one or more parameters of the machine learning model have been trained” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The limitation recites the number of times the parameters of the machine learning model have been trained, and then determine an uncertainty measure based on the count, without reciting how the uncertainty measure is determined based on the number of times the parameters have been trained. There is no relation or rule or converting process transforming the number of times the parameters have been trained into a value representing the measure of uncertainty. Thus, claim 10 recites additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 10 is not patent eligible. Regarding claim 12 depends on claim 9 thus the rejection of claim 9 is incorporated. Claim 12 recites the limitation: “the model uncertainty measure comprises an average of a plurality of confidence measures associated with a plurality of predictions generated by the machine learning model” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The limitation recites “wherein the model uncertainty measure comprises a measure of accuracy of predictions by the machine learning model over a test training dataset, the test training dataset disjoint from the labeled training dataset” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The limitation recites Thus, claim 12 recites additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. 
Therefore, claim 12 is not patent eligible. Regarding claim 14 depends on claim 9 thus the rejection of claim 9 is incorporated. Claim 14 recites the limitation: “modifying at least one of the one or more thresholds a plurality of times to generate a plurality of modified thresholds” The limitation recites a mental process. A person can mentally or manually modify a threshold multiple times to obtain various threshold values. The process of modifying a value such as changing the value can be performed mentally or manually. “generating, for each of the modified thresholds, one or more labels for one or more samples from the unlabeled dataset; and ...” The limitation recites a mental process. A person can mentally generate a label for one or more samples from the unlabeled dataset for each modified threshold. The process of generating a label can be performed mentally in a human mind. “... training the one or more parameters of the machine learning model over at least the one or more labels and one or more samples” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The limitation recites the training of one or more parameters of the machine learning model over at least the one or more labels and one or more samples without reciting how the training is performed or if a new inventive machine learning model is configured to perform any inventive steps of training. Thus, claim 14 recites abstract ideas with additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 14 is not patent eligible. Regarding claim 15 depends on claim 14 thus the rejection of claim 14 is incorporated. Claim 15 recites the limitation: “The method according to claim 14 wherein modifying the at least one of the one or more thresholds a plurality of times comprises iteratively decreasing the uncertainty to which the at least one of the one or more thresholds correspond” The limitation recites a mental process A person can mentally modify one or more thresholds value by iteratively decreasing the uncertainty to which the at least one of the one or more thresholds correspond. The modifying a threshold value iteratively by decreasing another value associated with the threshold can be performed mentally in a human mind or manually. Thus, claim 15 recites abstract ideas rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 15 is not patent eligible. Regarding claim 16 depends on claim 15 thus the rejection of claim 15 is incorporated. Claim 16 recites the limitation: “The method according to claim 15 wherein iteratively decreasing the uncertainty comprises increasing a value of the at least one of the one or more thresholds from a starting value in a range of about 0% to 50% of a minimum confidence value to a final value in a range of about 50% to 100% of a maximum confidence value, wherein optionally the starting value comprises about 20% of the minimum confidence value and the final value comprises about 60% of the maximum confidence value” The limitation recites a mental process. 
A person can mentally or manually increase a value from a starting value in a range of about 0% to 50% of a minimum confidence value to a final value in a range of about 50% to 100% of a maximum confidence value, wherein optionally the starting value comprises about 20% of the minimum confidence value and the final value comprises about 60% of the maximum confidence value. A person’s mind is capable of performing the increasing of value by determining the changing from one value to another value such as considering the value change from 20 to 60. Regarding claim 18 depends on claim 15 thus the rejection of claim 15 is incorporated. Claim 18 recites the limitation: “... determining a value of the at least one of the one or more thresholds based on a sum of a minimum confidence value and the model uncertainty measure” The limitation recites a mental process and a mathematical process. A person can mentally determine a value of the one or more threshold based on mentally performing the sum of a minimum confidence value and the model uncertainty measure. The sum between two values further represents a mathematical process. Thus, claim 18 recites abstract ideas rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 18 is not patent eligible. Regarding claim 19 depends on claim 1 thus the rejection of claim 1 is incorporated. Claim 19 recites the limitation: “The method according to claim 1 wherein the machine learning model comprises an ensemble model, the ensemble model comprising a plurality of sub-models.” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The limitation recites the machine learning model comprises an ensemble model which comprise a plurality of sub-models without reciting how these sub-models of the ensemble model is configured or how they are associated with the semi supervised learning method that provide an inventive method. Thus, claim 19 recites additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 19 is not patent eligible. Regarding claim 20 depends on claim 19 thus the rejection of claim 19 is incorporated. Claim 20 recites the limitation: “wherein the confidence measure comprises a measure of a plurality of predictions generated by the plurality of sub-models based on the first sample; wherein optionally the measure of the plurality of predictions comprises at least one of: a variance and a standard deviation based on the plurality of predictions” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. 
The limitation recites the confidence measure is a measure of a plurality of predictions generated by the plurality of sub-models, which further comprises a variance and a standard deviation based on the plurality of predictions, without reciting the rationale or the configuration of the training of sub-models to obtain predictions or how the variance and the standard deviation is utilized with regard to the confidence measure, such that the semi-supervised learning method may be performed as an improvement or inventive concept based on these factors. Thus, claim 20 recites additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 20 is not patent eligible. Regarding claim 22 depends on claim 19 thus the rejection of claim 19 is incorporated. Claim 22 recites the limitation: “The method according to claim 18 wherein at least one of the plurality of sub- models comprises a neural network” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The limitation recites the sub-models comprises a neural network without reciting how these neural networks may be configured or associated with the semi-supervised learning method. Simply reciting a model comprising a neural network does not contribute to an inventive concept. Thus, claim 22 recites additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 22 is not patent eligible. Regarding claim 23 depends on claim 1 thus the rejection of claim 1 is incorporated. Claim 23 recites the limitation: “wherein the machine learning model comprises a Bayesian neural network, the Bayesian neural network operable to generate the first prediction comprising the confidence measure” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The limitation recites the machine learning model comprises a Bayesian Neural network operable to generate the prediction with confidence measure, without reciting the process to generate the prediction by the network or whether the network is associated with the active semi supervised learning method in which an improved or inventive neural network may be obtained. Simply recites using the Bayesian neural network for a functional step does not contribute to an inventive concept. “comprising generating the confidence measure for the first prediction by performing Monte Carlo dropout with the machine learning model based on the first sample” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. 
The limitation recites performing Monte Carlo dropout to generate the confidence measure, without reciting how the Monte Carlo dropout method is utilized in accordance with the semi supervised active learning method. Simply reciting using the Monte Carlo dropout for a functional step does not contribute to an inventive concept. Thus, claim 23 recites additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 23 is not patent eligible. Regarding claim 25 depends on claim 1 thus the rejection of claim 1 is incorporated. Claim 25 recites the limitation: “The method according to claim 1 wherein the machine learning model is operable to receive a first representation of a first chemical structure and a second representation of a second chemical structure as input and to produce a prediction of synergy between the first and second chemical structures as output” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. The limitation recites the machine learning model is operable to receive chemical structures as input and to produce a prediction of synergy between the first and second chemical structures as output without reciting how the process of input and output contribute to the semi supervised active learning method and does not recite how the neural network is configured or functional steps to produce the output from such input. Thus, claim 25 recites additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 25 is not patent eligible. Regarding claim 26 depends on claim 25 thus the rejection of claim 25 is incorporated. Claim 26 recites the limitation: “The method according to claim 25 wherein the unlabeled training dataset comprises a plurality of representations of chemical structures and the labeled training dataset comprises a plurality of sets of representations of chemical structures, each set of representations comprising a plurality of representations, the labeled training dataset further comprising, for each set of representations, and indication of synergy between the chemical structures of the set, wherein optionally, for each set of representations of chemical structures, the indication of synergy comprises an indication of synergistic pesticidal efficacy of a chemical composition comprising the chemical structures of the set against a target pest, wherein further optionally the method comprises constraining a number of queries to an oracle for ground-truth labels to less than a predetermined number” This additional element recites a mere instruction to apply an exception with a recitation of the words "apply it" (or an equivalent) as identified in MPEP 2106.05(f), and does not provide integration into a practical application or significantly more than the abstract idea. 
The limitation recites the unlabeled training set being a plurality of representations of chemical structures, indication of synergistic pesticidal efficacy between the chemical structures against a target pest, and a number of queries to an oracle for ground-truth labels to less than a predetermined number without reciting how these indication and training set may be associated with the semi supervised active learning method such that an inventive concept is obtained. The claim does not recite how these indications or training set correspond to the claimed method above, or if the indication being represented by the label. Thus, claim 26 recites additional elements rendered at a high level of generality resulting in claims that do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception. Therefore, claim 26 is not patent eligible. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-6, 19, 20, 22 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et.al (US 20220237520 A1) in view of Huszar et.al (US 20180240031 A1) Regarding claim 1, Wang teaches the 1st limitation “selecting a first sample from an unlabeled training dataset” (paragraph 23 “According to the one or more embodiments, a prediction model predicts predicted labels for the unknown data ... A subset of the unknown data with confident labels (i.e., labels having predicted confidences above a threshold value) is selected for addition to the known data”, and paragraph 35 “the multiple subsets of unknown data (118) include multiple confident labels (122)” Wang discloses a method of machine learning training for data augmentation. Within the disclosure, Wang discloses using a prediction model to predict labels for the unknown data, wherein the unknown data is “unknown” in the sense that known labels are not available or might not exist, suggesting an unlabeled training dataset in the claim. Wang discloses a subset of the unknown data which have already had their labels predicted is selected for addition to known data, suggesting that a subset of the unknown data may be selected for prediction by the machine learning model, which suggest the first sample selected from an unlabeled training dataset in the claim.) Wang teaches the 2nd limitation “generating a first label for the first sample” (paragraph 2 “The method includes training, using accepted data having known labels, an untrained prediction model to generate a trained prediction model. The method also includes generating, using the trained prediction model, predicted labels for unknown data”. Wang discloses the method include generating using the trained prediction model, predicted labels for a subset of unknown data.) 
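The Wang passages cited above describe what is commonly called self-training: predict labels and confidence scores for unknown data, extract the subset whose confidence clears a threshold, fold that subset into the known data, and retrain. The sketch below is a minimal illustration of that general flow, assuming numpy arrays and a scikit-learn-style classifier with fit and predict_proba; it is illustrative only, not code from Wang, and the threshold schedule is an assumption.

```python
# Illustrative self-training loop in the general shape the Office action ascribes
# to Wang; classifier choice, data layout, and threshold values are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training(X_known, y_known, X_unknown, threshold=0.9, rounds=2):
    """X_known/X_unknown: 2-D numpy feature arrays; y_known: 1-D label array."""
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_known, y_known)                    # train on the known/accepted data
        if len(X_unknown) == 0:
            break
        proba = model.predict_proba(X_unknown)         # predicted-label confidences
        confidence = proba.max(axis=1)
        predicted = model.classes_[proba.argmax(axis=1)]
        confident = confidence >= threshold            # compare scores to the threshold
        # Extract the confident subset and add it, with its labels, to the known data.
        X_known = np.vstack([X_known, X_unknown[confident]])
        y_known = np.concatenate([y_known, predicted[confident]])
        X_unknown = X_unknown[~confident]              # the "remaining unknown data"
        # A different threshold could be used on the next pass (cf. Wang para. 109).
        threshold = max(0.5, threshold - 0.05)
    return model, X_known, y_known, X_unknown
```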
Wang teaches the 3rd limitation “generating, by the machine learning model, a first prediction based on the first sample and one or more parameters of the machine learning model, the first prediction associated with a confidence measure” (paragraph 20 “For ease of notation, the following definitions are stated. A prediction model is a prediction machine learning model that includes a prediction algorithm and a number of parameters for the prediction algorithm ... A confidence model is a confidence machine learning model that includes a confidence prediction algorithm”, and paragraph 23 “According to the one or more embodiments, a prediction model predicts predicted labels for the unknown data. A separate confidence model then predicts confidences that the predicted labels are accurate” Wang discloses the prediction machine learning model and a number of parameters utilizing a prediction algorithm to generate a prediction of labels for a subset of unknown data. Wang also discloses a confidence machine learning model to predict a confidence score that the predicted label is accurate.) Wang teaches the 4th limitation “comparing the confidence measure to one or more thresholds to yield a labeling determination” (paragraph 23 “A separate confidence model then predicts confidences that the predicted labels are accurate. A subset of the unknown data with confident labels (i.e., labels having predicted confidences above a threshold value) is selected for addition to the known data”. Wang discloses the subset of the unknown data with labels having predicted confidence scores above a threshold value is selected, suggesting a comparison process of the predicted confidence score with one or more threshold value to determine if the label is appropriate for the subset of unknown data, such that the subset of the unknown data may be added to known data with labels.) Wang teaches the 6th limitation “training the one or more parameters of the machine learning model over a labeled training dataset, the labeled training dataset comprising the sample and the label” (paragraph 23 “The prediction model is then trained, or retrained, on a combination of the known data and the subset of unknown data”, and paragraph 44 “As indicated above, the process of training a machine learning model involves executing the machine learning algorithm to generate a prediction, comparing the prediction to a known result, determining a loss function adjusting parameters (such as weights) for the algorithm in a manner estimated to bring the next prediction closer to the known result, re-executing the machine learning algorithm with the adjusted parameters, and iterating these steps until convergence”. Wang discloses the process of training a neural network with regard to its parameter using a combination of the known data and the subset of the unknown data with labels.) Wang teaches the 7th limitation “modifying at least one of the one or more thresholds to yield a modified threshold” (paragraph 34 “The threshold value (115) may be set by a computer scientist or by another machine learning process”, and paragraph 109 “Again, the updated confidence scores (532) is applied to the threshold described above. 
However, a different threshold could be used.” Wang discloses the threshold value may be set by a computer scientist or by another machine learning process and a different threshold could be used, suggesting the modifying of the threshold value by a person ordinary skilled in the art such as a computer scientist or by another machine learning process to obtain a different threshold.) Wang teaches the 8th limitation “generating a second label for a second sample based on the modified threshold” (paragraph 36 “The unknown data (106) also includes a second subset of unknown data (124) among multiple second subsets of unknown data (126). The second subset of unknown data (124) is a new subset of the unknown data (106) generated during a further iteration of the training process described with respect to FIG. 2 and exemplified in FIG. 5. Each time the method is iterated, a new subset of the unknown data (106) is generated, thus leading to reference to the multiple second subsets of unknown data (126). In each case, the multiple second subsets of unknown data (126) has corresponding multiple confident labels (122), on a one for one basis”, paragraph 106 “The remaining unknown data (528) is the portion of the unknown data (508) that remains after the subset of unknown data (518) has been extracted from the unknown data (508)”, and paragraph 109 “Again, the updated confidence scores (532) is applied to the threshold described above. However, a different threshold could be used. The updated subset of unknown data (534) with their updated confident labels (536) is extracted from the remaining unknown data (528). The updated subset of unknown data (534) is that portion of the remaining unknown data (528) with updated confidence scores (532) that are above the threshold.” Wang discloses the training process may be perform in further iteration with a one or more second subset of the unknown data, wherein the second subset of the unknown data corresponds to the updated subset of unknown data extracted from the remaining unknown data within the references, such that a confident label may be predicted with a confidence score for the remaining unknown data using the prediction model. The confidence score of the label of the data within the remaining unknown data is then compared to a threshold, wherein the threshold may be the different threshold, which corresponds to the modified threshold. The whole process of prediction for remaining unknown data to obtain confident label associated with confidence score, then compare the score with the different threshold to extract updated subset of unknown data/second subset of unknown data is analogous to the generating a second label for a second sample based on the modified threshold, as claimed.) Wang does not teach the 5th limitation “selecting and performing at least one of a plurality of labeling techniques based on the labeling determination to yield the first label”. However, Huszar teaches this limitation (paragraph 30 “The modules in the active learning system 100 may also include a labeling user interface (UI) 130. The labeling user interface may be configured to present information about one or more informative objects 115 to a human rater, who provides a label 131 for the informative object”. Huszar discloses an active learning system that provide training for deep neural networks. Within the disclosure, Huszar discloses a labeling user interface (UI) that allow a human rater to provide a label for the data. 
A person ordinary skilled in that art would have been able to configure the incorporation of the labeling UI based on the teaching combination below, such that after the confidence model determine that the predicted label for unknown data is above a threshold, the unknown data may be configured through the labeling process via the UI by a user to assign the correct predicted label to the unknown data.) Before the effective filing date, it would have been obvious to a person ordinary skilled in the art to combine the teaching of the method of machine learning training for data augmentation by Wang with an active learning system that provide training for deep neural networks by Huszar. The motivation to do so is referred to in Huszar’s disclosure (paragraph 20 “The system 100 may be used to build a ... machine learning system in less time and with greatly reduced number of labeled examples. Because the systems and methods described result in a trained classifier (or other type of predictive model) with minimized input from a human user, the systems and methods are scalable and can be used to build deep neural classifiers where unsupervised learning is inapplicable or unavailable. For example, human-qualitative judgments/classifications cannot be determined by analysis of unlabeled data alone. Thus, deep learning systems have not previously been trained to output such judgments. For ease of discussion, the depiction of system 100 in FIG. 1 is described as a system for generating a classifier, which is one type of machine learning system. However, other configurations and applications may be used. For example, the machine learning system may predict a score for the input data, e.g. similarity score, quality score, or may provide any other decision, depending on how the training data is labeled”, and paragraph 30 “in some implementations, the labeling UI 130 may provide the same informative object 115 to several human raters and receive several potential labels for the informative object. The system 100 may aggregate the potential labels in some manner, e.g., majority vote, averaging, dropping low and high and then averaging, etc., to generate the label 131 for the object” Huszar discloses the benefit of the system, which provide a trained, scalable predictive model with minimized input from a human user, and the system may be used for prediction of a score over the unlabeled data, determining whether the plurality of predictions satisfy a metric, and training over the labeled training data that satisfy the metric, in which the described method is similar to the method by Wang. Huszar also discloses an UI for interaction with human operator, suggesting that label may be provided by expert, which guarantee the accuracy of data, thus allow for improvement in training of the neural network model. Therefore, a person ordinary skilled in the art would have been able to incorporate the teaching by Wang with the teaching by Huszar based on the similarity between both disclosures and for further improvement.) Regarding claim 2 depends on claim 1, thus the rejection of claim 1 is incorporated. Huszar teaches the limitation “The method according to claim 1 wherein the plurality of labeling techniques comprise semi-supervised learning and active learning” (paragraph 0005 “Semi-supervised learning is typically applied to solve problems where there is a partially labelled data set, for example, where only a subset of the data is labelled. 
Semi-supervised machine learning makes use of externally provided labels and objective functions as well as any implicit data relationships. Active learning is a special case of semi-supervised learning, in which the system queries a user or users to obtain additional data points and uses unlabeled data points to determine which additional data points to provide to the user for labeling.” Huszar discloses the active learning is a special case of semi-supervised learning in which the system queries a user to obtain additional data points from unlabeled data points, wherein the UI is used to query a human operator to provide the label.) Regarding claim 3 depends on claim 2, thus the rejection of claim 2 is incorporated. Wang teaches the limitation “comparing the confidence measure to one or more thresholds comprises determining whether the confidence measure is greater than a semi-supervised learning threshold” (paragraph 34 “The confidence score (112) and the multiple confidence scores (114) may be compared to a threshold value (115). The threshold value (115) is a number that reflects a judgement that a given confidence score (112) is high enough such that the portion of the unknown data (106) to which the predicted label (108) applies is considered trustworthily labeled.” Wang discloses comparing one or more confidence scores of a predicted label to a threshold value to determine the confidence of the prediction, wherein the score may be greater than the threshold value.) Wang teaches the limitation “selecting and performing at least one of a plurality of labeling techniques comprises, in response to determining that the confidence measure is greater than a semi-supervised learning threshold, assigning a pseudo-label to the first sample based on the prediction” (paragraph 122 “Thus, the one or more embodiments provide for data augmentation methods that identify and pseudo-label ... based on the confidence level of the estimated labels to mitigate sampling bias in the training data” Wang discloses assigning a pseudo-label to data based on the confidence level of the predicted label to mitigate sampling bias in the training data, wherein a person ordinary skilled in the art may configure the assignment of pseudo-label via the UI as disclosed by Huszar to assign a label indicating that the confidence score is greater than the threshold.) Regarding claim 4 depends on claim 3, thus the rejection of claim 3 is incorporated. Huszar teaches the limitation “The method according to claim 3 wherein selecting and performing at least one of a plurality of labeling techniques comprises, in response to determining that the confidence measure is less than or equal to the semi-supervised learning threshold, querying an oracle for a ground-truth label for the first sample” (paragraph 30 “The modules in the active learning system 100 may also include a labeling user interface (UI) 130. 
Claim 4 depends on claim 3; thus, the rejection of claim 3 is incorporated. Huszar teaches the limitation "The method according to claim 3 wherein selecting and performing at least one of a plurality of labeling techniques comprises, in response to determining that the confidence measure is less than or equal to the semi-supervised learning threshold, querying an oracle for a ground-truth label for the first sample" (paragraph 30: "The modules in the active learning system 100 may also include a labeling user interface (UI) 130. The labeling user interface may be configured to present information about one or more informative objects 115 to a human rater, who provides a label 131 for the informative object." Huszar discloses a human operator using the UI to provide a label for the data, wherein a person of ordinary skill in the art would have been able to configure the system such that, after the confidence score of the predicted label is compared to a threshold, if the confidence score is less than the threshold, the system uses the UI to ask a human rater to provide a correct label, which suggests a ground-truth label provided by a human rater acting as an oracle.)

Claim 5 depends on claim 3; thus, the rejection of claim 3 is incorporated. Wang teaches the limitation "comparing the confidence measure to one or more thresholds comprises determining whether the confidence measure is less than an active learning threshold, the active learning threshold less than the semi-supervised learning threshold" (paragraph 34: "The confidence score (112) and the multiple confidence scores (114) may be compared to a threshold value (115). The threshold value (115) is a number that reflects a judgement that a given confidence score (112) is high enough such that the portion of the unknown data (106) to which the predicted label (108) applies is considered trustworthily labeled." Wang discloses comparing one or more confidence scores of a predicted label to a threshold value to determine the confidence of the prediction, where the score may be less than the threshold value) and the limitation "selecting and performing at least one of a plurality of labeling techniques comprises, in response to determining that the confidence measure is less than an active learning threshold, assigning a pseudo-label to the first sample based on the prediction" (paragraph 122: "Thus, the one or more embodiments provide for data augmentation methods that identify and pseudo-label ... based on the confidence level of the estimated labels to mitigate sampling bias in the training data" Wang discloses assigning a pseudo-label to data based on the confidence level of the predicted label to mitigate sampling bias in the training data, wherein a person of ordinary skill in the art may configure the assignment of the pseudo-label via the UI disclosed by Huszar to assign a label when the confidence score is less than the threshold.)

Claim 6 depends on claim 3; thus, the rejection of claim 3 is incorporated. Wang teaches the limitation "assigning the pseudo-label comprises generating the pseudo-label for the first sample based on the prediction" (paragraph 122: "Thus, the one or more embodiments provide for data augmentation methods that identify and pseudo-label ... based on the confidence level of the estimated labels to mitigate sampling bias in the training data" Wang discloses assigning a pseudo-label to data based on the confidence level of the predicted label to mitigate sampling bias in the training data, wherein a person of ordinary skill in the art may configure the assignment of the pseudo-label via the UI as disclosed by Huszar.)
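Claims 3 and 4, as characterized above, route each unlabeled sample by comparing its prediction confidence to a semi-supervised learning threshold: a sample above the threshold receives a pseudo-label, while a sample at or below the threshold is sent to an oracle (e.g., a human rater via the labeling UI) for a ground-truth label, with claim 5 adding a lower active learning threshold. For illustration only, a minimal sketch of this routing follows; the function names and threshold value are hypothetical and are not code from the application or the cited references.

```python
# Illustrative sketch of the confidence-threshold routing discussed for
# claims 3 and 4 (hypothetical names and values; not from the application or
# the cited references).

def route_sample(prediction, confidence, ssl_threshold=0.9, query_oracle=None):
    """Decide how a sample gets its label, given the model's confidence."""
    if confidence > ssl_threshold:
        # Semi-supervised branch: trust the model and assign a pseudo-label.
        return ("pseudo_label", prediction)
    # Otherwise, ask the oracle (e.g., a human rater via a labeling UI)
    # for a ground-truth label.
    return ("oracle_label", query_oracle() if query_oracle else None)

# Example usage with hypothetical values:
print(route_sample("weed", 0.95))                               # pseudo-label
print(route_sample("weed", 0.40, query_oracle=lambda: "crop"))  # oracle label
```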
Claim 19 depends on claim 1; thus, the rejection of claim 1 is incorporated. Huszar teaches the limitation "The method according to claim 1 wherein the machine learning model comprises an ensemble model, the ensemble model comprising a plurality of sub-models" (paragraph 32: "Process 200 may begin with the active learning system initializing a committee having a plurality of committee members (205). Each of the committee members is a deep neural network. In some implementations, each of the committee members is trained ..." Huszar discloses the active learning system initializing a committee having a plurality of committee members, each a deep neural network, suggesting an ensemble model comprising a plurality of sub-models.)

Claim 20 depends on claim 19; thus, the rejection of claim 19 is incorporated. Wang teaches part of the limitation "The method according to claim 18 wherein the confidence measure comprises a measure of a plurality of predictions generated by the plurality of sub-models based on the first sample" (paragraph 33: "Similarly, multiple confidence scores (114) are assigned to the multiple predicted labels (110) on a one to one basis. The confidence score (112) is a prediction by a confidence model that the predicted label (108) correctly predicts the label for the portion of the unknown data." Wang discloses multiple confidence scores corresponding to the predicted labels, wherein a person of ordinary skill in the art would have been able to generate multiple confidence scores corresponding to multiple predicted labels generated by the deep neural networks, where each deep neural network may correspond to a committee member representing a sub-model as discussed for claim 19 above.) Huszar teaches part of the limitation "wherein optionally the measure of the plurality of predictions comprises at least one of: a variance and a standard deviation based on the plurality of predictions" (paragraph 28: "The label evaluator 140 may evaluate the diversity of the predictions to determine whether the predictions for the unlabeled object satisfy a diversity metric. The diversity metric measures how much variance exists in the predictions.", and paragraph 35: "the information about a given informative object may be presented to several human raters and the system may aggregate the labels in some manner (e.g., voting, averaging, weighted averaging, standard deviation, etc.)" Huszar discloses that the label evaluator may evaluate the diversity of the predictions against a diversity metric that measures how much variance exists in the predictions, and that predicted labels may be aggregated using a standard deviation technique.)

Claim 22 depends on claim 19; thus, the rejection of claim 18 is incorporated. Huszar teaches the limitation "The method according to claim 18 wherein at least one of the plurality of sub-models comprises a neural network" (paragraph 32: "Process 200 may begin with the active learning system initializing a committee having a plurality of committee members (205). Each of the committee members is a deep neural network." Huszar discloses that each committee member, which represents a sub-model, comprises a deep neural network.)
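As characterized for claims 19 and 20, the model is an ensemble ("committee") of sub-models, and the confidence measure may be the variance or standard deviation across the sub-models' predictions for a sample. For illustration only, a minimal sketch of such an ensemble-disagreement measure follows, assuming numeric per-sub-model prediction scores; the names are hypothetical and the code is not taken from the application or the cited references.

```python
from statistics import mean, pstdev, pvariance

# Illustrative sketch (hypothetical names; not from the application or the
# cited references): treat the spread of an ensemble's per-sub-model scores
# as the confidence measure, per the variance / standard deviation option.

def ensemble_confidence(predictions):
    """Summarize one sample's per-sub-model prediction scores."""
    return {
        "mean": mean(predictions),
        "variance": pvariance(predictions),  # spread of the committee's votes
        "std_dev": pstdev(predictions),      # same spread, in score units
    }

# Example usage: three hypothetical committee members scoring one sample.
print(ensemble_confidence([0.92, 0.88, 0.90]))  # low spread -> high agreement
print(ensemble_confidence([0.95, 0.40, 0.70]))  # high spread -> low agreement
```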
Claims 8-10, 12, 14-16, 18, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20220237520 A1) in view of Huszar et al. (US 20180240031 A1), further in view of Krishnan et al. (US 20210117760 A1).

Claim 8 depends on claim 1; thus, the rejection of claim 1 is incorporated. Wang/Huszar does not teach the limitation "The method according to claim 1 wherein selecting the first sample comprises random sampling". However, Krishnan teaches this limitation (paragraph 92: "for example, during training a group of randomly sampled examples (e.g., mini-batches) can be processed per iteration". Krishnan discloses methods and apparatus to obtain well-calibrated uncertainty in deep neural networks. Within the disclosure, Krishnan discloses that during training of the neural network the examples can be a group of randomly sampled examples, such as mini-batches, suggesting random sampling for selecting the first sample as claimed, wherein, based on the teaching combination below, the subset of the unknown data can be randomly sampled from the unknown data.)

Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine the teaching of Wang's method of machine learning training for data augmentation and Huszar's active learning system that provides training for deep neural networks with the teaching of Krishnan's methods and apparatus for obtaining well-calibrated uncertainty in deep neural networks. The motivation to do so is found in Krishnan's disclosure (paragraph 39: "In examples disclosed herein, optimization methods are used to leverage the relationship between accuracy and uncertainty as an anchor for uncertainty calibration. In the examples disclosed herein, a differentiable accuracy versus uncertainty loss function for training neural networks is developed to allow the model to learn to provide well-calibrated uncertainties in addition to improved accuracy. In some examples disclosed herein, the same methodology can be extended for post-hoc uncertainty calibration on pre-trained models. Using the examples disclosed herein, a state-of-the-art model calibration is developed", and paragraph 51: "In some examples, the uncertainty map generator 525 can generate a percentage and/or confidence score associated with a given prediction. For example, a well-calibrated model should be confident about its predictions when it is accurate and indicate high uncertainty when making inaccurate predictions. In some examples, the uncertainty map generator 525 can provide pre-training and post-training images, and/or predictions generated with and/or without training using the loss function to allow a user to compare/contrast the predictions". Krishnan discloses optimization methods that leverage the relationship between accuracy and uncertainty, wherein a differentiable accuracy-versus-uncertainty loss function for training neural networks is developed to allow the model to learn to provide well-calibrated uncertainties in addition to improved accuracy. The use of uncertainty and uncertainty calibration within a machine learning model improves the training of, and the decisions made by, the model. While the Wang/Huszar combination discloses training a machine learning model to determine whether a predicted label is correct for the data, the combination may further incorporate the teaching of Krishnan to improve the model's decision-making by incorporating uncertainty and an uncertainty loss with regard to the confidence score of each prediction, providing a more accurate determination so that the user may obtain the best result from the machine learning model.
Furthermore, incorporating Krishnan's random sampling to select the subset of the unknown data may provide a better sample, as random sampling minimizes bias and improves the generalizability of the data.)

Claim 9 depends on claim 1; thus, the rejection of claim 1 is incorporated. Krishnan teaches the limitation "The method according to claim 1 wherein modifying the at least one of the one or more thresholds comprises modifying the at least one of the one or more thresholds based on a model uncertainty measure associated with the machine learning model" (paragraph 41: "a well-calibrated model should be confident about its predictions when it is accurate and indicate high uncertainty when making inaccurate predictions", paragraph 54: "The threshold identifier 610 identifies an uncertainty threshold (e.g., as defined using uth in connection with Equations 5-8 and/or Equations 10-13). For example, the uncertainty threshold uth can be used to determine the level of certainty with a given model prediction. For example, a certain prediction can correspond to an uncertainty being less than the uncertainty threshold while an uncertain prediction can correspond to an uncertainty being greater or equal to the uncertainty threshold (e.g., uncertain prediction)", and paragraph 57: "The uncertainty identifier 625 determines a predictive uncertainty estimate (ui) for the model prediction". Krishnan discloses a threshold identifier that identifies a threshold corresponding to the uncertainty estimate determined by the uncertainty identifier for a model prediction, wherein, under the combination with Wang's confidence score, a prediction with a high confidence score may correspond to low prediction uncertainty and a prediction with a low confidence score to high uncertainty. A person of ordinary skill in the art may apply the teaching of Wang to obtain a modified confidence threshold by combining the uncertainty threshold with the confidence threshold, such that as a prediction result is compared to the confidence threshold, an uncertainty measure corresponding to the prediction result is compared with the uncertainty threshold to reaffirm the confidence in the prediction result.)

Claim 10 depends on claim 9; thus, the rejection of claim 9 is incorporated. Krishnan teaches the limitation "the model uncertainty measure comprises a measure of expected calibration error in the machine learning model" (paragraph 59: "For example, the output calculator 635 can provide data for ... model calibration error with respect to predictive uncertainty (UCE) based on data shift intensity, as well as any other assessment performed during model calibration evaluation, model confidence and uncertainty evaluation ... during training" Krishnan discloses that the output calculator can provide data for model calibration error with respect to predictive uncertainty during training.)
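Claim 10, as characterized above, ties the model uncertainty measure to an expected calibration error. For illustration only, the following is a minimal sketch of the standard binned expected calibration error (ECE) formulation, which weights each confidence bin's gap between accuracy and mean confidence by the bin's share of the samples; the names and example values are hypothetical, and this is not the calibration error computation of Krishnan or of the application.

```python
# Illustrative sketch of a binned expected calibration error (ECE):
# ECE = sum over bins of (bin size / n) * |bin accuracy - bin mean confidence|.
# Hypothetical names and values; not from Krishnan or the application.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Compute binned ECE from per-prediction confidences and correctness."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        bin_conf = sum(confidences[i] for i in idx) / len(idx)
        bin_acc = sum(1 for i in idx if correct[i]) / len(idx)
        ece += (len(idx) / n) * abs(bin_acc - bin_conf)
    return ece

# Example usage with hypothetical predictions:
confs = [0.95, 0.90, 0.60, 0.55, 0.30]
hits = [True, True, False, True, False]
print(round(expected_calibration_error(confs, hits, n_bins=5), 3))  # 0.12
```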
Claim 12 depends on claim 9; thus, the rejection of claim 9 is incorporated. Wang teaches the limitation "the model uncertainty measure comprises an average of a plurality of confidence measures associated with a plurality of predictions generated by the machine learning model" (paragraph 33: "Similarly, multiple confidence scores (114) are assigned to the multiple predicted labels (110) on a one to one basis. The confidence score (112) is a prediction by a confidence model that the predicted label (108) correctly predicts the label for the portion of the unknown data." Wang discloses multiple confidence scores corresponding to the predicted labels, wherein a person of ordinary skill in the art would have been able to average the confidence scores corresponding to the predicted labels to obtain a single confidence value reflecting the accuracy of the overall prediction as well as the uncertainty estimate in the one or more predictions of the machine learning model.)

Claim 14 depends on claim 9; thus, the rejection of claim 9 is incorporated. Wang teaches the limitation "modifying at least one of the one or more thresholds a plurality of times to generate a plurality of modified thresholds" (paragraph 34: "The threshold value (115) may be set by a computer scientist or by another machine learning process", and paragraph 109: "Again, the updated confidence scores (532) is applied to the threshold described above. However, a different threshold could be used." Wang discloses that the threshold value may be set by a computer scientist or by another machine learning process and that a different threshold could be used, suggesting modification of the threshold value by a person of ordinary skill in the art, such as a computer scientist, or by another machine learning process to obtain a plurality of modified thresholds, wherein the modification may be performed by incorporating another threshold, such as the uncertainty threshold discussed for claim 9 above.) Wang also teaches the limitation "generating, for each of the modified thresholds, one or more labels for one or more samples from the unlabeled dataset; and training the one or more parameters of the machine learning model over at least the one or more labels and one or more samples" (paragraph 20: "For ease of notation, the following definitions are stated. A prediction model is a prediction machine learning model that includes a prediction algorithm and a number of parameters for the prediction algorithm ... A confidence model is a confidence machine learning model that includes a confidence prediction algorithm", and paragraph 23: "According to the one or more embodiments, a prediction model predicts predicted labels for the unknown data. A separate confidence model then predicts confidences that the predicted labels are accurate. A subset of the unknown data with confident labels (i.e., labels having predicted confidences above a threshold value) ..." Wang discloses a prediction machine learning model, with a number of parameters and a prediction algorithm, that generates predicted labels for a subset of the unknown data. Wang also discloses a confidence machine learning model that predicts a confidence score that the predicted label is accurate, wherein the confidence score is compared with a threshold value to determine the confidence in the prediction, and wherein the threshold value may be modified by incorporating an uncertainty measure and an uncertainty threshold to reaffirm the accuracy of the prediction.)
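Claims 9, 12, and 14, as characterized above, modify a confidence threshold based on a model-level uncertainty measure (for example, an average of the model's confidence scores) and repeat labeling under each modified threshold. For illustration only, a minimal sketch of one such scheme follows; the uncertainty measure, update rule, names, and values are hypothetical and are not the method of the application or the cited references.

```python
from statistics import mean

# Illustrative sketch (hypothetical names/values; not from the application or
# the cited references): derive an uncertainty measure from average confidence
# and generate several modified thresholds from it.

def model_uncertainty(confidences):
    """One simple uncertainty measure: 1 minus the average confidence."""
    return 1.0 - mean(confidences)

def modified_thresholds(base_threshold, uncertainty, rounds=3, step=0.05):
    """Generate several thresholds, nudged upward in proportion to uncertainty."""
    return [min(1.0, base_threshold + r * step * uncertainty)
            for r in range(rounds)]

# Example usage with hypothetical confidence scores:
confs = [0.9, 0.7, 0.8, 0.6]
u = model_uncertainty(confs)                 # 0.25
for t in modified_thresholds(0.8, u):        # 0.8, 0.8125, 0.825
    accepted = [c for c in confs if c > t]   # samples kept as pseudo-labels
    print(round(t, 4), accepted)
```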
Claim 15 depends on claim 14; thus, the rejection of claim 14 is incorporated. Krishnan teaches the limitation "The method according to claim 14 wherein modifying the at least one of the one or more thresholds a plurality of times comprises iteratively decreasing the uncertainty to which the at least one of the one or more thresholds correspond" (paragraph 46: "In both examples, the uncertainty maps can include an example indicator 420, 470 of the level of uncertainty (e.g., low to high) associated with the predictions made by a given model. Over time, the model can be improved to reduce the level of uncertainty and increase the level of accuracy during classification" Krishnan discloses that the uncertainty map can include an example indicator of the level of uncertainty (e.g., low to high) associated with the predictions, such that over time the level of uncertainty may be reduced and the level of accuracy increased, which suggests that the confidence score in the prediction is increased and that the uncertainty and the confidence score may be compared with one or more thresholds to maintain the accuracy of the prediction.)

Claim 16 depends on claim 15; thus, the rejection of claim 15 is incorporated. Krishnan teaches the limitation "The method according to claim 15 wherein iteratively decreasing the uncertainty comprises increasing a value of the at least one of the one or more thresholds from a starting value in a range of about 0% to 50% of a minimum confidence value to a final value in a range of about 50% to 100% of a maximum confidence value, wherein optionally the starting value comprises about 20% of the minimum confidence value and the final value comprises about 60% of the maximum confidence value" (paragraph 46: "In both examples, the uncertainty maps can include an example indicator 420, 470 of the level of uncertainty (e.g., low to high) associated with the predictions made by a given model. Over time, the model can be improved to reduce the level of uncertainty and increase the level of accuracy during classification". Krishnan discloses that the uncertainty map can include an example indicator of the level of uncertainty (e.g., low to high) associated with the predictions, such that over time the level of uncertainty may be reduced and the level of accuracy increased, wherein a person of ordinary skill in the art can configure the system so that, as the uncertainty decreases and the accuracy increases, the confidence score in predicting the label increases, and that person can modify the threshold value in accordance with the confidence score to keep the confidence score accurate; for example, a computer scientist may increase the threshold from a value selected from the range 0 to 0.5 to a value selected from the range 0.5 to 1, such that the confidence score must be greater than a threshold value selected from the higher range for the predicted label to be considered correct.)
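Claim 16, as characterized above, recites raising the threshold across iterations from a starting value (for example, about 0.2) toward a higher final value (for example, about 0.6). For illustration only, a minimal sketch of such an increasing threshold schedule follows; the linear schedule, names, and values are hypothetical and are not taken from the application or the cited references.

```python
# Illustrative sketch of an increasing confidence-threshold schedule of the
# kind characterized for claim 16 (hypothetical names/values; not from the
# application or the cited references).

def threshold_schedule(start=0.2, final=0.6, rounds=5):
    """Linearly increase the confidence threshold from `start` to `final`."""
    if rounds == 1:
        return [final]
    step = (final - start) / (rounds - 1)
    return [start + i * step for i in range(rounds)]

# Example usage: five rounds from 0.2 up to 0.6.
print([round(t, 2) for t in threshold_schedule()])  # [0.2, 0.3, 0.4, 0.5, 0.6]
```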
Claim 18 depends on claim 15; thus, the rejection of claim 15 is incorporated. Krishnan teaches the limitation "The method according to claim 15 wherein iteratively decreasing the uncertainty comprises determining a value of the at least one of the one or more thresholds based on a sum of a minimum confidence value and the model uncertainty measure" (paragraph 43: "pi ... can be defined as the confidence (e.g., probability of predicted class ...)", and paragraph 109: "As previously described, a reliable model should be accurate when it is certain about its predictions and indicate high uncertainty when it is likely to be inaccurate. For example, conditions probabilities p(accurate|certain) and p(uncertain|inaccurate) can be used as model performance evaluation metrics for comparing the quality of uncertainty estimates obtained from different probabilistic methods". Krishnan discloses conditional probabilities p(accurate|certain) and p(uncertain|inaccurate) that can be used as model performance evaluation metrics, which suggests a threshold, wherein the probability p is the confidence in the prediction and may be combined with the uncertainty/certainty estimate. A person of ordinary skill in the art would have been able to combine the confidence in the prediction, such as a confidence score, with the uncertainty estimate through a summation.)

Claim 23 depends on claim 1; thus, the rejection of claim 1 is incorporated. Krishnan teaches the limitation "wherein the machine learning model comprises a Bayesian neural network, the Bayesian neural network operable to generate the first prediction comprising the confidence measure" (paragraph 42: "For example, methods disclosed herein can be used to train deep neural network classifiers (e.g., Bayesian and non-Bayesian) that result in models that are confident on accurate predictions and indicate high uncertainty when they are likely to be inaccurate." Krishnan discloses that the method may be applied to train deep neural networks, such as a Bayesian deep neural network, resulting in models that are confident on accurate predictions and indicate high uncertainty when they are likely to be inaccurate, wherein a person of ordinary skill in the art may incorporate the Bayesian deep neural network to perform prediction and training using the unlabeled and labeled unknown data as disclosed by Wang.)

Claims 25 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20220237520 A1) in view of Huszar et al. (US 20180240031 A1), further in view of Lambrinoudis et al. (US 20220406415 A1).

Claim 25 depends on claim 1; thus, the rejection of claim 1 is incorporated. Wang/Huszar does not teach the limitation "The method according to claim 1 wherein the machine learning model is operable to receive a first representation of a first chemical structure and a second representation of a second chemical structure as input and to produce a prediction of synergy between the first and second chemical structures as output". However, Lambrinoudis teaches this limitation (paragraph 11: "generating one or more predictions of a synergistic interaction between the pesticidal compound and the synergistic compound against one or more pests,", and paragraph 13: "generating a first encoded compound representation based on the first chemical feature of the pesticidal compound and generating a second encoded compound representation based on the second chemical feature of the synergistic compound and wherein generating the one or more predictions comprises generating the one or more predictions based on the first and second encoded compound representations" Lambrinoudis discloses systems and methods for synergistic pesticide screening. Within the disclosure, Lambrinoudis discloses first and second encoded compound representations of chemical features and generating a prediction of synergy between the first and second compound representations.)
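Claim 25, as characterized above, describes a model interface that accepts two chemical-structure representations and outputs a synergy prediction. For illustration of that interface only, a toy sketch follows; the count-based "encoding" and linear scorer are placeholders invented for this sketch and are not the encoding or model of the application or of Lambrinoudis.

```python
# Toy sketch of the pairwise interface characterized for claim 25: two encoded
# chemical-structure representations in, a synergy score out. The encoding and
# scoring below are hypothetical placeholders, not the applicant's or the
# cited reference's method.

def encode_structure(smiles):
    """Toy 'encoding': a tiny count-based feature vector from a SMILES string."""
    return [smiles.count(ch) for ch in ("C", "O", "N", "=", "(")]

def predict_synergy(repr_a, repr_b, weights=None):
    """Toy scorer over the concatenated pair of representations."""
    features = repr_a + repr_b
    weights = weights or [0.1] * len(features)
    score = sum(w * f for w, f in zip(weights, features))
    return min(1.0, max(0.0, score / 10.0))  # squash into [0, 1]

# Example usage with two hypothetical compound strings:
a = encode_structure("CC(=O)OC1=CC=CC=C1C(=O)O")
b = encode_structure("CCO")
print(predict_synergy(a, b))
```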
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine the teaching of Wang's method of machine learning training for data augmentation and Huszar's active learning system that provides training for deep neural networks with the teaching of Lambrinoudis's systems and methods for synergistic pesticide screening. The motivation to do so is found in Lambrinoudis's disclosure (paragraph 8: "There is thus a general desire for improved systems and methods for screening pesticidal compositions for synergistic efficacy.", paragraph 42: "The described system and methods can, in certain circumstances, efficiently and accurately predict which candidate pesticidal compositions are likely to have a synergistic interaction against one or more pests. The described system and methods may be used in addition to (e.g. prior to and/or in parallel with) or even instead of conventional laboratory-based screening. Subsequent testing of compositions predicted to be likely to lack the desired synergistic interaction may be reduced or eliminated, thereby potentially accelerating the discovery of synergistic pesticidal compositions", and paragraph 136: "Obtaining labelled candidate pesticidal composition representations can be costly; for instance, it may involve human experts performing laboratory experiments to confirm synergistic interactions for candidate pesticidal compositions. Such an active learning approach can, in suitable circumstances, reduce the quantity of laboratory experimentation necessary or desirable to train the model adequately." Lambrinoudis discloses a general desire for improved systems and methods for screening pesticidal compositions for synergistic efficacy, wherein a system employing a machine learning model is applied to efficiently and accurately predict synergistic interactions of pesticidal compositions against pests. By incorporating an effective and accurate active learning model, the user can reduce the cost of involving human experts to perform laboratory experiments. Therefore, the Wang/Huszar combination may further incorporate the teaching of Lambrinoudis as an example of how a machine learning model may benefit a user in various real-life applications.)
Claim 26 depends on claim 25; thus, the rejection of claim 25 is incorporated. Lambrinoudis teaches the limitation "The method according to claim 25 wherein the unlabeled training dataset comprises a plurality of representations of chemical structures and the labeled training dataset comprises a plurality of sets of representations of chemical structures, each set of representations comprising a plurality of representations, the labeled training dataset further comprising, for each set of representations, and indication of synergy between the chemical structures of the set, wherein optionally, for each set of representations of chemical structures, the indication of synergy comprises an indication of synergistic pesticidal efficacy of a chemical composition comprising the chemical structures of the set against a target pest, wherein further optionally the method comprises constraining a number of queries to an oracle for ground-truth labels to less than a predetermined number" (paragraph 72: "Each digital representation comprises a representation of the compounds' chemical structure and/or the compounds' chemical properties (which may include, for example, the compounds' known effects upon classes of organisms, such as pests, crop plants, etc.)", paragraph 45: "In some embodiments, a synergistic pesticidal composition screening system predicts a synergy metric describing a synergistic interaction exhibited by a candidate pesticidal composition. Any suitable synergy metric may be predicted", paragraph 46: "In some embodiments, a synergistic pesticidal composition screening system predicts a metric of improved pesticidal effectiveness of a candidate pesticidal composition upon one or more pest organisms. ... For example, the predicted quantity may be combined with an estimated per-unit cost of the candidate pesticidal composition (e.g. by multiplication) to determine a predicted cost per unit of efficacy", and paragraph 136: "Obtaining labelled candidate pesticidal composition representations can be costly; for instance, it may involve human experts performing laboratory experiments to confirm synergistic interactions for candidate pesticidal compositions. Such an active learning approach can, in suitable circumstances, reduce the quantity of laboratory experimentation necessary or desirable to train the model adequately". Lambrinoudis discloses the application of active learning together with encoded representations of chemical compounds, including the compounds' structures and properties, wherein a person of ordinary skill in the art may configure one or more encoded representations of chemical structures as the unlabeled data and the labeled data, with the label being the known effect of the compound against a target such as a pest. The machine learning model may provide a prediction of synergy between the representations of chemical structures, which may comprise a predicted cost per unit of efficacy. The system and method of Lambrinoudis may be utilized such that the testing and labeling of chemical structures is performed via the machine learning model, which reduces the involvement of human experts performing laboratory experiments, wherein a person of ordinary skill in the art may configure the threshold disclosed by Wang to indicate whether the predicted result is sufficient such that a human expert is not required.)
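The further optional limitation of claim 26 constrains the number of ground-truth queries to the oracle to less than a predetermined number. For illustration only, a minimal sketch of such a query budget inside a labeling pass follows; the names, thresholds, and toy predictor/oracle are hypothetical and are not the method of the application or the cited references.

```python
# Illustrative sketch of capping oracle (human expert) queries at a
# predetermined number, per the optional limitation of claim 26.
# Hypothetical names and values; not from the application or cited references.

def label_with_budget(samples, predict, query_oracle, threshold=0.8, max_queries=10):
    """Pseudo-label confident samples; query the oracle only while budget remains."""
    labeled, queries = [], 0
    for sample in samples:
        label, confidence = predict(sample)
        if confidence > threshold:
            labeled.append((sample, label))                 # pseudo-label branch
        elif queries < max_queries:
            labeled.append((sample, query_oracle(sample)))  # ground-truth label
            queries += 1
        # Otherwise the sample is left unlabeled: the query budget is spent.
    return labeled

# Example usage with a toy predictor and oracle, capping queries at 1:
toy_predict = lambda s: (s.upper(), 0.9 if len(s) > 3 else 0.3)
toy_oracle = lambda s: s.title()
print(label_with_budget(["weed", "ivy", "oak"], toy_predict, toy_oracle, max_queries=1))
```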
Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUY TU DIEP, whose telephone number is (703) 756-1738. The examiner can normally be reached M-F, 8-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DUY T DIEP/
Examiner, Art Unit 2123

/ALEXEY SHMATOV/
Supervisory Patent Examiner, Art Unit 2123

Prosecution Timeline

May 27, 2022
Application Filed
Jun 11, 2025
Non-Final Rejection — §101, §103
Dec 16, 2025
Response Filed
Jan 16, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579428
METHOD FOR INJECTING HUMAN KNOWLEDGE INTO AI MODELS
2y 5m to grant Granted Mar 17, 2026
Patent 12488223
FEDERATED LEARNING FOR TRAINING MACHINE LEARNING MODELS
2y 5m to grant Granted Dec 02, 2025
Patent 12412129
DISTRIBUTED SUPPORT VECTOR MACHINE PRIVACY-PRESERVING METHOD, SYSTEM, STORAGE MEDIUM AND APPLICATION
2y 5m to grant Granted Sep 09, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
25%
Grant Probability
30%
With Interview (+5.5%)
4y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
