Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
REMARKS
Applicant’s amendment to claim 11 necessitated the withdrawal of the objection to claim 11.
Applicant’s amendment to claim 8 necessitated the withdrawal of the 35 U.S.C. § 112, second paragraph, rejection of claim 8.
On page 7, Applicant argues “training a plurality of AI models…is widely regarded as computationally intensive task that is beyond the capabilities of performance in the human mind.” Applicant’s argument is not persuasive because the steps described in the specification ([0007]) for training an AI model could reasonably be performed in the human mind to achieve the same expected results.
On page 7, Applicant “disagrees that the judicial exception is not integrated into a practical application. Claim 1 is explicitly directed to a computational method for training of an AI model for later deployment.” Applicant’s argument is not persuasive because the additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic component (MPEP 2106.05(f)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
On pages 7-10, Applicant argues the claims provide a fundamentally different approach to standard model training and arose from Applicant's direct experience in trying to generate a robust model for classification of embryos for implantation in an IVF cycle. Applicant’s argument is not persuasive because the claims generally link the use of the judicial exception to a particular technological environment or field of use. For example, the specification describes the use of artificial intelligence (AI) models and generally links their use to the field of IVF ([00136]). The courts have found that generally linking the use of a judicial exception to a particular technological environment or field of use is not enough to qualify as "significantly more" when recited in a claim with a judicial exception. See Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010) and Parker v. Flook, 437 U.S. 584, 588-90, 198 USPQ 193, 197-98 (1978).
On pages 10-11, Applicant argues Andoni describes a genetic algorithm with variable epoch sizes, where after a first epoch a determination is made whether to vary the epoch size based on a fitness function, and the method then determines whether to generate another set of models; that the fitness functions and the method of selecting a model are performed using accuracy-based metrics; and that there is no disclosure or suggestion of calculating confidence metrics on a common validation dataset and using them to select a model. Applicant’s argument is not persuasive because, when the limitation of “a common validation dataset over a plurality of epochs” is given its broadest reasonable interpretation (BRI), Andoni as cited ([0006] and [0030]) describes the argued limitation. Further, Andoni discloses that a fitness value may be determined based on models generated by a prior epoch; that the fitness value may include an average fitness value, a highest fitness value, a median fitness value, a most common fitness value, or another fitness value; and that, if the fitness value satisfies a first fitness threshold, the system 100 may determine to reduce the epoch size as compared to prior epochs ([0063]). Therefore, Andoni under BRI describes the argued limitation.
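For illustration only, the following minimal sketch (Python; not drawn from Andoni's actual implementation — the function name, threshold values, and scaling factors are hypothetical) shows the kind of epoch-size adjustment logic paragraph [0063] describes, where a fitness value summarizing a prior epoch determines whether the epoch size is reduced, kept, or increased.

```python
from statistics import mean

def next_epoch_size(prior_fitness_values, current_size,
                    first_threshold=0.9, second_threshold=0.7):
    """Adjust the number of models generated in the next epoch based on a
    fitness value summarizing a prior epoch (cf. Andoni [0063]).
    Thresholds and scaling factors here are hypothetical."""
    fitness = mean(prior_fitness_values)     # could instead be max, median, or mode
    if fitness >= first_threshold:
        return max(1, current_size // 2)     # fitness is high: reduce epoch size
    if fitness >= second_threshold:
        return current_size                  # acceptable results: keep epoch size
    return current_size * 2                  # fitness is low: increase epoch size
```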
On page 12, Applicant argues Andoni does not disclose “the best confidence metric and the associated epoch number is stored.” Applicant’s argument is not persuasive because Andoni discloses that a determination to modify the epoch size may be based on the convergence metric 142 associated with at least one epoch prior to the particular epoch, and that the association of epoch number(s) to epoch size(s) may be indicated by configuration data; for example, the configuration data may be stored at a memory of the system 100 (e.g., during initialization of the genetic algorithm 110 or during updating of the genetic algorithm 110) or based on user input ([0061]). Therefore, Andoni under BRI describes the argued limitation.
On pages 12-14, Applicant argues Andoni fails to disclose the limitation of “a common validation dataset over a plurality of epochs.” Applicant’s argument is not persuasive because, when the limitation of “a common validation dataset over a plurality of epochs” is given its broadest reasonable interpretation (BRI), Andoni as cited ([0006] and [0030]) describes the argued limitation. Further, Andoni discloses that a fitness value may be determined based on models generated by a prior epoch; that the fitness value may include an average fitness value, a highest fitness value, a median fitness value, a most common fitness value, or another fitness value; and that, if the fitness value satisfies a first fitness threshold, the system 100 may determine to reduce the epoch size as compared to prior epochs ([0063]). Therefore, Andoni under BRI describes the argued limitation.
On pages 14-15, Applicant argues Andoni fails to disclose that the at least one confidence metric is calculated at each epoch, as recited in claim 2. Applicant’s argument is not persuasive because Andoni discloses the argued limitation as cited. Further, Andoni discloses that the number of models generated in each epoch may be determined based on a convergence metric associated with one or more previous epochs ([0030]).
On page 15, Applicant argues Andoni fails to disclose generating an ensemble AI model using at least two of the plurality of trained AI models based on the stored best confidence metrics, where the ensemble model uses a confidence based voting strategy, as recited in claim 3. Applicant’s argument is not persuasive. It is noted the limitation of “voting…” has been attributed its customary and ordinary definition of “indication of a choice between two or more candidates or courses of action,” and the limitation of “ensemble” has been reasonably interpreted as “a group of items…”; the cited “combined…” models have therefore been reasonably interpreted as an “ensemble…”. Accordingly, Andoni’s disclosure that, during a fifth stage 500 of operation, the “overall elite” models 460, 462, and 464 may be genetically combined to generate the trainable model 122 (for example, genetically combining models may include crossover operations in which a portion of one model is added to a portion of another model) ([0049]), and that an overall fittest model of the last executed epoch may be selected and output as representing a neural network that best models the input data set 102 ([0082]), describes the argued limitation.
On pages 15-16, Applicant argues “combining models to create a new elite model is not an ensemble approach” and that, accordingly, claim 4 is not anticipated by Andoni. Applicant’s argument is not persuasive as discussed above.
On page 16, Applicant argues claim 5 sets forth that the "common ensemble validation dataset is the common validation dataset," that the most common fitness values are not a "common validation dataset" and are unrelated to the use of a common validation dataset for assessing a model, and that, accordingly, claim 5 is not anticipated by Andoni. Applicant’s argument is not persuasive because Andoni under BRI describes the argued limitation as cited.
On page 16, Applicant argues the Examiner cites paragraph [0063], which discusses a range of fitness metrics; that fitness metrics are essentially accuracy metrics, not confidence metrics (for example, a median is similar to an average, and a most common fitness value is very similar to an average fitness value); that there is no suggestion of using a confidence-based metric; and that, accordingly, claim 7 is not anticipated by Andoni. Applicant’s argument is not persuasive because Andoni under BRI describes the argued limitation as cited.
On page 16, Applicant argues paragraph [0006] of Andoni only requires a comparison of genetic models and does not suggest the use of confidence metrics, and that, accordingly, claim 9 is not anticipated by Andoni. Applicant’s argument is not persuasive because Andoni under BRI describes the argued limitation as cited.
On pages 16-17, Applicant argues claims 1 and 10 require that a model is tested on the common validation dataset at the end of an epoch and the confidence metric is reported (stored), with log-loss used in claim 10; that the confidence metric is then used to select the best model and allows reviewing the training process to select a different model than the final model (based on confidence metrics); and that Indarapu thus fails to suggest the use of log-loss as a selection metric, such that claim 10 is not obvious. Applicant’s argument is not persuasive because Indarapu discloses that the log-loss function is an objective function measuring the accuracy of the classifier model, and that the lower the value of the log-loss function, the better the accuracy of the classifier model ([0040]), as cited. Andoni as modified, taken as a whole under BRI, describes the argued limitation as cited.
On page 17, Applicant argues claims 2-16 are not taught, suggested, or rendered obvious by the cited references, as set forth above, and by virtue of their dependence on claim 1. Applicant’s argument is not persuasive because claim 1 is rendered obvious over Andoni as modified.
On pages 17-18, Applicant’s argument directed to claims 17 and 18 is not persuasive as addressed above.
Claims 1-18, filed November 14, 2025, are examined on the merits.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1:
The claims recite a method and a system which are statutory categories of invention.
Step 2A Prong One:
Claim 1 recites “training a plurality of Artificial Intelligence (AI) models” and “calculating a confidence metric” at a high level of generality such that they could practically be performed in the human mind. These limitations, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of generic computer components. As drafted, these limitations are processes that can be performed as a mental process (that is, observation, evaluation, judgment, opinion).
Claims 17 and 18 are directed to a system comprising the same abstract idea as claim 1. These claims are similarly rejected under the same rationale as claim 1, supra.
Step 2A Prong Two:
The judicial exception is not integrated into a practical application. In particular, the claims recite additional elements of “one or more processors, one or more memories, and a communications interface”, where the claims further recite generic elements of the method or system. The “one or more processors, one or more memories, and a communications interface” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic component (MPEP 2106.05(f)). The limitations of “selecting…” and “deploying the AI model…” amount to insignificant extra-solution activity of selecting a particular data source or type of data to be manipulated for collection, analysis, and display. See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “one or more processors, one or more memories, and a communications interface” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (MPEP 2106.05(f)). The limitations of “selecting…” and “deploying the AI model…” amount to insignificant extra-solution activity of selecting a particular data source or type of data to be manipulated for collection, analysis, and display. See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016).
Thus, taken alone, the individual elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Viewed as an ordered combination, the limitations add nothing that is not already present when the elements are considered individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology.
Claim 2 recites wherein the at least one confidence metric is calculated at each epoch. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 3 recites wherein generating an AI model comprises generating an ensemble AI model using at least two of the plurality of trained AI models based on the stored best confidence metrics, and the ensemble model uses a confidence based voting strategy. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 4 recites selecting at least two of the plurality of trained AI models based on the stored best confidence metric; generating a plurality of distinct candidate ensemble models wherein each candidate ensemble model combines the results of the selected at least two of the plurality of trained AI models according to a confidence based voting strategy; calculating the confidence metric for each candidate ensemble model applied to a common ensemble validation dataset; selecting a candidate ensemble model from the plurality of distinct candidate ensemble models and calculating a confidence metric for the selected candidate ensemble model applied to a blind test set. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 5 recites the common ensemble validation dataset is the common validation dataset. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 6 recites the common ensemble validation dataset is the common validation dataset. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 7 recites wherein the confidence based voting strategy is selected from the group consisting of maximum confidence, mean confidence, majority-mean confidence, majority-max confidence, median confidence, or weighted mean confidence. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 8 recites wherein generating an AI model comprises generating a student AI model using a distillation method to train the student model using at least two of the plurality of trained AI models using at least one confidence metric. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 9 recites wherein selecting at least one of the plurality of trained AI models based on the stored best confidence metric comprises: selecting at least two of the plurality of trained AI models, comparing each of the at least two of the plurality of trained AI models using a confidence based metric, and selecting the best trained AI models based on the comparison. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 10 recites wherein, at least one confidence metric comprises one or more of Log loss, combined class Log loss, combined data-source Log loss, combined class and data-source Log loss. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 11 recites wherein a plurality of assessment metrics are calculated and are selected from the group consisting of accuracy, Mean class accuracy, sensitivity, specificity, a confusion matrix, Sensitivity-to-specificity ratio, precision, negative predictive value, balanced accuracy, Log loss, combined class Log loss, combined data-source Log loss, combined class and data-source Log loss, tangent score, bounded tangent score, per-class ratio of tangent score vs Log Loss, Sigmoid score, epoch number, mean of square error (MSE), root MSE, mean of average error, mean average precision (mAP), confidence score, Area-Under-the-Curve (AUC) threshold, Receiver Operating Characteristic (ROC) curve threshold, Precision-Recall curve. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 12 recites wherein the plurality of assessment metrics comprises a primary metric and at least one secondary metric, wherein the primary metric is a confidence metric, and the at least one secondary metric are used as tiebreaker metrics. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 13 recites wherein the plurality of AI models comprise a plurality of distinct model configurations, wherein each model configuration comprises a model type, a model architecture, and one or more pre-processing methods. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 14 recites wherein the one or more pre-processing methods comprises segmentation, and the plurality of AI models comprises at least one AI model applied to unsegmented images, and at least one AI model applied to segmented images. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 15 recites wherein the one or more pre-processing methods comprises one or more computer vision pre-processing methods. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim 16 recites wherein the validation dataset is a healthcare dataset comprising a plurality of healthcare images. These limitations further narrow the abstract idea or extra-solution activity, but are nonetheless part of the abstract idea identified in claim 1. They also do not amount to significantly more than the abstract idea. The claims are similarly rejected under the same rationale as claim 1, supra.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 7, 9, 13, 17, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Andoni et al. (US 2019/0073591 A1, provided in the IDS filed October 7, 2022).
Claim 1, Andoni discloses a computational method for generating an Artificial Intelligence (AI) model, the method comprising:
training a plurality of Artificial Intelligence (AI) models using a common validation dataset over a plurality of epochs, wherein during training of each model, at least one confidence metric is calculated at one or more epochs, and, for each model, the best confidence metric value over the plurality of epochs ([0006], e.g. the best performing models of an epoch may be selected for reproduction to generate a trainable model. The trainable model may be trained using backpropagation to generate a trained model. When the trained model is available, the trained model may be re-inserted into the genetic algorithm for continued evolution), and the associated epoch number at the best confidence metric is stored ([0030], e.g. number of models generated in each epoch may be determined based on a convergence metric associated with one or more previous epochs. The convergence metric may include an epoch number, a fitness-based metric, an improvement metric, a stagnation metric, or some other metric based on one or more models of the one or more previous epochs);
generating an AI model comprising:
selecting at least one of the plurality of trained AI models based on the stored best confidence metric ([0006], e.g. the best performing models of an epoch may be selected for reproduction to generate a trainable model);
calculating a confidence metric for the selected at least one trained AI model applied to a blind test set ([0025], e.g. the fitness function 140 may be an objective function that can be used to compare the models of the input set 120. In some examples, the fitness function 140 is based on a frequency and/or magnitude of errors produced by testing a model on the input data set 102…if a particular neural network correctly predicted the value of B for nine of the ten rows, then a relatively simple fitness function (e.g., the fitness function 140) may assign the corresponding model a fitness value of 9/10=0.9); and
deploying the AI model ([0065], e.g. if the stagnation metric satisfies the threshold, the number of models generated during the current epoch may be increased in order to introduce additional models with additional/different traits to attempt to overcome the stagnation) if the best confidence metric exceeds an acceptance threshold ([0034], e.g. a value satisfies a threshold when the value is greater than or equal to a threshold value. In other aspects, a value may satisfy a threshold when the value is greater than (e.g., exceeds) the threshold value, when the value is less than (e.g., fails to exceed) the threshold value, or is less than or equal to the threshold value).
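For orientation only, the following sketch (Python; all names, callables, and the acceptance threshold are hypothetical, and this is neither Applicant's nor Andoni's implementation) illustrates the per-epoch bookkeeping the claim language describes: each model's best confidence metric and the epoch at which it occurred are stored, the best model is selected and evaluated on a blind test set, and it is deployed only if the best metric exceeds an acceptance threshold.

```python
def train_and_select(models, train_one_epoch, confidence_fn,
                     validation_set, blind_test_set,
                     epochs=10, acceptance_threshold=0.8):
    """Track each model's best confidence metric (and associated epoch number)
    on a common validation dataset, then select, blind-test, and conditionally
    deploy the best model. All names and values are illustrative assumptions."""
    best = {}  # model index -> (best confidence metric, epoch number)
    for epoch in range(epochs):
        for i, model in enumerate(models):
            train_one_epoch(model)                         # one epoch of training
            conf = confidence_fn(model, validation_set)    # e.g. negative log loss
            if i not in best or conf > best[i][0]:
                best[i] = (conf, epoch)                    # store best metric + epoch
    winner = max(best, key=lambda i: best[i][0])           # select on stored best metric
    blind_conf = confidence_fn(models[winner], blind_test_set)
    if best[winner][0] > acceptance_threshold:
        return models[winner], blind_conf                  # deploy the selected model
    return None, blind_conf                                # below threshold: do not deploy
```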
Claim 2, Andoni discloses wherein the at least one confidence metric is calculated at each epoch. ([0030], e.g. number of models generated in each epoch may be determined based on a convergence metric associated with one or more previous epochs. The convergence metric may include an epoch number, a fitness-based metric, an improvement metric, a stagnation metric, or some other metric based on one or more models of the one or more previous epochs, as further described herein).
Claim 3, Andoni discloses wherein generating an AI model comprises generating an ensemble AI model using at least two of the plurality of trained AI models based on the stored best confidence metrics, and the ensemble model uses a confidence based voting strategy ([0049], e.g. during a fifth stage 500 of operation, the “overall elite” models 460, 462, and 464 may be genetically combined to generate the trainable model 122. For example, genetically combining models may include crossover operations in which a portion of one model is added to a portion of another model, and [0082], e.g. an overall fittest model of the last executed epoch may be selected and output as representing a neural network that best models the input data set 102).
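Purely as an illustration of the crossover operation paragraph [0049] describes (treating a model as a flat list of parameters is a simplifying assumption; Andoni combines network topologies and weights), a single-point crossover might look like:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: a portion of one model's parameters is combined
    with a portion of another's (cf. Andoni [0049]). Representing each model as
    a flat list of parameters is an illustrative simplification."""
    point = random.randrange(1, len(parent_a))   # split point, avoiding empty halves
    return parent_a[:point] + parent_b[point:]
```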
Claim 4, Andoni discloses wherein generating an ensemble AI model comprises selecting at least two of the plurality of trained AI models based on the stored best confidence metric;
generating a plurality of distinct candidate ensemble models wherein each candidate ensemble model combines the results of the selected at least two of the plurality of trained AI models according to a confidence based voting strategy ([0048], e.g. the fittest models of each “elite species” may be identified. The fittest models overall may also be identified, and [0049], e.g. the “overall elite” models 460, 462, and 464 may be genetically combined to generate the trainable model 122);
calculating the confidence metric ([0024], e.g. the fitness data 240 may include a fitness value that is determined based on evaluating the fitness function 140 with respect to the model 200) for each candidate ensemble model applied to a common ensemble validation dataset ([0048], e.g. the fittest models of each “elite species” may be identified. The fittest models overall may also be identified);
selecting a candidate ensemble model from the plurality of distinct candidate ensemble models and calculating a confidence metric for the selected candidate ensemble model applied to a blind test set ([0049], e.g. during a fifth stage 500 of operation, the “overall elite” models 460, 462, and 464 may be genetically combined to generate the trainable model 122, and [0006], e.g. selection (e.g., identifying the best performing neural networks via testing)).
Claim 5, Andoni discloses wherein the common ensemble validation dataset is the common validation dataset ([0064], e.g. most common fitness values).
Claim 7, Andoni discloses wherein the confidence based voting strategy is selected from the group consisting of maximum confidence, mean confidence, majority-mean confidence, majority-max confidence, median confidence, or weighted mean confidence ([0063], e.g. fitness value may include an average fitness value, a highest fitness value, a median fitness value, a most common fitness value, or another fitness value).
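For illustration only, a minimal sketch of how some of the voting strategies named in claim 7 might combine per-member confidence scores (the combination rules shown are assumptions; neither the claim language nor Andoni defines them this way):

```python
from statistics import mean, median

def ensemble_vote(member_confidences, strategy="mean confidence", weights=None):
    """Combine per-member confidence scores for one candidate outcome under
    several of the strategies named in claim 7. Illustrative only."""
    if strategy == "maximum confidence":
        return max(member_confidences)
    if strategy == "mean confidence":
        return mean(member_confidences)
    if strategy == "median confidence":
        return median(member_confidences)
    if strategy == "weighted mean confidence":
        total = sum(w * c for w, c in zip(weights, member_confidences))
        return total / sum(weights)
    raise ValueError(f"unsupported strategy: {strategy}")
```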
Claim 9, Andoni discloses selecting at least one of the plurality of trained AI models based on the stored best confidence metric comprises: selecting at least two of the plurality of trained AI models, comparing each of the at least two of the plurality of trained AI models using a confidence based metric, and selecting the best trained AI models based on the comparison ([0006], e.g. selection (e.g., identifying the best performing neural networks via testing). In addition, the best performing models of an epoch may be selected for reproduction to generate a trainable model. The trainable model may be trained using backpropagation to generate a trained model. When the trained model is available, the trained model may be re-inserted into the genetic algorithm for continued evolution. Training a model that is generated by breeding the best performing population members of an epoch may serve to reinforce desired “genetic traits” (e.g., neural network topology, activation functions, connection weights, etc.), and introducing the trained model back into the genetic algorithm may lead the genetic algorithm to converge to an acceptably accurate solution (e.g., neural network) faster, for example because desired “genetic traits” are available for inheritance in later epochs of the genetic algorithm).
Claim 13, Andoni discloses wherein the plurality of AI models comprise a plurality of distinct model configurations, wherein each model configuration comprises a model type, a model architecture, and one or more pre-processing methods ([0019], e.g. adding additional models having topologies or traits associated with improvements in fitness and by decreasing the amount of models generated or evolved when the topologies or traits are not associated with sufficient improvements in fitness, which may improve the quality (e.g., fitness) of models output by the model building process).
Claims 17 and 18 are directed to a system comprising the same steps as claim 1. Andoni discloses a system (page 2, [0017], e.g. Figure 1 system 100) for implementing the above cited method. These claims are similarly rejected under the same rationale as claim 1, supra.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Andoni et al. (US 2019/0073591 A1), as applied to claims 1-5, 7, 9, 13, 17, and 18 above, in view of Radosavovic (Data Distillation: Towards Omni-Supervised Learning, 2018).
Claim 8, Andoni discloses the claimed invention except for the limitation of a distillation method to train the model. Radosavovic discloses a distillation method to train the model (page 2, column 2, Section 3, e.g. proposing data distillation, a general method for omni-supervised learning that distills knowledge from unlabeled data without the requirement of training a large set of models).
Radosavovic discloses that the new knowledge generated from unlabeled data can be used to improve the model (page 3, column 1). One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated by Radosavovic to improve the model of Andoni. Therefore, it would have been obvious to one of ordinary skill in the art to use the method of Andoni with the distillation method of Radosavovic. The benefit would be to improve the model.
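For orientation only, a schematic of data distillation as Radosavovic describes it at a high level (a teacher's predictions on multiple transformed copies of unlabeled data are aggregated into pseudo-labels used to train a student); every callable here is a hypothetical placeholder, not an API from the paper:

```python
def data_distillation(teacher, student, unlabeled_images,
                      transforms, predict_fn, train_step_fn):
    """Schematic of data distillation (Radosavovic, 2018): ensemble the teacher's
    predictions over data transformations of each unlabeled image, aggregate them
    into a pseudo-label, and train the student on it. Placeholders throughout."""
    for image in unlabeled_images:
        preds = [predict_fn(teacher, t(image)) for t in transforms]
        pseudo_label = sum(preds) / len(preds)        # simple average as aggregation
        train_step_fn(student, image, pseudo_label)   # one student update step
    return student
```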
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Andoni et al. (US 2019/0073591 A1), as applied to claims 1-5, 7, 9, 13, 17, and 18 above, in view of Indarapu et al. (Indarapu hereafter, US 2017/0286997 A1).
Claim 10, Andoni discloses the claimed invention except for the limitation that at least one confidence metric comprises one or more of Log loss. Indarapu discloses training wherein at least one confidence metric comprises Log loss ([0036], e.g. since the log-loss function 510 depends on positive and negative instances, the ML trainer can use the log-loss function 510 as a training algorithm for training the classifier model if the positive and negative instances are certain).
Indarapu discloses that the log-loss function is an objective function measuring the accuracy of the classifier model, and that the lower the value of the log-loss function, the better the accuracy of the classifier model ([0040]). One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated by Indarapu to improve the model of Andoni. Therefore, it would have been obvious to one of ordinary skill in the art to use the method of Andoni with the Log loss metric of Indarapu. The benefit would be to improve the accuracy of the classifier model.
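For reference, a minimal computation of binary log loss consistent with Indarapu's characterization that lower values indicate a more accurate classifier (the clipping constant is an implementation convenience, not drawn from Indarapu):

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary log loss: lower is better. y_true holds 0/1 labels and y_pred
    holds predicted probabilities of the positive class."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)                        # avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)
```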
Claim 11, Andoni as modified discloses wherein a plurality of assessment metrics are calculated and are selected from the group consisting of accuracy, Mean class accuracy, sensitivity, specificity, a confusion matrix, Sensitivity-to-specificity ratio, precision, negative predictive value, balanced accuracy, Log loss (Indarapu, [0036], e.g. Since the log-loss function 510 depends on positive and negative instances, the ML trainer can use the log-loss function 510 as a training algorithm for training the classifier model if the positive and negative instances are certain), combined class Log loss, combined data-source Log loss, combined class and data-source Log loss, tangent score, bounded tangent score, per-class ratio of tangent score vs Log Loss, Sigmoid score, epoch number, mean of square error (MSE), root MSE, mean of average error, mean average precision (mAP), confidence score, Area-Under-the-Curve (AUC) threshold, Receiver Operating Characteristic (ROC) curve threshold, Precision-Recall curve.
Claim 12, Andoni as modified discloses wherein the plurality of assessment metrics comprises a primary metric and at least one secondary metric, wherein the primary metric is a confidence metric, and the at least one secondary metric are used as tiebreaker metrics (Andoni, [0063], e.g. if the fitness value is between the first threshold and the second threshold, the genetic algorithm 110 may be producing acceptable results for the amount of processing resources used by the system 100. If the fitness value fails to satisfy both the first and second thresholds, the system 100 may determine to increase the epoch size as compared to the prior epoch).
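As an illustration of what a primary confidence metric with secondary tiebreaker metrics could look like in practice (the larger-is-better ordering convention is an assumption, not drawn from the claim or the references):

```python
def rank_models(results):
    """Rank candidate models by a primary confidence metric, breaking ties with a
    secondary metric. `results` maps a model identifier to a (primary, secondary)
    tuple where larger values are assumed better for both metrics."""
    return sorted(results, key=lambda m: (results[m][0], results[m][1]), reverse=True)
```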
Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Andoni et al. (US 2019/0073591 A1), as applied to claims 1-5, 7, 9, 13, 17, and 18 above, in view of Chen et al. (Chen hereafter, US 2020/0320769 A1).
Claim 14, Andoni discloses the claimed invention except for the limitation wherein the one or more pre-processing methods comprises segmentation, and the plurality of AI models comprises at least one AI model applied to unsegmented images, and at least one AI model applied to segmented images. Chen discloses wherein the one or more pre-processing methods comprises segmentation, and the plurality of AI models comprises at least one AI model applied to unsegmented images, and at least one AI model applied to segmented images ([0185]-[0187], e.g. in the context of the prediction of garment attributes, the image data used for model training can be in the format of: unsegmented mannequin photos of the garment, either in a single frontal view, or in multiple distinct camera views; segmented garment texture sprites from the mannequin photos).
Chen discloses an invention to improve the capability and generality of visual feature extraction and hence enhance the accuracy of classification or regression ([0007]). One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated by Chen to improve the method of Andoni. Therefore, it would have been obvious to one of ordinary skill in the art to use the method of Andoni with the segmentation of Chen. The benefit would be to improve the capability and generality of visual feature extraction.
Claim 15, Andoni as modified discloses wherein the one or more pre-processing methods comprises one or more computer vision pre-processing methods (Chen, [0007], e.g. improve the capability and generality of visual feature extraction and hence enhance the accuracy of classification or regression).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Andoni et al. (US 2019/0073591 A1), as applied to claims 1-5, 7, 9, 13, 17, and 18 above, in view of Otte et al. (Otte hereafter, US 11,514,289 B1).
Claim 16, Andoni discloses the claimed invention except for the limitation of a plurality of healthcare images. Otte discloses a plurality of healthcare images (column 4, lines 44-49, e.g. training samples can correspond to samples having measured properties of the sample (e.g., genomic data and other subject data, such as images or health records), as well as known classifications/labels (e.g., phenotypes or treatments) for the subject).
Otte discloses an improvement that addresses the problems of the prior art by providing apparatuses for generating and using machine learning models using genetic data (column 1, lines 29-34). One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated by Otte to improve the method of Andoni. Therefore, it would have been obvious to one of ordinary skill in the art to use the method of Andoni with the healthcare images of Otte. The benefit would be to address the problems of the prior art.
PERTINENT PRIOR ART
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Siristatidis et al. (Artificial Intelligence in IVF: A Need, 2011) discloses that this has been attributed to the experience required to train an ANN, the danger of overtraining the system and trapping in local minima, together with the instability of some neural network models as predictive tools. Thus, small changes in the training data set may produce very different models, and consequently different performance when applied to new data, yielding totally different results. This instability causes the generalization performance of some ANN architectures for a particular task to vary considerably, being dependent on the pre-chosen data used, which carry the extra bias of their retrospective nature. In an effort to reduce the pitfalls of a single system, ANN ensemble techniques are beginning to be adopted (page 179).
CONCLUSION
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Patent applicants with problems or questions regarding electronic images that can be viewed in the Patent Application Information Retrieval system (PAIR) can now contact the USPTO's Patent Electronic Business Center (Patent EBC) for assistance. Representatives are available to answer your questions daily from 6 am to midnight (EST). The toll free number is (866) 217-9197. When calling please have your application serial or patent number, the type of document you are having an image problem with, the number of pages and the specific nature of the problem. The Patent Electronic Business Center will notify applicants of the resolution of the problem within 5-7 business days. Applicants can also check PAIR to confirm that the problem has been corrected. The USPTO's Patent Electronic Business Center is a complete service center supporting all patent business on the Internet. The USPTO's PAIR system provides Internet-based access to patent application status and history information. It also enables applicants to view the scanned images of their own application file folder(s) as well as general patent information available to the public.
For all other customer support, please call the USPTO Call Center (UCC) at 800-786-9199. The USPTO's official fax number is 571-272-8300.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Cheyne D. Ly, whose telephone number is (571) 272-0716. The examiner can normally be reached Monday through Friday from 8 AM to 4 PM ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neveen Abel-Jalil, can be reached on 571-270-0474.
/Cheyne D Ly/
Primary Examiner, Art Unit 2152
2/10/2026