DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s response, filed 10 October 2025, to the last office action has been entered and made of record.
The amendments to the specification and claims are acknowledged; they are supported by the original disclosure, and no new matter is added.
In response to the amendments to the specification, the amended abstract language has overcome the objection to the specification set forth in the previous Office action, and that objection has been withdrawn.
In response to the amendments to the claims, the amended language has overcome the rejection under 35 U.S.C. § 101 of the previous Office action, in which the claims were rejected as being directed to a judicial exception without significantly more, and that rejection has been withdrawn.
Independent claims 1, 6, and 7 are amended to recite the additional elements of “discriminate the input image into a class from among a plurality of classes based on the image features, and generate a class discriminative result comprising a class discriminative score for each of the plurality of classes by using a class discriminative model” and “generate a normal/abnormal discriminative result indicating a normal class likelihood by summing the class discriminative scores corresponding to a predetermined subset of the plurality of classes defined as normal classes”. These additional claim elements when considered in combination provide additional specific limitations other than what is well-understood, routine, conventional activity in the field and amount to “significantly more” features. See MPEP 2106.05(d).
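For illustration only, the recited summation over a predetermined subset of normal classes can be sketched as follows (the function name, example scores, and class indices are hypothetical and are not taken from the claims or the applied art):

```python
def normal_abnormal_score(class_scores, normal_class_indices):
    # Sum the per-class discriminative scores over the predetermined
    # subset of the plurality of classes defined as normal classes.
    return sum(class_scores[i] for i in normal_class_indices)

# Hypothetical 5-class discriminative result; classes 0 and 1 defined as normal.
scores = [0.40, 0.25, 0.15, 0.12, 0.08]
normal_likelihood = normal_abnormal_score(scores, [0, 1])  # approximately 0.65
```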
Amendments to the independent claims 1, 6, and 7 have necessitated a new ground of rejection over the applied prior art. Please see below for the updated interpretations and rejections.
Response to Arguments
Applicant’s arguments with respect to amended independent claims 1, 6, and 7 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 2020/0321118, effectively filed 2 April 2019), herein Kim, in view of Bakalo et al. (US 2020/0226368, effectively filed 15 January 2019), herein Bakalo, and Fujimori et al. (US 2019/0385016), herein Fujimori.
Regarding claim 1, Kim discloses a learning device comprising:
a memory storing instructions (see Kim [0111]-[0117], where memory storing programs to perform the disclosed teachings are disclosed); and
one or more processors (see Kim [0111]-[0117], where a processor is disclosed) configured to execute the instructions to:
extract image features from an input image by using a feature extraction model (see Kim [0041]-[0045], where a feature extraction layer of a neural network extracts common features of the two domains; see Kim [0054]-[0055], where the data sets of the first and second domain can be composed of images generated by different shooting methods);
discriminate the input image into a class from among a plurality of classes based on the image features, and generate a class discriminative result comprising a class discriminative score for each of the plurality of classes by using a class discriminative model (see Kim [0042]-[0049], where the output layer and class specific discriminators are trained using the common features extracted from the feature extraction layer, and the number of classes can be defined and designed differently according to the target task of the neural network, where if the target task is to determine the type of a disease or tumor, three or more classes indicating the type of each disease or tumor can be defined as the target classes of the neural network; see Kim [0065]-[0066], where the discriminators execute operations to discriminate domains for data sets of all classes; and see Kim [0081]-[0082], where the output layer is trained to execute the target task and outputs the probability that the input data set belongs to each class, e.g., a confidence score for each class);
calculate a class discriminative loss based on the class discriminative result (see Kim [0076]-[0082], where the output layer can be trained using errors calculated based on the difference between the class prediction values and ground truth labels);
discriminate a domain of the input image based on the image features and generate a domain discriminative result by using a domain discriminative model (see Kim [0073]-[0080], where the discriminators can obtain domain prediction values for feature data);
calculate a domain discriminative loss based on the domain discriminative result (see Kim [0073]-[0080], where the discriminators and feature extraction layer can be trained using errors calculated based on the difference between the domain prediction value and inverted labels); and
update parameters of the feature extraction model and the domain discriminative model based on the domain discriminative loss (see Kim [0073]-[0082], where the discriminators and feature extraction layer can be trained using errors calculated based on the difference between the domain prediction value and inverted labels, and adversarial learning can be performed between the first extraction layer and discriminator).
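For illustration only, the adversarial relationship mapped above, in which the feature extraction layer and the domain discriminator are updated in opposite directions with respect to the domain discriminative loss, can be sketched as follows (the scalar gradient-descent form and all names are hypothetical and are not taken from Kim):

```python
def adversarial_step(feature_params, domain_params,
                     domain_grads_wrt_features, domain_grads_wrt_domain, lr=0.1):
    # The domain discriminator descends the domain discriminative loss,
    # while the feature extractor ascends it (gradient reversal), so that
    # the extracted features become domain-invariant.
    new_domain = [p - lr * g for p, g in zip(domain_params, domain_grads_wrt_domain)]
    new_feature = [p + lr * g for p, g in zip(feature_params, domain_grads_wrt_features)]
    return new_feature, new_domain
```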
While Kim teaches that the neural network can be trained to execute a specific task, e.g., diagnosis of abnormality or identification of lesions, for digital mammography and breast tomosynthesis images (see Kim [0055]), Kim does not explicitly disclose generating a normal/abnormal discriminative result indicating a normal class likelihood by summing the class discriminative scores corresponding to a predetermined subset of the plurality of classes defined as normal classes.
Bakalo teaches, in a related and pertinent method for classifying image patches as benign or malignant (see Bakalo Abstract), that medical images are split into patches, a classification branch of a neural network computes local probabilities of malignant, benign, and normal for each patch, a detection branch ranks the patches according to their relevance to malignant and benign findings in the image, and the outputs are aggregated with multiplication and summing steps, where a summing of the classification probabilities and patch rankings for the malignant and benign conditions is performed over the patches to form a global malignant condition probability and a global benign condition probability (see Bakalo [0020]-[0021]); mammogram datasets with BI-RADS findings are used, which include the BI-RADS 1, 2, 3, 4, and 5 categories, where BI-RADS 2 and 3 are included in a benign class and BI-RADS 4 and 5 are included in the malignant class (see Bakalo [0029]).
At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Bakalo to the teachings of Kim, such that a global benign condition probability can be formed for determining whether a medical image patch includes a normal or benign condition or a malignant condition, based on the summation of classification probabilities for target classes, where the target classes are based on BI-RADS category findings and a subset of the BI-RADS categories is defined as a benign or normal class.
This modification is rationalized as an application of a known technique to a known device ready for improvement to yield predictable results.
In this instance, Kim discloses a base learning device which extracts features from images with different domains, where a neural network with discriminators is used to determine class probabilities and discriminate domains of datasets for target classes and can be trained using errors calculated from the determined prediction values.
Bakalo teaches a known technique for classifying image patches as benign or malignant, where a neural network computes local probabilities of malignant, benign, and normal for each patch and the outputs are aggregated with multiplication and summing steps, where a summing of the classification probabilities and patch rankings for the malignant and benign conditions is performed over the patches to form a global malignant condition probability and a global benign condition probability, and where mammogram datasets with BI-RADS findings are used, which include the BI-RADS 1, 2, 3, 4, and 5 categories, where BI-RADS 2 and 3 are included in a benign class and BI-RADS 4 and 5 are included in the malignant class.
One of ordinary skill in the art would have recognized that applying Bakalo’s technique would allow for the forming of a global benign condition probability to determine whether a medical image patch includes a normal or benign condition or a malignant condition, based on a summation of classification probabilities for target classes, where the target classes are based on BI-RADS category findings and a subset of the BI-RADS categories is defined as a benign or normal class, predictably leading to an improved learning device which further performs normal and benign or malignant data classification.
While Bakalo teaches computing area under the curve measures for performance assessment of the proposed methods (see Bakalo [0030]-[0032]); Kim and Bakalo do not explicitly disclose calculate an AUC loss based on the normal/abnormal discriminative result; and update parameters of the feature extraction model, the class discriminative model, and a normal/abnormal discriminative model based on the class discriminative loss and the AUC loss.
Fujimori teaches, in a related and pertinent device and method for updating a recognition model (see Fujimori Abstract), that a normal model classifier defines a normal range using only normal data and determines whether the data of the determination target is included in the normal range, so as to identify normal data and other, abnormal data, and calculates the recognition score of the data of the determination target based on a feature amount extracted from the data of the determination target (see Fujimori [0032]); an area under the curve (AUC) can be calculated as an evaluated value of the recognition performance of the current normal model classifier, and the classification model is updated according to the calculated AUC value (see Fujimori [0063]).
At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Fujimori to the teachings of Kim and Bakalo, such that the normal and benign or malignant classification of Kim and Bakalo are performed based on a determined normal range to identify normal and abnormal data and the classification models are evaluated and updated using AUC metrics.
This modification is rationalized as an application of a known technique to a known device ready for improvement to yield predictable results.
In this instance, Kim and Bakalo disclose a base learning device which extracts features from images with different domains, where a neural network with discriminators is used to determine class probabilities and discriminate domains of datasets for target classes and can be trained using errors calculated from the determined prediction values, and which further forms a global benign condition probability to determine whether a medical image patch includes a normal or benign condition or a malignant condition, based on a summation of classification probabilities for target classes, where the target classes are based on BI-RADS category findings and a subset of the BI-RADS categories is defined as a benign or normal class.
Fujimori teaches a known technique for updating a recognition model, where a normal model classifier determines whether the data of the determination target is included in the normal range to identify normal data and other, abnormal data, and calculates the recognition score of the data of the determination target based on a feature amount extracted from the data of the determination target, where an AUC can be calculated as an evaluated value for evaluation of the recognition performance of the current normal model classifier, and where the classification model is updated according to the calculated AUC value.
One of ordinary skill in the art would have recognized that applying Fujimori’s technique would allow the normal and benign or malignant classification of Kim and Bakalo to be performed based on a determined normal range to identify normal and abnormal data, with the classification models evaluated and updated using AUC metrics, predictably leading to an improved learning device which further performs abnormal data classification.
Regarding claim 2, please see the above rejection of claim 1. Kim, Bakalo, and Fujimori disclose the learning device according to claim 1, wherein the normal/abnormal discriminative model includes the same parameters as those of the class discriminative model (see Kim [0042]-[0049], where the output layer and class specific discriminators are trained using the common features extracted from the feature extraction layer, and the number of classes can be defined and designed differently according to the target task of the neural network, where if the target task is to determine the type of a disease or tumor, three or more classes indicating the type of each disease or tumor can be defined as the target classes of the neural network; see Bakalo [0020]-[0021], where the classification branch of a neural network computes local probabilities of malignant, benign, and normal for each patch and the outputs are aggregated with multiplication and summing steps, where a summing of the classification probabilities and patch rankings for the malignant and benign conditions is performed over the patches to form a global malignant condition probability and a global benign condition probability; where the combined teachings suggest performing the normal and abnormal determination with the class specific discriminators of Kim).
Regarding claim 3, please see the above rejection of claim 1. Kim, Bakalo, and Fujimori disclose the learning device according to claim 1, wherein the class discriminative model classifies the input image into three or more classes (see Kim [0046]-[0049], where if the target task is to determine a type of disease or tumor, three or more classes indicating the type of each disease or tumor can be defined).
Regarding claim 4, please see the above rejection of claim 1. Kim, Bakalo, and Fujimori disclose the learning device according to claim 1, wherein the normal/abnormal discriminative result indicates a normal class likelihood for each input image, and the processor calculates, as the AUC loss, a difference between a normal/abnormal discriminative result calculated for an input image of the normal class and a normal/abnormal discriminative result calculated for an input image of the abnormal class, by using correct normal/abnormal labels indicating respective input images (see Fujimori [0080]-[0081], where a difference value between recognition scores between pieces of data can be calculated to grasp the classification between normality and abnormality).
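For illustration only, an AUC loss of this form, comparing normal/abnormal discriminative results between input images of the normal class and input images of the abnormal class, can be sketched as follows (the pairwise hinge surrogate and its margin are assumptions for illustration and are not taken from Fujimori):

```python
def pairwise_auc_loss(normal_scores, abnormal_scores, margin=1.0):
    # For each (normal, abnormal) image pair, penalize cases where the
    # abnormal image's normal-class likelihood is not below the normal
    # image's likelihood by at least `margin` (a differentiable AUC surrogate).
    pairs = [(n, a) for n in normal_scores for a in abnormal_scores]
    return sum(max(0.0, margin + a - n) for n, a in pairs) / len(pairs)
```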
Regarding claim 5, please see the above rejection of claim 4. Kim, Bakalo, and Fujimori disclose the learning device according to claim 4, wherein the processor updates parameters of the feature extraction model, the class discriminative model, and the normal/abnormal discriminative model so as to reduce the AUC loss (see Fujimori [0063], where, when the calculated AUC is x or more, the classification model is updated, and when the evaluated value is less than x, the processing finishes without updating the classification model; suggesting that the classification models are updated to reduce the AUC below x).
Regarding claim 6, it recites a method performing the device functions of claim 1. Kim, Bakalo, and Fujimori teach the method by performing the device functions of claim 1. Please see above for detailed claim analysis.
Please see the above rejection of claim 1, as the rationale to combine the teachings of Kim, Bakalo, and Fujimori is similar, mutatis mutandis.
Regarding claim 7, it recites a non-transitory computer-readable recording medium storing a program, the program causing a computer to perform the device functions of claim 1. Kim, Bakalo, and Fujimori teach a non-transitory computer-readable recording medium storing a program, the program causing a computer to perform the device functions of claim 1 (see Kim [0111]-[0117], where memory storing programs and executed by a processor to perform the disclosed teachings are disclosed). Please see above for detailed claim analysis.
Please see the above rejection of claim 1, as the rationale to combine the teachings of Kim, Bakalo, and Fujimori is similar, mutatis mutandis.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY WING HO CHOI whose telephone number is (571)270-3814. The examiner can normally be reached 9:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VINCENT RUDOLPH can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIMOTHY CHOI/Examiner, Art Unit 2671
/VINCENT RUDOLPH/Supervisory Patent Examiner, Art Unit 2671