Prosecution Insights
Last updated: April 19, 2026
Application No. 18/510,672

MULTI-LABEL CLASSIFICATION METHOD FOR MEDICAL IMAGE

Non-Final OA — §103, §112
Filed: Nov 16, 2023
Examiner: SARKAR, SHIVANGI
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: HTC Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC average)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Typical Timeline: 2y 9m average prosecution
Career History: 7 total applications across all art units; 7 currently pending

Statute-Specific Performance

§101: 15.0% (-25.0% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 0 resolved cases

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-20 are currently pending in the application filed November 16, 2023.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 02/26/2025 and 07/29/2024 have been considered by the Examiner.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the features of claims 4, 6, and 17 must be shown or the features canceled from the claims:

- Claim 4 recites M abnormal features to which each image is subject and N abnormal features indicated by the partial input label, with M greater than N; however, the drawings show no example or description of the relationship between the M abnormal features and the N abnormal features.
- Claim 6 recites probability values for abnormal features and a difficulty estimation function, which are not described in the drawings.
- Claim 17 recites probability values for abnormal features and a difficulty estimation function, which are not described in the drawings.

No new matter should be entered. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations that do not use the word "means" (or "step") are not being so interpreted, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that use the word "means" and are, thus, being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Such claim limitations are:

A. "storage unit" in claim 13 and its dependent claims, which per MPEP § 2181 recites a generic placeholder modified by functional language, where the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function, invoking interpretation under § 112(f); in light of the specification, this limitation lacks support under § 112(a) and (b).

B. "processing unit" in claim 13 and its dependent claims, described in paragraph [0015] and implemented on hardware disclosed in paragraph [0021] (e.g., "In some embodiments, the processing unit 240 can be a processor, a graphic processor, an application specific integrated circuit (ASIC) or any equivalent processing circuit.").

Claim Rejections - 35 USC § 112

Regarding § 112(a): The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Section IV of MPEP § 2181 (DETERMINING WHETHER 35 U.S.C. 112(a) OR PRE-AIA 35 U.S.C. 112, FIRST PARAGRAPH, SUPPORT EXISTS) reads: "When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under section 112(a). See MPEP § 2163.03, subsection VI. Examiners should further consider whether the disclosure contains sufficient information regarding the subject matter of the claims as to enable one skilled in the pertinent art to make and use the full scope of the claimed invention in compliance with the enablement requirement of section 112(a). See MPEP § 2161.01, subsection III, and MPEP § 2164.08."

"Storage unit" and derivations thereof raise concerns with discerning the type of storage unit and how the program code is being stored. The storage unit additionally lacks support in the specification for the corresponding structure, material, or acts, and the specification does not disclose to one of ordinary skill in the art how to make and use the components of the storage unit. Thus, the device or material performing the functionality, as well as the claimed functionality itself, lack written description.

Regarding § 112(b): The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Section II of MPEP § 2181 (DESCRIPTION NECESSARY TO SUPPORT A CLAIM LIMITATION WHICH INVOKES 35 U.S.C. 112(f) OR PRE-AIA 35 U.S.C. 112, SIXTH PARAGRAPH), subsection A (The Corresponding Structure Must Be Disclosed in the Specification Itself in a Way That One Skilled in the Art Will Understand What Structure Will Perform the Recited Function), provides that the proper test for meeting the definiteness requirement is that the corresponding structure (or material or acts) of a means- (or step-) plus-function limitation must be disclosed in the specification itself in a way that one skilled in the art will understand what structure (or material or acts) will perform the recited function. See Atmel Corp. v. Information Storage Devices, Inc., 198 F.3d 1374, 1381, 53 USPQ2d 1225, 1230 (Fed. Cir. 1999).

As can be seen from the citations of the specification, and because the written description fails to contain sufficient information regarding the subject matter of the claims to enable one skilled in the pertinent art to make and use the full scope of the claimed invention, the specification fails to set forth the corresponding structure, material, or acts in compliance with 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, and the claim limitation cannot "be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof." "Storage unit" additionally lacks support in the specification for the corresponding structure, material, or acts, and the specification does not disclose to one of ordinary skill in the art sufficient corresponding structure performing the entire claimed function for making and using the components of the unit.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Jadhav (US 11928186 B2), Nam (KR 102115534 B1), and Wei ("Learn like a Pathologist: Curriculum Learning by Annotator Agreement for Histopathology Image Classification").

Regarding Claim 1, Jadhav teaches: a multi-label classification method (Jadhav, Col 8 Line 25: "The present invention … method"); obtaining an initial dataset comprising medical images (Jadhav, Col 5 Line 18: "… medical image data that are indicative of particular structures and/or abnormalities in the subjects of the medical imaging") and partial input labels, the partial input labels (Jadhav, Col 5 Line 17: "… annotated (labeled) medical image data") annotating a labeled part of abnormal features on the medical images; and training a first multi-label classification model (Jadhav, Col 4 Line 38: "… the DL or ML computer models may be a multilabel classification DL or ML computer model") with the initial dataset (Jadhav, Col 4 Line 10: "Such DL and ML computer models are trained on curated training sets of input data and are tested on curated sets of testing data.").

Jadhav fails to teach: estimating difficulty levels of the medical images in the initial dataset based on predictions generated by the first multi-label classification model; and dividing the initial dataset based on the difficulty levels of the medical images into at least a first subset and a second subset, wherein the second subset is estimated to have a higher difficulty level compared to the first subset.

Nam teaches: estimating difficulty levels of the medical images in the initial dataset (Nam, [0031]: "… learning image set 11 to which the label is given") based on predictions generated by the first multi-label classification model (Nam, [0035]: "In step S120, a set of learning images belonging to the abnormality class is classified based on the abnormality detection difficulty. For example, the learning image set may be classified into a low difficulty image set and a high difficulty image set"); and dividing the initial dataset based on the difficulty levels of the medical images into at least a first subset (Nam, [0048]: "… second image set 36") and a second subset (Nam, [0047]: "… first image set 35"), wherein the second subset is estimated to have a higher difficulty level compared to the first subset (Nam, [0046]: "… images in which the abnormality score of the detection model 33 is greater than or equal to the reference value may be classified as the first image set 35 and images that are less than the reference value may be classified as the second image set 36.").

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav with Nam. The motivation for the combination is to be able to divide the initial dataset based on difficulty levels (Nam, [0047]: "As another example, the learning image set may be classified based on the stage of progression of the lesion. That is, images including lesions in an initial stage of progression may be classified as high difficulty images").

Nam fails to teach: training a second multi-label classification model with the first subset during a first curriculum learning round; training the second multi-label classification model with the first subset and the second subset during a second curriculum learning round; and generating, based on the second multi-label classification model, predicted labels annotated on the medical images about each of the abnormal features.

Wei teaches: training (Wei, Page 4 Paragraph 2: "For our training schedule, we train our network on progressively harder images in four stages: • Stage 1: Very easy images only") a second multi-label classification model with the first subset during a first curriculum learning round (Wei, Page 4 Paragraph 2: "Stage 1: Very easy images only"); training (Wei, Page 4 Paragraph 2: "… • Stage 2: Very easy + easy images") the second multi-label classification model with the first subset and the second subset during a second curriculum learning round (Wei, Page 4 Col 1 Paragraph 2: "Stage 2: Very easy + easy images"); and generating, based on the second multi-label classification model, predicted labels annotated on the medical images about each of the abnormal features (Wei, Page 7 Col 1 Paragraph 2: "Specifically, we frame the predictions of each model as the annotations …").

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav and Nam with Wei. The motivation for the combination is to be able to conduct multiple curriculum learning rounds with subsets of varying difficulty, as well as to generate predicted labels (Wei, Fig. 1). [Figure 1 of Wei is reproduced in the original action.]
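The core of the Claim 1 dispute is the two-round curriculum schedule (easy subset first, then easy and hard subsets together). For orientation only, here is a minimal PyTorch sketch of that schedule, assuming the dataset has already been split by estimated difficulty; train_one_round, easy_set, and hard_set are illustrative names, not drawn from the application or the cited art.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def train_one_round(model, dataset, loss_fn, epochs=1, lr=1e-4):
    """One curriculum round: ordinary supervised training on the given subset."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

def curriculum_train(model, easy_set, hard_set, loss_fn):
    # Round 1: the "first subset" (easier images) only.
    train_one_round(model, easy_set, loss_fn)
    # Round 2: first and second subsets together, mirroring Wei's
    # "Stage 2: Very easy + easy images" staging quoted above.
    train_one_round(model, ConcatDataset([easy_set, hard_set]), loss_fn)
    return model
```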
Regarding Claim 13, Jadhav teaches: a storage unit, configured to store computer-executable instructions (Jadhav, Col 8 Line 33: "The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing."); and a processing unit, coupled with the storage unit, the processing unit configured to execute the computer-executable instructions to implement a first multi-label classification model and a second multi-label classification model (Jadhav, Col 8 Line 25: "The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention."), the processing unit being configured to: obtain an initial dataset comprising medical images (Jadhav, Col 5 Line 18: "… medical image data that are indicative of particular structures and/or abnormalities in the subjects of the medical imaging") and partial input labels, the partial input labels (Jadhav, Col 5 Line 17: "… annotated (labeled) medical image data") annotating a labeled part of abnormal features on the medical images; and train the first multi-label classification model (Jadhav, Col 4 Line 38: "… the DL or ML computer models may be a multilabel classification DL or ML computer model") with the initial dataset (Jadhav, Col 4 Line 10: "Such DL and ML computer models are trained on curated training sets of input data and are tested on curated sets of testing data.").

Jadhav fails to teach: estimate difficulty levels of the medical images in the initial dataset based on predictions generated by the first multi-label classification model; and divide the initial dataset based on the difficulty levels of the medical images into at least a first subset and a second subset, wherein the second subset is estimated to have a higher difficulty level compared to the first subset.

Nam teaches: estimate difficulty levels of the medical images in the initial dataset (Nam, [0031]: "… learning image set 11 to which the label is given") based on predictions generated by the first multi-label classification model (Nam, [0035]: "In step S120, a set of learning images belonging to the abnormality class is classified based on the abnormality detection difficulty. For example, the learning image set may be classified into a low difficulty image set and a high difficulty image set"); and divide the initial dataset based on the difficulty levels of the medical images into at least a first subset (Nam, [0048]: "… second image set 36") and a second subset (Nam, [0047]: "… first image set 35"), wherein the second subset is estimated to have a higher difficulty level compared to the first subset (Nam, [0046]: "… images in which the abnormality score of the detection model 33 is greater than or equal to the reference value may be classified as the first image set 35 and images that are less than the reference value may be classified as the second image set 36.").

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav with Nam. The motivation for the combination is to be able to divide the initial dataset based on difficulty levels (Nam, [0047]: "As another example, the learning image set may be classified based on the stage of progression of the lesion. That is, images including lesions in an initial stage of progression may be classified as high difficulty images").

Nam fails to teach: train the second multi-label classification model with the first subset during a first curriculum learning round; train the second multi-label classification model with the first subset and the second subset during a second curriculum learning round; and utilize the second multi-label classification model to generate predicted labels annotated on the medical images about each of the abnormal features.

Wei teaches: train (Wei, Page 4 Paragraph 2: "For our training schedule, we train our network on progressively harder images in four stages: • Stage 1: Very easy images only") the second multi-label classification model with the first subset during a first curriculum learning round (Wei, Page 4 Paragraph 2: "Stage 1: Very easy images only"); train (Wei, Page 4 Paragraph 2: "… • Stage 2: Very easy + easy images") the second multi-label classification model with the first subset and the second subset during a second curriculum learning round (Wei, Page 4 Col 1 Paragraph 2: "Stage 2: Very easy + easy images"); and utilize the second multi-label classification model to generate predicted labels annotated on the medical images about each of the abnormal features (Wei, Page 7 Col 1 Paragraph 2: "Specifically, we frame the predictions of each model as the annotations …").

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav and Nam with Wei. The motivation for the combination is to be able to conduct multiple curriculum learning rounds with subsets of varying difficulty, as well as to generate predicted labels (Wei, Fig. 1). [Figure 1 of Wei is reproduced in the original action.]

Claims 2, 7, 8, 11, 12, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jadhav (US 11928186 B2), Nam (KR 102115534 B1), and Wei ("Learn like a Pathologist: Curriculum Learning by Annotator Agreement for Histopathology Image Classification"), further in view of Kamen (US 20220358648 A1).

Regarding Claim 2, the combination of Jadhav, Nam, and Wei fails to teach: wherein before training the first multi-label classification model, the multi-label classification method further comprises performing an image pre-processing to the medical images in the initial dataset. Kamen teaches this limitation (Kamen, [0051]: "In one embodiment, the plurality of images of mpMRI image 302 may be preprocessed to address or remove variability or variances between the plurality of images before being received by localization network"). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, and Wei with Kamen. The motivation for the combination is to be able to preprocess medical images (Kamen, [0051]: "Removing variances between the plurality of images of mpMRI image 302 ensures a high level of performance even with limited data availability").
Regarding Claim 7, the combination of Jadhav, Nam, and Wei fails to teach: wherein the second multi-label classification model comprises a convolutional neural network and the first multi-label classification model is trained based on a Masked Binary Cross-Entropy Loss function. Kamen teaches this limitation (Kamen, [0050]: "For both single modality localization loss 212 and classification loss 214, a binary cross entropy loss function is chosen as the objective function."). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, and Wei with Kamen. The motivation for the combination is to apply a masked binary cross-entropy loss function to the first multi-label classification model (Kamen, [0038]: "The loss function for the detection network may be based on a DICE score (i.e., Dice similarity coefficient) and binary cross entropy between the predicted localization map and the ground truth heat map.").

Regarding Claim 8, the combination of Jadhav, Nam, and Wei fails to teach: wherein the medical images comprise head computed tomography (CT) images. Kamen teaches this limitation (Kamen, [0026]: "… any suitable modality, such as, e.g., multi-parametric MRI (mpMRI), DynaCT, x-ray, ultrasound (US), single-photon emission computed tomography (SPECT), positron emission tomography (PET), etc."). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, and Wei with Kamen. The motivation for the combination is to obtain scans performed by computed tomography (Kamen, [0026]: "At step 102, one or more input medical images depicting a lesion are received. In one embodiment, the medical image is a magnetic resonance imaging (MRI) image, however it should be understood that the medical image may be of any suitable modality, such as, e.g., multi-parametric MRI (mpMRI), DynaCT, x-ray, ultrasound (US), single-photon emission computed tomography (SPECT), positron emission tomography (PET), etc.").

Regarding Claim 11, the combination of Jadhav, Nam, and Wei teaches: generating, by the second multi-label classification model, confidence values corresponding to the predicted labels (Nam, [0044]: "In some embodiments, the learning image set may be automatically classified based on an anomaly score (e.g., a confidence score of an anomaly class) output by the detection model."); and calculating an absolute error (Nam, [0050]: "prediction errors") based on the confidence values and the partial input labels (Nam, [0050]: "the transform model 37 may be further trained using prediction errors of the detection model 33"). The combination of Jadhav, Nam, and Wei fails to teach: displaying the predicted labels in a ranking based on the absolute error. Kamen teaches this limitation (Kamen, [0105]: "By comparing the pseudo-label with the annotation, the uncertainty of each sample in annotated deployment dataset D.sub.dl can be determined and ranked. For example, probability values closer to 0.5 may indicate much more uncertainty than probability values closing to 0 or 1."). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, and Wei with Kamen. The motivation for the combination is to be able to rank the predicted labels based on absolute error (Kamen, [0110]: "the cross-validation performance on multi-site dataset D.sub.S∪D.sub.M should be within a certain range (defined by error ∈).").

Regarding Claim 12, the combination of Jadhav, Nam, Wei, and Kamen teaches: collecting a correction command about revising (Nam, [0061]: "… updated") the predicted labels (Nam, [0061]: "For example, when the prediction error of the detection model 50 is calculated for the fake image 53 converted through the first generator 41, the first generator 41 may be updated using the prediction error."); obtaining revised input labels according to the correction command (Nam, [0096]: "Therefore, when the transform model 113 is updated so that the prediction error 119 is minimized, the transform model 113 can be trained to perform an accurate difficulty transform while maintaining an ideal class and synthesize a fake image that is close to real."); and training a third multi-label classification model during curriculum learning rounds in reference with the revised input labels (Nam, [0086]: "In this case, the transformation model 83 may be trained to transform the image while maintaining the class well (i.e., training focused on maintaining the class is performed)").
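Claims 11 and 19 turn on ranking predicted labels by the absolute error between model confidence values and the partial input labels. Here is a minimal sketch of one plausible reading, assuming unlabeled features are encoded as NaN; the encoding and function name are illustrative, not from the application or the cited art.

```python
import numpy as np

def rank_by_absolute_error(confidences, partial_labels):
    """Rank labeled features by |confidence - input label|, largest error first.

    confidences:    (n_features,) model probabilities in [0, 1]
    partial_labels: (n_features,) 1.0 / 0.0 where labeled, np.nan where unlabeled
    Returns indices of the labeled features sorted by descending absolute error.
    """
    abs_err = np.abs(confidences - partial_labels)       # NaN where unlabeled
    labeled = np.flatnonzero(~np.isnan(abs_err))
    return labeled[np.argsort(-abs_err[labeled])]

# Example: feature 1 (confidence 0.2 against a positive label) ranks first.
# rank_by_absolute_error(np.array([0.9, 0.2, 0.6]),
#                        np.array([1.0, 1.0, np.nan]))  ->  array([1, 0])
```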
Regarding Claim 19, the combination of Jadhav, Nam, and Wei teaches: a displayer, coupled with the processing unit, wherein the processing unit is configured to generate confidence values corresponding to the predicted labels based on the second multi-label classification model (Nam, [0044]: "In some embodiments, the learning image set may be automatically classified based on an anomaly score (e.g., a confidence score of an anomaly class) output by the detection model."); and the processing unit is configured to calculate an absolute error (Nam, [0050]: "prediction errors") based on the confidence values and the partial input labels (Nam, [0050]: "the transform model 37 may be further trained using prediction errors of the detection model 33"). The combination of Jadhav, Nam, and Wei fails to teach: the displayer is configured to display the predicted labels in a ranking based on the absolute error. Kamen teaches this limitation (Kamen, [0105]: "By comparing the pseudo-label with the annotation, the uncertainty of each sample in annotated deployment dataset D.sub.dl can be determined and ranked. For example, probability values closer to 0.5 may indicate much more uncertainty than probability values closing to 0 or 1."). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, and Wei with Kamen. The motivation for the combination is to be able to rank the predicted labels based on absolute error (Kamen, [0110]: "the cross-validation performance on multi-site dataset D.sub.S∪D.sub.M should be within a certain range (defined by error ∈).").

Regarding Claim 20, the combination of Jadhav, Nam, Wei, and Kamen teaches: an input interface, coupled with the processing unit, wherein the input interface is configured to collect a correction command about revising (Nam, [0061]: "… updated") the predicted labels (Nam, [0061]: "For example, when the prediction error of the detection model 50 is calculated for the fake image 53 converted through the first generator 41, the first generator 41 may be updated using the prediction error."); the processing unit is configured to obtain revised input labels according to the correction command (Nam, [0096]: "Therefore, when the transform model 113 is updated so that the prediction error 119 is minimized, the transform model 113 can be trained to perform an accurate difficulty transform while maintaining an ideal class and synthesize a fake image that is close to real."); and train a third multi-label classification model during curriculum learning rounds in reference with the revised input labels (Nam, [0086]: "In this case, the transformation model 83 may be trained to transform the image while maintaining the class well (i.e., training focused on maintaining the class is performed)").

Claims 3, 9, 10, 14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Jadhav (US 11928186 B2), Nam (KR 102115534 B1), Wei ("Learn like a Pathologist: Curriculum Learning by Annotator Agreement for Histopathology Image Classification"), and Kamen (US 20220358648 A1), further in view of Salehinejad ("A real-world demonstration of machine learning generalizability in the detection of intracranial hemorrhage on head computerized tomography").

Regarding Claim 3, the combination of Jadhav, Nam, Wei, and Kamen fails to teach: wherein the image pre-processing comprises at least one of image matting, image windowing and sequential image stacking. Salehinejad teaches: image matting (Salehinejad, Page 2 Paragraph 5: "Feature extraction from each image"), image windowing (Salehinejad, Page 2 Paragraph 5: "Adjustment of the window center and width of each CT image"), and sequential image stacking (Salehinejad, Page 2 Paragraph 6: "The three enhanced images are then stacked and passed to two deep convolutional neural networks"). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, Wei, and Kamen with Salehinejad. The motivation for the combination is to perform image matting (an extraction process), image windowing, and image stacking (Salehinejad, Figure 1). [Salehinejad Figure 1 is reproduced in the original action.]

Regarding Claim 9, the combination of Jadhav, Nam, Wei, and Kamen fails to teach: wherein the abnormal features comprise intraparenchymal hemorrhage (IPH), intraventricular hemorrhage (IVH), subarachnoid hemorrhage (SAH), subdural intracranial hemorrhage (SDH) and epidural hemorrhage (EDH). Salehinejad teaches this limitation (Salehinejad, Page 2 Paragraph 5: "Each CT image in this dataset was annotated by a neuroradiologist for the presence or absence of epidural (EDH), subdural (SDH), subarachnoid (SAH), intraventricular (IVH), and intraparenchymal (IPH) hemorrhage. This dataset consists of 874,035 images with class imbalance amongst the types of ICH"). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, Wei, and Kamen with Salehinejad. The motivation for the combination is for the abnormal features to comprise intraparenchymal hemorrhage (IPH), intraventricular hemorrhage (IVH), subarachnoid hemorrhage (SAH), subdural intracranial hemorrhage (SDH) and epidural hemorrhage (EDH) (Salehinejad, Table 1). [Salehinejad Table 1 is reproduced in the original action.]

Regarding Claim 10, the combination of Jadhav, Nam, Wei, and Kamen fails to teach: wherein the second multi-label classification model is utilized to generate five predicted labels about positive or negative predictions of IPH, IVH, SAH, SDH and EDH corresponding to one medical image. Salehinejad teaches this limitation (Salehinejad, Page 2 Paragraph 7: "This distribution is passed to a set of thresholds where if at least the predicted probability of one hemorrhage type is more than or equal to its corresponding threshold, the output label will be positive for ICH."). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, Wei, and Kamen with Salehinejad. The motivation for the combination is that labels are generated based on positive or negative predictions of the abnormality in the medical image (Salehinejad, Figure 6). [Salehinejad Figure 6 is reproduced in the original action.]
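Claims 3, 9, and 10 concern head-CT pre-processing and hemorrhage labeling, mapped above to Salehinejad's window-and-stack pipeline. Here is a sketch of that style of pre-processing, assuming Hounsfield-unit input; the three window settings are typical radiology values, not taken from the claims or the cited paper, and "sequential image stacking" in the claim could alternatively mean stacking adjacent slices.

```python
import numpy as np

def window(ct_hu, center, width):
    """Clip a CT slice (Hounsfield units) to a window and rescale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(ct_hu, lo, hi) - lo) / (hi - lo)

def preprocess_head_ct(ct_hu):
    """Stack three windowed views of one slice into a 3-channel array,
    in the spirit of Salehinejad's "three enhanced images are then stacked"."""
    brain    = window(ct_hu, center=40.0,  width=80.0)    # brain window
    subdural = window(ct_hu, center=80.0,  width=200.0)   # subdural window
    bone     = window(ct_hu, center=600.0, width=2800.0)  # bone window
    return np.stack([brain, subdural, bone], axis=0)      # shape (3, H, W)
```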
Regarding Claim 14, the combination of Jadhav, Nam, Wei, and Kamen teaches: wherein before training the first multi-label classification model, the processing unit is further configured to perform an image pre-processing to the medical images in the initial dataset (Kamen, [0051]: "In one embodiment, the plurality of images of mpMRI image 302 may be preprocessed to address or remove variability or variances between the plurality of images before being received by localization network"). The combination of Jadhav, Nam, Wei, and Kamen fails to teach: the image pre-processing comprises at least one of image matting, image windowing and sequential image stacking. Salehinejad teaches: image matting (Salehinejad, Page 2 Paragraph 5: "Feature extraction from each image"), image windowing (Salehinejad, Page 2 Paragraph 5: "Adjustment of the window center and width of each CT image"), and sequential image stacking (Salehinejad, Page 2 Paragraph 6: "The three enhanced images are then stacked and passed to two deep convolutional neural networks"). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, Wei, and Kamen with Salehinejad. The motivation for the combination is to perform image matting (an extraction process), image windowing, and image stacking (Salehinejad, Figure 1). [Salehinejad Figure 1 is reproduced in the original action.]

Regarding Claim 18, the combination of Jadhav, Nam, Wei, and Kamen teaches: wherein the medical images comprise head computed tomography (CT) images (Kamen, [0026]: "… any suitable modality, such as, e.g., multi-parametric MRI (mpMRI), DynaCT, x-ray, ultrasound (US), single-photon emission computed tomography (SPECT), positron emission tomography (PET), etc."). The combination of Jadhav, Nam, Wei, and Kamen fails to teach: the abnormal features comprise intraparenchymal hemorrhage (IPH), intraventricular hemorrhage (IVH), subarachnoid hemorrhage (SAH), subdural intracranial hemorrhage (SDH) and epidural hemorrhage (EDH). Salehinejad teaches: the abnormal features comprise IPH, IVH, SAH, SDH and EDH (Salehinejad, Page 2 Paragraph 5: "Each CT image in this dataset was annotated by a neuroradiologist for the presence or absence of epidural (EDH), subdural (SDH), subarachnoid (SAH), intraventricular (IVH), and intraparenchymal (IPH) hemorrhage. This dataset consists of 874,035 images with class imbalance amongst the types of ICH"); and the second multi-label classification model is utilized to generate five predicted labels about positive or negative predictions of IPH, IVH, SAH, SDH and EDH corresponding to one medical image (Salehinejad, Page 2 Paragraph 7: "This distribution is passed to a set of thresholds where if at least the predicted probability of one hemorrhage type is more than or equal to its corresponding threshold, the output label will be positive for ICH."). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, Wei, and Kamen with Salehinejad. The motivation for the combination is that labels are generated based on positive or negative predictions of the abnormality in the medical image (Salehinejad, Figure 6). [Salehinejad Figure 6 is reproduced in the original action.]

Claims 4, 6, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Jadhav (US 11928186 B2), Nam (KR 102115534 B1), and Wei ("Learn like a Pathologist: Curriculum Learning by Annotator Agreement for Histopathology Image Classification"), further in view of Durand ("Learning a Deep ConvNet for Multi-label Classification with Partial Labels").

Regarding Claim 4, the combination of Jadhav, Nam, and Wei fails to teach: wherein each of the medical images is potentially subject to M abnormal features, the partial input labels indicate positive or negative input labels about N abnormal features, M and N are positive integers and M > N, and an unlabeled part of the abnormal features is unknown corresponding to the medical images in the initial dataset. Durand teaches: partial input labels that indicate positive (Durand, Page 3 Paragraph 4: "y_c^(i) = 1") or negative (Durand, Page 3 Paragraph 4: "resp. −1") input labels about N abnormal features, with an unlabeled part of the abnormal features being unknown (Durand, Page 3 Paragraph 4: "… 0 … unknown") corresponding to the medical images in the initial dataset (Durand, Page 3 Paragraph 4: "y_c^(i) = 1 (resp. −1 and 0) means the category is present (resp. absent and unknown)"). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, and Wei with Durand. The motivation for the combination is to indicate the presence, absence, or unknown status of a partial label through a positive or negative input label or an unlabeled entry; the prior art's y is the label vector (Durand, Page 3 Paragraph 4: "y_c^(i) = 1 (resp. −1 and 0) means the category is present (resp. absent and unknown)").

Regarding Claim 6, the combination of Jadhav, Nam, and Wei teaches: wherein estimating the difficulty levels of the medical images comprises generating, by the first multi-label classification model, probability values for each of the abnormal features relative to the medical images (Jadhav, Col 2 Line 4: "The method also comprises receiving an output of the ML computer model, wherein the output is a vector output specifying probability values associated with labels in the plurality of labels."). The combination of Jadhav, Nam, and Wei fails to teach: estimating the difficulty levels based on a difficulty estimation function according to the probability values and the partial input labels. Durand teaches this limitation (Durand, Page 4 Paragraph 6: "An easy example has a high absolute score whereas a hard example has a score close to 0."). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, and Wei with Durand. The motivation for the combination is to generate probability values and to estimate the difficulty of those values, represented by the absolute score in the prior art (Durand, Page 5 Paragraph 1: "To find the optimal v, we sort the examples by decreasing order of absolute score and label only the top-θ% of the missing labels.").
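Claims 6 and 17 recite a difficulty estimation function over the first model's probability values and the partial input labels, which the examiner maps to Durand's absolute-score intuition (confident scores are easy, scores near the decision boundary are hard). A minimal sketch of a function in that spirit, again with NaN marking the unlabeled part; the function actually claimed may differ.

```python
import numpy as np

def difficulty(probabilities, partial_labels):
    """Per-image difficulty from first-model probability values.

    Confident predictions (probability near 0 or 1) make an image easy;
    predictions near 0.5 make it hard. Only labeled features contribute.
    Returns a score in [0, 1]: 0 = easiest, 1 = hardest.
    """
    labeled = ~np.isnan(partial_labels)
    margin = np.abs(probabilities[labeled] - 0.5)   # confidence margin per feature
    return 1.0 - 2.0 * float(margin.mean())

# Thresholding this score yields the claimed first (easy) and second (hard) subsets.
```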
Regarding Claim 15, the combination of Jadhav, Nam, and Wei fails to teach: wherein each of the medical images is potentially subject to M abnormal features, the partial input labels indicate positive or negative input labels about N abnormal features, M and N are positive integers and M > N, and an unlabeled part of the abnormal features is unknown corresponding to the medical images in the initial dataset. Durand teaches: partial input labels that indicate positive (Durand, Page 3 Paragraph 4: "y_c^(i) = 1") or negative (Durand, Page 3 Paragraph 4: "resp. −1") input labels about N abnormal features, with an unlabeled part of the abnormal features being unknown (Durand, Page 3 Paragraph 4: "… 0 … unknown") corresponding to the medical images in the initial dataset (Durand, Page 3 Paragraph 4: "y_c^(i) = 1 (resp. −1 and 0) means the category is present (resp. absent and unknown)"). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, and Wei with Durand. The motivation for the combination is to indicate the presence, absence, or unknown status of a partial label through a positive or negative input label or an unlabeled entry; the prior art's y is the label vector (Durand, Page 3 Paragraph 4: "y_c^(i) = 1 (resp. −1 and 0) means the category is present (resp. absent and unknown)").

Regarding Claim 17, the combination of Jadhav, Nam, and Wei teaches: wherein estimating the difficulty levels of the medical images comprises generating, by the first multi-label classification model, probability values for each of the abnormal features relative to the medical images (Jadhav, Col 2 Line 4: "The method also comprises receiving an output of the ML computer model, wherein the output is a vector output specifying probability values associated with labels in the plurality of labels."). The combination of Jadhav, Nam, and Wei fails to teach: estimating the difficulty levels based on a difficulty estimation function according to the probability values and the partial input labels. Durand teaches this limitation (Durand, Page 4 Paragraph 6: "An easy example has a high absolute score whereas a hard example has a score close to 0."). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, and Wei with Durand. The motivation for the combination is to generate probability values and to estimate the difficulty of those values, represented by the absolute score in the prior art (Durand, Page 5 Paragraph 1: "To find the optimal v, we sort the examples by decreasing order of absolute score and label only the top-θ% of the missing labels.").

Claims 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Jadhav (US 11928186 B2), Nam (KR 102115534 B1), Wei ("Learn like a Pathologist: Curriculum Learning by Annotator Agreement for Histopathology Image Classification"), and Durand ("Learning a Deep ConvNet for Multi-label Classification with Partial Labels"), further in view of Kamen (US 20220358648 A1).

Regarding Claim 5, the combination of Jadhav, Nam, Wei, and Durand fails to teach: wherein the first multi-label classification model comprises a convolutional neural network, and the first multi-label classification model is trained based on a Masked Binary Cross-Entropy Loss function according to the partial input labels without considering the unlabeled part of the abnormal features. Kamen teaches this limitation (Kamen, [0050]: "For single modality localization loss 316, multi-modality localization loss 318, and multi-modality classification loss 320, a binary cross entropy loss function is chosen as the objective function."). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, Wei, and Durand with Kamen. The motivation for the combination is to apply the binary cross-entropy loss function for training the first multi-label classification model (Kamen, [0038]: "The loss function for the detection network may be based on a DICE score (i.e., Dice similarity coefficient) and binary cross entropy between the predicted localization map and the ground truth heat map.").
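Claims 5 and 16 recite training with a Masked Binary Cross-Entropy Loss that ignores the unlabeled part of the abnormal features, whereas the cited Kamen passages describe plain binary cross-entropy; the masking is the contested refinement. A sketch of the masked variant, assuming a 0/1 mask tensor flagging which entries are labeled; all names are illustrative, not from the application.

```python
import torch
import torch.nn.functional as F

def masked_bce_loss(logits, targets, mask):
    """Binary cross-entropy averaged over labeled entries only.

    logits, targets, mask: float tensors of shape (batch, n_features);
    targets in {0.0, 1.0}; mask is 1.0 for labeled entries and 0.0 for
    the unlabeled part, so unlabeled features contribute no gradient.
    """
    per_entry = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (per_entry * mask).sum() / mask.sum().clamp(min=1.0)
```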
Regarding Claim 16, the combination of Jadhav, Nam, Wei, and Durand fails to teach: wherein the first multi-label classification model comprises a convolutional neural network, and the first multi-label classification model is trained based on a Masked Binary Cross-Entropy Loss function according to the partial input labels without considering the unlabeled part of the abnormal features. Kamen teaches this limitation (Kamen, [0050]: "For single modality localization loss 316, multi-modality localization loss 318, and multi-modality classification loss 320, a binary cross entropy loss function is chosen as the objective function."). Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Jadhav, Nam, Wei, and Durand with Kamen. The motivation for the combination is to apply the binary cross-entropy loss function for training the first multi-label classification model (Kamen, [0038]: "The loss function for the detection network may be based on a DICE score (i.e., Dice similarity coefficient) and binary cross entropy between the predicted localization map and the ground truth heat map.").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANGI SARKAR, whose telephone number is (571) 272-7262. The examiner can normally be reached M-F, 7:30-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHIVANGI SARKAR/
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Nov 16, 2023 — Application Filed
Feb 11, 2026 — Non-Final Rejection, §103 and §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
