DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. TW112101588, filed on 1/13/2023.
Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. 37 CFR 41.154(b) and 41.202(e).
Failure to provide a certified translation may result in no benefit being accorded for the non-English application.
Specification
The disclosure is objected to because of the following informalities:
In the Title, “Modles” should read “Models.”
Paragraph [0039], line 14, “an image an image” should read “an image.”
Paragraph [0052], lines 23 and 28, “algorisms” should read “algorithms.”
Paragraph [0068], line 10, “algorisms” should read “algorithms.”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-7 and 9-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 step (c) recites “segmenting each of the plurality of processed images of step (b) to produce a plurality of segmented images, and/or detecting the attribute on each of the plurality of processed images of step (b) to produce a plurality of extracted sub-images.” The term “and/or” requires only one of the limitations to be present, so step (c) gives rise to three possible branches. As explained below, selection of branch one (1) or branch two (2), the “or” readings, will render Claim 1 indefinite; branch three (3), the “and” reading, must be selected to prevent indefiniteness issues in Claim 1.
If branch one (1) is taken, step (c) produces a plurality of segmented images.
If branch two (2) is taken, step (c) produces a plurality of extracted sub-images.
If branch three (3) is taken, step (c) produces a plurality of segmented images and a plurality of extracted sub-images.
Claim 1 step (d) recites, “segmenting each of the plurality of extracted sub-images of step (c) to produce a plurality of segmented sub-images.” If branch one (1) is taken in step (c) of Claim 1, the claim will be rendered indefinite since the extracted sub-images will not exist. Step (d) requires that branch two (2) or branch three (3) be taken in step (c) of Claim 1 to avoid indefiniteness issues.
Claim 1 step (e) recites, “combining each of the extracted sub-images of step (c) and each of the segmented sub-images of step (d), thereby producing a plurality of combined images respectively exhibiting the attribute for each of the mammographic image.” If branch one (1) is taken in step (c) of Claim 1, the claim will be rendered indefinite since neither the extracted sub-images nor the segmented sub-images will exist. Step (e) requires that branch two (2) or branch three (3) be taken in Claim 1 step (c) to avoid indefiniteness issues.
Claim 1 step (f) recites, “classifying and training the plurality of combined images of step (e) with the aid of a convolutional neural network, thereby establishing the model.” If branch one (1) is taken in step (c) of Claim 1, the claim will be rendered indefinite since the combined images will not exist. Step (f) requires that branch two (2) or branch three (3) be taken in step (c) of Claim 1 to avoid indefiniteness issues.
Claim 2 recites, “the method of claim 1, wherein in step (c), upon being detected, the attribute on each of the processed images of step (b) is framed to produce a framed image.” If branch one (1) is taken in step (c) of Claim 1, the claim will be rendered indefinite since the attribute will not have been detected in the processed images. Claim 2 requires that branch two (2) or branch three (3) be taken in step (c) of Claim 1 to avoid indefiniteness issues.
Claim 3 recites, “the method of claim 2, further comprising mask filtering the framed image and the segmented image of step (c).” If branch one (1) is taken in step (c) of Claim 1, the claim will be rendered indefinite since the framed image will not exist. If branch two (2) is taken in step (c) of Claim 1, the claim will be rendered indefinite since the segmented image will not exist. Claim 3 requires that branch three (3) be taken in step (c) of Claim 1 to avoid indefiniteness issues since both limitations are required to produce both the framed image and the segmented image.
Claim 4 recites, “the method of claim 3, further comprising cropping the framed image to produce the extracted sub-image of step (c).” If branch one (1) is taken in step (c) of Claim 1, the claim will be rendered indefinite since the framed image will not exist. If branch two (2) is taken in step (c) of Claim 1, the claim will be rendered indefinite due to its dependence on Claim 3, which requires both limitations from step (c) of Claim 1. Therefore, Claim 4 requires that branch three (3) be taken in step (c) of Claim 1 to avoid indefiniteness issues.
Claim 5 recites, “the method of claim 4, further comprising, after step (d) or step (e), updating the segmented image of step (c) with the aid of the segmented sub-image of step (d) and the framed image.” If branch one (1) is taken in step (c) of Claim 1, the claim will be rendered indefinite because the segmented sub-image of step (d) and the framed image will not exist. If branch two (2) is taken in step (c) of Claim 1, the claim will be rendered indefinite because the segmented image of step (c) will not exist. Claim 5 requires that branch three (3) be taken in step (c) of Claim 1 to avoid indefiniteness issues.
Claim 6 recites, “the method of claim 1, wherein in step (c), the attribute on each of the processed images of step (b) is detected by use of an object detection algorithm.” If branch one (1) is taken in step (c) of Claim 1, the claim will be rendered indefinite since the attribute will not be detected in step (c) of Claim 1. Claim 6 requires that branch two (2) or three (3) be taken in step (c) of Claim 1 to avoid indefiniteness issues.
Claim 7 recites, “the method of claim 1, wherein in step (c), each of the processed images is segmented by use of a U-net architecture.” If branch two (2) is taken in step (c) of Claim 1, the claim will be rendered indefinite since the processed images will not be segmented. Claim 7 requires that branch one (1) or branch three (3) be taken in step (c) of Claim 1 to avoid indefiniteness issues.
For the sake of further prosecution, the Examiner will interpret Claim 1 step (c) to read, “segmenting each of the plurality of processed images of step (b) to produce a plurality of segmented images, and detecting the attribute on each of the plurality of processed images of step (b) to produce a plurality of extracted sub-images,” i.e., branch three (3) is selected.
Claim 9 step (c) recites, “segmenting the processed image of step (b) to produce a segmented image, and/or detecting the attribute on the processed image of step (b), thereby producing an extracted sub-image thereof.” The term “and/or” requires only one of the elements to be present, so step (c) gives rise to three possible branches. As explained below, selection of branch one (1) or branch two (2), the “or” readings, in step (c) of Claim 9 will render the claim indefinite; branch three (3), the “and” reading, must be selected to prevent indefiniteness issues in Claim 9.
If branch one (1) is taken, step (c) produces a segmented image.
If branch two (2) is taken, step (c) produces an extracted sub-image.
If branch three (3) is taken, step (c) produces a segmented image and an extracted sub-image.
Claim 9 step (d) recites, “segmenting the extracted sub-image of step (c) to produce a segmented sub-image.” If branch one (1) is taken in step (c) of Claim 9, the claim will be rendered indefinite since no extracted sub-image will exist. Step (d) of Claim 9 requires that branch two (2) or branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues.
Claim 9 step (e) recites, “combining the extracted sub-image of step (c) and the segmented sub-image of step (d), thereby producing a text image exhibiting the attribute for the mammographic image.” If branch one (1) is taken in step (c) of Claim 9, the claim will be rendered indefinite since no extracted sub-image or segmented sub-image will exist to be combined. Step (e) of Claim 9 requires that branch two (2) or branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues.
Claim 9 step (f) recites, “determining the breast lesion of the subject by processing the text image of step (e) within the model established by the method of claim 1.” If branch one (1) is taken in step (c) of Claim 9, the claim will be rendered indefinite since no text image will exist to be processed. Step (f) of Claim 9 requires that branch two (2) or branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues.
Claim 9 step (g) recites, “providing an anti-cancer treatment to the subject based on the breast lesion determined in step (f).” If branch one (1) is taken in step (c) of Claim 9, the claim will be rendered indefinite since no text image will exist in step (f). Step (g) of Claim 9 requires that branch two (2) or branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues.
Claim 10 recites, “the method of claim 9, wherein in step (c), upon being detected, the attribute on the processed images of step (b) is framed to produce a framed image.” If branch one (1) is taken in step (c) of Claim 9, the claim will be rendered indefinite since the detected attribute will not exist, and therefore cannot be framed. Claim 10 requires that branch two (2) or branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues.
Claim 11 recites, “the method of claim 10, further comprising mask filtering the framed image and the segmented image of step (c) to eliminate any mistaken attribute detected in step (c).” If branch one (1) is taken in step (c) of Claim 9, the claim will be rendered indefinite since the framed image will not exist. If branch two (2) is taken in step (c) of Claim 9, the claim will be rendered indefinite since the segmented image of step (c) will not exist. Claim 11 requires that branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues since both limitations in Claim 9 step (c) are required to produce both the framed image and the segmented image.
Claim 12 recites, “the method of claim 11, further comprising cropping the framed image to produce the extracted sub-image of step (c).” If branch one (1) is taken in step (c) of Claim 9, the claim will be rendered indefinite since the framed image will not exist. Therefore, Claim 12 requires that branch two (2) or branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues.
Claim 13 recites, “the method of claim 12, further comprising, after step (d) or step (e), updating the segmented image of step (c) with the aid of the segmented sub-image of step (d) and the framed image.” If branch one (1) is taken in step (c) of Claim 9, the claim will be rendered indefinite because the segmented sub-image of step (d) and the framed image will not exist. If branch two (2) is taken in step (c) of Claim 9, the claim will be rendered indefinite because the segmented image of step (c) will not exist. Claim 13 requires that branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues.
Claim 14 recites, “the method of claim 9, wherein in step (c), the attribute on the processed image of step (b) is detected by use of an object detection algorithm.” If branch one (1) is taken in step (c) of Claim 9, the claim will be rendered indefinite since the attribute will not be detected in step (c) of Claim 9. Claim 14 requires that branch two (2) or branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues.
Claim 15 recites, “the method of claim 9, wherein in step (c), the processed image is segmented by performing a U-net architecture.” If branch two (2) is taken in step (c) of Claim 9, the claim will be rendered indefinite since the processed image will not be segmented in step (c) of Claim 9. Claim 15 requires that branch one (1) or branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues.
Claim 16 recites, “the method of claim 9, wherein in step (g), the anti-cancer treatment is selected from the group consisting of a surgery, a radiofrequency ablation, a systemic chemotherapy, a transarterial chemoembolization (TACE), an immunotherapy, a targeted drug therapy, a hormone therapy, and a combination thereof.” If branch one (1) is taken in step (c) of Claim 9, the claim will be rendered indefinite because the text image required to determine the breast lesion in step (f) will not exist. Claim 16 requires that branch two (2) or branch three (3) be taken in step (c) of Claim 9 to avoid indefiniteness issues.
For the sake of further prosecution, the Examiner will interpret Claim 9 step (c) to read, “segmenting the processed image of step (b) to produce a segmented image, and detecting the attribute on the processed image of step (b), thereby producing an extracted sub-image thereof,” i.e., branch three (3) is selected.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 6, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Mahmood et al. (NPL “Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach,” 2022, hereafter referred to as Mahmood) in view of Heindl et al. (U.S. Patent Pub. No. 2020/0167928 A1, hereafter referred to as Heindl).
Regarding Claim 1, Mahmood teaches a method for building a model for determining a breast lesion in a subject (Abstract, Mahmood teaches a novel deep learning-based convolutional neural network for diagnosing breast malignancy tissues.), comprising: (a) obtaining a plurality of mammographic images of the breast from the subject (Fig. 1, (a)-(d), “Datasets,” Mahmood teaches obtaining mammography images from Private and MIAS (Mammogram Image Analysis Society) datasets. The images include different views of both breasts.), in which each of the mammographic images comprises an attribute of the breast lesion selected from the group consisting of location, margin, calcification, lump, mass, shape, size, status of the breast lesion, and a combination thereof (Fig. 1, “Materials and methodology,” “Datasets,” “Discussion,” “Patches extractions and normalizations,” Mahmood teaches mammographic images containing dense breast masses with distinct shapes, edges/boundaries, and sizes, classified by calcification or mass based on their appearance, which may lead to the diagnosis of breast malignancies (status). Additionally, each suspicious region has an identified location. Fig. 1 shows mammography images exhibiting lump and mass of the breast lesions. Under the broadest reasonable interpretation, the Examiner interprets the claim language “selected from the group consisting of … and a combination thereof” to require only one or a few of the listed limitations.); (b) producing a plurality of processed images via subjecting each of the plurality of mammographic images to image treatments selected from the group consisting of image cropping, image denoising, image flipping, histogram equalization, image padding, and a combination thereof (“Introduction,” “Mammogram preprocessing,” Mahmood teaches preprocessing images in the MIAS and Private datasets to eliminate noise. Contrast limited adaptive histogram equalization (CLAHE) was applied to enhance the overall quality of the mammographic images. Data augmentation techniques were applied to the images, including flipping and cropping. Image enhancement approaches were applied as well, including margins on the images. Under the broadest reasonable interpretation, the Examiner interprets the claim language “selected from the group consisting of … and a combination thereof” to require only one or a few of the listed limitations.); and (c) detecting the attribute on each of the plurality of processed images of step (b) to produce a plurality of extracted sub-images (Fig. 1 (e-h), Mahmood teaches cropping suspicious areas in mammography images containing malignant or benign breast tumors. Fig. 1 (e-f) show examples of extracted ROIs of benign tumors. Fig. 1 (g-h) show examples of extracted ROIs of malignant tumors. The Examiner interprets extracting ROIs with suspicious areas (benign or malignant tumors) to be detecting the attribute (status of breast lesion and mass). Under the broadest reasonable interpretation, the Examiner interprets the claim language “and/or” to require only one of the limitations. However, for the sake of further prosecution, the Examiner will interpret Claim 1 step (c) to read “and.”).
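For illustration only: a minimal Python sketch, assuming OpenCV, of the kind of step (b) image treatments described above (denoising, CLAHE histogram equalization, flipping, padding). The function name and parameter values are hypothetical and are not drawn from Mahmood or the claims.

```python
# Hypothetical sketch of the step (b) image treatments (not from Mahmood).
import cv2
import numpy as np

def preprocess_mammogram(gray: np.ndarray) -> np.ndarray:
    """Denoise, equalize, flip, and pad a grayscale mammogram (uint8)."""
    # Image denoising (a median filter is one common choice).
    denoised = cv2.medianBlur(gray, 3)
    # Contrast limited adaptive histogram equalization (CLAHE).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(denoised)
    # Image flipping (horizontal), a typical augmentation step.
    flipped = cv2.flip(equalized, 1)
    # Image padding: a constant margin around the breast region.
    return cv2.copyMakeBorder(flipped, 20, 20, 20, 20,
                              cv2.BORDER_CONSTANT, value=0)
```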
Mahmood does not explicitly disclose (c) segmenting each of the plurality of processed images of step (b) to produce a plurality of segmented images; (d) segmenting each of the plurality of extracted sub-images of step (c) to produce a plurality of segmented sub-images; (e) combining each of the extracted sub-images of step (c) and each of the segmented sub-images of step (d), thereby producing a plurality of combined images; or (f) classifying and training the plurality of combined images of step (e) with the aid of a convolutional neural network, thereby establishing the model.
Heindl is in the same field of art of using a neural network model to identify abnormalities in mammography images. Further, Heindl teaches (c) segmenting each of the plurality of processed images of step (b) to produce a plurality of segmented images (Paragraphs [0024-25], [0001], Heindl teaches segmentation of anatomical regions in mammograms. The anatomical regions comprise at least a part of a human breast area.), (d) segmenting each of the plurality of extracted sub-images of step (c) to produce a plurality of segmented sub-images (Paragraphs [0027], [0048], Heindl teaches predicting a location of one or more segmented regions in image patches. Lesions may be segmented. The lesions which may be segmented may comprise cancerous growths, masses, abscesses, lacerations, calcifications, or other irregularities within biological tissue.); (e) combining each of the extracted sub-images of step (c) and each of the segmented sub-images of step (d) (Paragraphs [0046], [0041], [0020], [0011], Heindl teaches a binary mask upscaled to the original size of the input image, which is stored as an overlay. The overlay is a segmentation outline showing one or more locations of one or more regions on the original image. The segmentation determines a number of useful characteristic data, such as area, shape, and size, and is more precise than traditional methods. This segmentation method can be used to more accurately detect malignant tumors. The Examiner interprets an overlay on the original image to be synonymous with “combining” the images since the claim is silent as to how the images are combined.), thereby producing a plurality of combined images respectively exhibiting the attribute for each of the mammographic images (Paragraphs [0046], [0048], Fig. 3, reference characters 204-206, Heindl teaches an overlay which may comprise any markings on one or more parts of the original image, for example by outlining different areas of human breast tissue. Lesions (cancerous growths, masses, abscesses, lacerations, calcifications, etc.) may also be segmented, for example, a lesion on a mammogram.); and (f) classifying and training the plurality of combined images of step (e) with the aid of a convolutional neural network, thereby establishing the model (Paragraph [0044], Heindl teaches training a fully convolutional network (FCN) to generate probability masks by providing a set of input values and associated weights. During training, the correct class for each value is known, so the FCN’s calculated output can be compared to the correct values. An error term can then be established, and the weights are adjusted so that, for future input values, the output probability mask is closer to the correct value. The specification of the instant application states, “the combined images used for training a machine learning model serve as ‘reference images’” (Paragraph [0039], Wang et al.). Therefore, the “correct values” to which the FCN compares its calculated output as a reference are interpreted by the Examiner as the “combined images.”).
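For illustration only: a minimal sketch, assuming OpenCV, of the “combining” reading applied to Heindl’s overlay teaching, in which a binary mask is upscaled to the size of the extracted sub-image and its outline is drawn on that sub-image. All names and parameters are hypothetical.

```python
# Hypothetical sketch of "combining" a sub-image with its segmentation mask
# by drawing the mask outline as an overlay (not code from Heindl).
import cv2
import numpy as np

def combine_subimage_and_mask(sub_image: np.ndarray,
                              mask: np.ndarray) -> np.ndarray:
    """Overlay the segmentation outline of `mask` onto `sub_image`."""
    # Upscale the binary mask to the sub-image size.
    mask_up = cv2.resize(mask.astype(np.uint8),
                         (sub_image.shape[1], sub_image.shape[0]),
                         interpolation=cv2.INTER_NEAREST)
    # Extract the outline of the segmented region(s).
    contours, _ = cv2.findContours(mask_up, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    combined = cv2.cvtColor(sub_image, cv2.COLOR_GRAY2BGR)
    # Draw the outline, producing a "combined image" exhibiting the attribute.
    cv2.drawContours(combined, contours, -1, (0, 255, 0), 2)
    return combined
```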
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Mahmood by segmenting the processed images and the extracted ROIs or “sub-images” to predict locations of breast lesions and generate respective masks, combining the ROIs and generated masks to clearly show the location of the attribute on the image, and training the fully convolutional network with the known mask values for each image, as taught by Heindl, to arrive at an invention that can achieve expert-level accuracy or better with regard to segmenting and identifying anatomical and pathological regions in mammograms (Heindl, Paragraph [0013]). One of ordinary skill in the art would have been motivated to combine the references because they are both in the field of using a neural network model to identify lesions in mammography images (Heindl, Paragraph [0029]), (Mahmood, “The experimental architecture of CNN-based model”).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
In regards to Claim 6, Mahmood in view of Heindl teach the method of claim 1, wherein in step (c), the attribute on each of the processed images of step (b) is detected by use of an object detection algorithm (“Introduction,” Mahmood discloses a convolutional neural network for classifying the ROI of breast masses, enabling physicians to detect even the most minor breast masses early. The Examiner interprets that a convolutional neural network is an object detection algorithm in light of the instant application’s specification, which states, “examples of object detection algorithms suitable for use in the present method include … convolutional neural networks (Paragraph [0051], Wang et al.).”).
In regards to Claim 8, Mahmood in view of Heindl teach the method of claim 1, wherein the subject is a human (Claim 8, Heindl teaches wherein the medical image data comprises one or more mammograms, the one or more regions comprise an anatomical region, and the anatomical region comprises at least part of a human breast area.).
Claims 2-5 are rejected under 35 U.S.C. 103 as being unpatentable over Mahmood et al. (NPL “Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach,” 2022, hereafter referred to as Mahmood) in view of Heindl et al. (U.S. Patent Pub. No. 2020/0167928 A1, hereafter referred to as Heindl) in further view of Park et al. (U.S. Patent Pub. No. 2025/0288267 A1, hereafter referred to as Park).
Regarding Claim 2, Mahmood in view of Heindl teaches the method of claim 1.
Mahmood in view of Heindl does not explicitly disclose wherein in step (c), upon being detected, the attribute on each of the processed images of step (b) is framed to produce a framed image.
Park is in the same field of art of using a neural network to predict one or more locations of lesions in mammography images. Further, Park teaches wherein in step (c), upon being detected (Paragraph [0010], Park teaches applying a neural network employing a You Only Look Once X (YOLOX) architecture to predict one or more locations and probabilities of lesions.), the attribute on each of the processed images of step (b) is framed to produce a framed image (Paragraph [0010], Park teaches generating at least one bounding box prediction to provide a particular prediction of the breast cancer.).
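For illustration only: a minimal sketch of “framing” a detected attribute with a bounding box and cropping the framed region to produce an extracted sub-image (the operations at issue in Claims 2 and 4). No detector is implemented here; the (x, y, w, h) box format and function names are hypothetical.

```python
# Hypothetical sketch of framing and cropping a detected region
# (not code from Park; a YOLOX-style detector would supply `box`).
import cv2
import numpy as np

def frame_detection(image: np.ndarray, box: tuple) -> np.ndarray:
    """Draw one predicted bounding box, producing a 'framed image'."""
    x, y, w, h = box
    framed = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
    cv2.rectangle(framed, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return framed

def crop_to_subimage(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the framed region, yielding the extracted sub-image."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]
```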
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Mahmood in view of Heindl by detecting the attribute in the processed mammography images and framing the breast cancer prediction with a bounding box, as taught by Park, to arrive at an invention that provides richer and more precise training signals to the model and leads to increased lesion detection performance (Park, Paragraph [0035]). One of ordinary skill in the art would have been motivated to combine the references because they are each in the field of using a neural network model to identify lesions in mammography images (Park, Abstract), (Heindl, Paragraph [0029]), (Mahmood, “The experimental architecture of CNN-based model”).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
In regards to Claim 3, Mahmood in view of Heindl in further view of Park teach the method of claim 2, further comprising mask filtering the framed image and the segmented image of step (c) to eliminate any mistaken attribute detected in step (c) (Fig. 3, reference character 205, Paragraphs [0044-45], Heindl teaches removing small areas of the binary mask and the segmentation if the region or lesion is incorrectly identified or not identified. For example, if a segmentation has an area of zeros, surrounded entirely by ones, then the zeros may be set to ones according to a predetermined threshold for the area.).
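For illustration only: a minimal sketch, assuming SciPy, of mask filtering of the kind described for Heindl: connected regions smaller than an area threshold are removed, and holes (zeros surrounded by ones) are filled. Heindl describes hole filling subject to a predetermined area threshold; for simplicity, this hypothetical sketch fills all holes.

```python
# Hypothetical sketch of mask filtering (not code from Heindl).
import numpy as np
from scipy import ndimage

def filter_mask(mask: np.ndarray, min_area: int = 50) -> np.ndarray:
    """Drop small regions and fill holes in a binary mask."""
    labeled, n = ndimage.label(mask)
    # Area of each connected region.
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    # Remove connected regions smaller than the area threshold.
    keep = np.isin(labeled, np.nonzero(sizes >= min_area)[0] + 1)
    # Fill areas of zeros entirely surrounded by ones.
    filled = ndimage.binary_fill_holes(keep)
    return filled.astype(np.uint8)
```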
In regards to Claim 4, Mahmood in view of Heindl in further view of Park teach the method of claim 3, further comprising cropping the framed image to produce the extracted sub-image of step (c) (Paragraphs [0044], [0046], Park teaches modifying images by cropping them with a cropping window placed in all possible locations. For example, in images of larger breasts, only cropping one window might not cover the whole breast parenchyma. The bounding boxes can be extracted again after augmentation to ensure the tightest fit to the lesion.).
In regards to Claim 5, Mahmood in view of Heindl in further view of Park discloses the method of claim 4, further comprising, after step (d) or step (e), updating the segmented image of step (c) with the aid of the segmented sub-image of step (d) (Paragraph [0046], Heindl teaches an overlay (binary mask) upscaled to the original size of the input image. The overlay contains markings for one or more parts of the original image, such as outlines for different areas of breast tissue or lesions.) and the framed image (Paragraph [0047], Park teaches object detection to predict bounding-box labels, which teaches models exactly where each lesion is located and what it looks like. The Examiner interprets overlaying the segmented sub-image and the framed image to be “updating” the segmented image in light of the instant application’s specification, which states, “the segmented image of step S103 can be updated with the aid of the segmented sub-image and the framed image. Specifically, the segmented sub-image and the framed image are overlaid (Wang et al., Paragraph [0059]).”).
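For illustration only: a minimal sketch of the “updating” interpretation quoted above, in which the segmented sub-image is overlaid onto the whole-image segmentation at the framed (bounding-box) location. The box format and names are hypothetical.

```python
# Hypothetical sketch of "updating" a segmentation with a sub-image mask.
import numpy as np

def update_segmentation(segmented: np.ndarray,
                        sub_mask: np.ndarray,
                        box: tuple) -> np.ndarray:
    """Overlay `sub_mask` onto `segmented` at the framed location."""
    x, y, w, h = box  # sub_mask is expected to have shape (h, w)
    updated = segmented.copy()
    # Union of the existing segmentation and the sub-image segmentation.
    updated[y:y + h, x:x + w] = np.maximum(updated[y:y + h, x:x + w],
                                           sub_mask)
    return updated
```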
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Mahmood et al. (NPL “Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach,” 2022, hereafter referred to as Mahmood) in view of Heindl et al. (U.S. Patent Pub. No. 2020/0167928 A1, hereafter referred to as Heindl) in further view of Juluru et al. (U.S. Patent Pub. No. 2025/0272831 A1, hereafter referred to as Juluru).
Regarding Claim 7, Mahmood in view of Heindl teach the method of claim 1.
Mahmood in view of Heindl does not explicitly disclose wherein in step (c), each of the processed images is segmented by use of a U-net architecture.
Juluru is in the same field of art of using a neural network model to identify abnormalities in mammogram images. Further, Juluru teaches wherein in step (c), each of the processed images is segmented by use of a U-net architecture (Fig. 9, Paragraph [0109], Juluru teaches a U-net architecture which takes a processed mammographic image as input and outputs a segmentation map.).
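For illustration only: a very small U-net-style network, assuming PyTorch, showing the architecture pattern attributed to Juluru (a processed image in, a segmentation map out). The depth and channel counts are hypothetical and not drawn from Juluru.

```python
# Hypothetical two-level U-net-style sketch (not Juluru's network).
import torch
import torch.nn as nn

def double_conv(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = double_conv(1, 16)             # contracting path
        self.down = nn.MaxPool2d(2)
        self.mid = double_conv(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = double_conv(32, 16)            # expanding path
        self.out = nn.Conv2d(16, 1, 1)            # per-pixel logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Input: (N, 1, H, W) grayscale image with H and W even.
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        # Skip connection: concatenate encoder features with upsampled ones.
        d = self.dec(torch.cat([u, e], dim=1))
        return torch.sigmoid(self.out(d))         # segmentation map in [0, 1]
```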
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Mahmood in view of Heindl by incorporating the step of segmenting the mammography images using a U-net algorithm, as taught by Juluru, to arrive at an invention that efficiently and accurately performs segmentation of biomedical images, such as mammography images (Juluru, Paragraph [0033]). One of ordinary skill in the art would have been motivated to combine the references because they are each in the field of using a neural network model to identify abnormalities in mammography images to assess risk of developing breast cancer (Juluru, Paragraph [0007]), (Mahmood, “Conclusion and future work”), (Heindl, Paragraph [0015]).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claims 9 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Nabavi et al. (U.S. Patent Pub. No. 2024/0428577 A1, hereafter referred to as Nabavi) in view of Heindl et al. (U.S. Patent Pub. No. 2020/0167928 A1, hereafter referred to as Heindl).
Regarding Claim 9, Nabavi teaches a method for treating a breast cancer via determining a breast lesion in a subject (Abstract, Figs. 21 and 14, Paragraphs [0053], [0100], Nabavi teaches a method for identifying the abnormality wherein identifying the abnormality influences treatment of the abnormality. The treatment may be applied or adjusted based on the identified abnormality.), comprising: (a) obtaining a mammographic image of the breast from the subject (Paragraph [0061], Nabavi teaches receiving current and previous mammogram images from a patient. Under the broadest reasonable interpretation, the Examiner interprets the claim language “a” as “one or more” since the claim is silent as to the specific number of mammographic images.), in which the mammographic image comprises an attribute of the breast lesion selected from the group consisting of location, margin, calcification, lump, mass, shape, size, status of the breast lesion, and a combination thereof (Paragraphs [0080], [0103], [0079], [0094-95], Fig. 11A, Nabavi discloses mammographic images from current and previous years in Fig. 11A. The mammographic images contain location, margins, calcifications, mass, lump, shape (round, oval, irregular, lobulated, etc.), size (tumor area ratio), and status (benign/cancer) for the breast lesions. Under the broadest reasonable interpretation, the Examiner interprets the claim language “selected from the group consisting of … and a combination thereof” to require only one or a few of the listed limitations.); (b) producing a processed image via subjecting the mammographic image to image treatments selected from the group consisting of image cropping, image denoising, image flipping, histogram equalization, image padding, and a combination thereof (Paragraph [0082], Fig. 17B, Nabavi teaches, in the data preprocessing step, cropping, applying Contrast Limited Adaptive Histogram Equalization (CLAHE), adding a constant margin of 20 pixels (padding), and flipping the mammogram images. Under the broadest reasonable interpretation, the Examiner interprets the claim language “selected from the group consisting of … and a combination thereof” to require only one or a few of the listed limitations.); (d) segmenting the extracted sub-image of step (c) to produce a segmented sub-image (Paragraph [0123], Figs. 18, 19A, 19B, and 20, Nabavi teaches a UFCN model which provides breast abnormal variation maps (AVMs). The AVM is a binary mask that indicates abnormal regions. A threshold is applied to select the most activated regions in the AVM, indicating cancer regions.); thereby producing a text image exhibiting the attribute for the mammographic image (Paragraphs [0112], [0127], Fig. 16C, Nabavi teaches predicting binary mask abnormal variation maps (AVMs), which indicate abnormal regions. Fig. 16C illustrates a block diagram of the breast abnormality detection module, and further shows an example of a binary mask indicating the detected abnormal region. The Examiner interprets a binary mask as a “text image” because it is an image in which each pixel is either a zero or a one and encodes information about the location and shape of the abnormal regions (attributes/lesions). In addition, the claim is silent as to the meaning of “text image.”); (f) determining the breast lesion of the subject by processing the text image of step (e) within the model established by the method of claim 1 (Paragraphs [0106], [0112], [0145], Fig. 15, Nabavi teaches the unsupervised feature correlation network (UFCN). A breast abnormality map detection module (BAM), which generates binary mask abnormal variation maps (AVMs), is embedded in the decoder stage of the UFCN. Using a threshold, the BAM selects the most activated regions as AVMs, indicating cancer regions. An abnormality detection device may be implemented using the UFCN. The abnormality indication/location device may output the identification and/or location to a controller. The Examiner interprets that the binary mask AVM is processed in order to transmit the identified abnormality to the controller.); and (g) providing an anti-cancer treatment to the subject based on the breast lesion determined in step (f) (Paragraphs [0144-145], Nabavi teaches applying or adjusting treatment to the patient based on the identified abnormality, the treatment comprising at least one of surgery, chemotherapy, hormonal therapy, immunotherapy, or radiation therapy. In one of the disclosed embodiments, a controller may control a radiation therapy apparatus to direct radiation to the determined location at a selected intensity for a selected amount of time for precise treatment to a patient.).
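For illustration only: a minimal sketch of the thresholding step described for Nabavi’s abnormal variation maps, in which the most activated regions of a real-valued map are selected to form a binary mask. The threshold value is hypothetical.

```python
# Hypothetical sketch of thresholding an abnormal variation map (AVM).
import numpy as np

def avm_to_binary_mask(avm: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Keep only the most activated regions of a 0-1 valued AVM."""
    return (avm >= threshold).astype(np.uint8)
```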
Nabavi does not disclose (c) segmenting the processed image of step (b) to produce a segmented image, and/or detecting the attribute on the processed image of step (b), thereby producing an extracted sub-image thereof; and (e) combining the extracted sub-image of step (c) and the segmented sub-image of step (d).
Heindl is in the same field of art of using a neural network model to identify abnormalities in mammography images. Further, Heindl teaches (c) segmenting the processed image of step (b) to produce a segmented image (Paragraphs [0024-25], [0001], Heindl teaches segmentation of anatomical regions in mammograms. The anatomical regions comprise at least a part of a human breast area.), and/or detecting the attribute on the processed image of step (b), thereby producing an extracted sub-image thereof (Paragraph [0029], Heindl teaches detecting and identifying the presence of lesions in mammography images. Under the broadest reasonable interpretation, the Examiner interprets the claim language “and/or” to require only one of the limitations. For the sake of further prosecution, the Examiner interprets “and/or” to be “and” due to the indefiniteness issues that would otherwise arise, as discussed in the above sections. Additionally, the Examiner interprets the claim language “a/an” as “one or more” since the claim is silent as to the specific number of images/sub-images.); and (e) combining the extracted sub-image of step (c) and the segmented sub-image of step (d) (Paragraphs [0046], [0041], [0020], [0011], Heindl teaches a binary mask upscaled to the original size of the input image, which is stored as an overlay. The overlay comprises a segmentation outline showing one or more locations of one or more regions on the original image. The segmentation determines a number of useful characteristic data, such as area, shape, and size, and is more precise than traditional methods. As a result, this segmentation method can be used to more accurately detect a malignant tumor. The segmentation outline is overlaid on the input image patch, thereby “combining” them. Under the broadest reasonable interpretation, the Examiner interprets the claim language “the” as “one or more” since the claim is silent as to the specific number of sub-images.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Nabavi by detecting the lesion in the processed image and combining the extracted sub-images and segmented sub-images by applying an overlay, as taught by Heindl, to detect cancerous tumors more quickly than waiting for a human expert to become available to identify the tumor, so that treatment can begin sooner (Heindl, Paragraph [0015]). One of ordinary skill in the art would have been motivated to combine the references because faster identification of regions of interest and lesions may aid screening and clinical assessment of breast cancer (Heindl, Paragraph [0015]).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
In regards to Claim 16, Nabavi in view of Heindl discloses the method of claim 9, wherein in step (g), the anti-cancer treatment is selected from the group consisting of a surgery, a radiofrequency ablation, a systemic chemotherapy, a transarterial chemoembolization (TACE), an immunotherapy, a targeted drug therapy, a hormone therapy, and a combination thereof (Paragraph [0045], Nabavi teaches applying or adjusting treatment based on the indication. For example, an indication suggesting abnormal tissue would compel medical personnel to perform surgery, chemotherapy, hormonal therapy, immunotherapy, radiation therapy, additional testing, or a combination thereof. Under the broadest reasonable interpretation, the Examiner interprets the claim language “selected from the group consisting of … and a combination thereof” to require only one or a few of the listed limitations.).
In regards to Claim 17, Nabavi in view of Heindl discloses the method of claim 9, wherein the subject is a human (Claim 8, Heindl teaches wherein the medical image data comprises one or more mammograms, the one or more regions comprise an anatomical region, and the anatomical region comprises at least part of a human breast area.).
Claims 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Nabavi et al. (U.S. Patent Pub. No. 2024/0428577 A1, hereafter referred to as Nabavi) in view of Heindl et al. (U.S. Patent Pub. No. 2020/0167928 A1, hereafter referred to as Heindl) in further view of Park et al. (U.S. Patent Pub. No. 2025/0288267 A1, hereafter referred to as Park).
Regarding Claim 10, Nabavi in view of Heindl discloses the method of claim 9.
Nabavi in view of Heindl does not explicitly disclose wherein in step (c), upon being detected, the attribute on the processed images of step (b) is framed to produce a framed image.
Park is in the same field of art of using a neural network to predict one or more locations and probabilities of lesions in mammography images. Further, Park teaches wherein in step (c), upon being detected (Paragraph [0010], Park teaches applying a neural network employing a You Only Look Once X (YOLOX) architecture to predict one or more locations and probabilities of lesions.), the attribute on the processed images of step (b) is framed to produce a framed image (Paragraph [0010], Park teaches generating at least one bounding box prediction to provide a particular prediction of the breast cancer).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Nabavi in view of Heindl by detecting the attribute in the processed mammography image and framing the breast cancer prediction with a bounding box, as taught by Park, to arrive at an invention that provides richer and more precise training signals to the model and leads to increased performance for detecting breast lesions (Park, Paragraph [0035]). One of ordinary skill in the art would have been motivated to combine the references because they are each in the field of using a neural network model to identify lesions in mammography images (Park, Abstract), (Heindl, Paragraph [0029]), (Nabavi, Abstract).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
In regards to Claim 11, Nabavi in view of Heindl in further view of Park discloses the method of claim 10, further comprising mask filtering the framed image and the segmented image of step (c) to eliminate any mistaken attribute detected in step (c) (Paragraphs [0024], [0045], Fig. 3, reference character 205, Heindl teaches removing small areas of the binary mask and the segmentation if the region or lesion is incorrectly identified or not identified. For example, if a segmentation has an area of zeros, surrounded entirely by ones, then the zeros may be set to ones according to a predetermined threshold for the area.).
In regards to Claim 12, Nabavi in view of Heindl in further view of Park discloses the method of claim 11, further comprising cropping the framed image to produce the extracted sub-image of step (c) (Paragraphs [0044], [0046], Park teaches modifying images by cropping them with a cropping window placed in all possible locations. For example, in images of larger breasts, only cropping one window might not cover the whole breast parenchyma. The bounding boxes can be extracted again after augmentation to ensure the tightest fit to the lesion.).
In regards to Claim 13, Nabavi in view of Heindl in further view of Park discloses the method of claim 12, further comprising, after step (d) or step (e), updating the segmented image of step (c) with the aid of the segmented sub-image of step (d) (Paragraph [0046], Heindl teaches an overlay (binary mask) upscaled to the original size of the input image. The overlay contains markings for one or more parts of the original image, such as outlines for different areas of breast tissue or lesions.) and the framed image (Paragraph [0047], Park teaches object detection to predict bounding-box labels, which teaches models exactly where each lesion is located and what it looks like. The Examiner interprets overlaying the segmented sub-image and the framed image to be “updating” the segmented image in light of the instant application’s specification, which states, “the segmented image of step S103 can be updated with the aid of the segmented sub-image and the framed image. Specifically, the segmented sub-image and the framed image are overlaid (Wang et al., Paragraph [0059]).”).
In regards to Claim 14, Nabavi in view of Heindl in further view of Park disclose the method of claim 9, wherein in step (c), the attribute on the processed image of step (b) is detected by use of an object detection algorithm (Paragraph [0009], Park teaches applying a neural network employing a You Only Look Once X (YOLOX) architecture to predict one or more locations of lesions.).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Nabavi et al. (U.S. Patent Pub. No. 2024/0428577 A1, hereafter referred to as Nabavi) in view of Heindl et al. (U.S. Patent Pub. No. 2020/0167928 A1, hereafter referred to as Heindl) in further view of Juluru et al. (U.S. Patent Pub. No. 2025/0272831 A1, hereafter referred to as Juluru).
Regarding Claim 15, Nabavi in view of Heindl discloses the method of claim 9.
Nabavi in view of Heindl does not explicitly disclose wherein in step (c), the processed image is segmented by performing a U-net architecture.
Juluru is in the same field of art of identifying breast abnormalities in mammography images using a neural network. Further, Juluru teaches wherein in step (c), the processed image is segmented by performing a U-net architecture (Fig. 9, Paragraph [0109], Juluru teaches a U-net architecture which takes a pre-processed breast image as input and outputs a segmentation map.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Nabavi in view of Heindl by segmenting the processed mammogram image using a U-net architecture, as taught by Juluru, to efficiently and accurately perform segmentation of biomedical images, such as a mammography image (Juluru, Paragraph [0033]). One of ordinary skill in the art would have been motivated to combine the references because they are each in the field of detecting breast abnormalities using a neural network model (Juluru, Paragraph [0034]), (Nabavi, Paragraph [0008]), (Heindl, Paragraph [0013]).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYDNEY L BLACKSTEN whose telephone number is (571)272-7651. The examiner can normally be reached 8:30am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Oneal Mistry, can be reached at 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SYDNEY L BLACKSTEN/Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674