DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4, 6, 8-9, 11 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kalafut et al. (11,545,266) in view of Kwon (12,198,333) and Simonyan (12,257,063).
Regarding claims 1, 6 and 11, Kalafut discloses:
Splitting a T2 weighted image into a first group of two-dimensional images (note col. 5 lines 2-6 and block 112, DWI data; the weighted parameter described can be T2 weighted and two-dimensional);
Inputting the first group of two-dimensional images into a mask R-CNN model (col. 4 lines 50-55, machine learning receives diffusion weighted images) to obtain a first group of two-dimensional parenchymal brain images (note block 104 and col. 5 line 60 - col. 6 line 20, machine learning component outputs data generated from a brain anatomical region, classifies and analyzes DWI data);
Using the first group of two-dimensional parenchymal brain images to form a T2 weighted parenchymal brain image (note col. 6 lines 1-6, anatomical region is a brain anatomical region).
Kalafut does not disclose performing a pre-process on the T2 weighted brain image to obtain a pre-processed T2 weighted brain image. Kwon discloses performing a pre-process on a T2 weighted brain image to obtain a pre-processed T2 weighted brain image (note fig. 2(b), image pre-processing, and col. 4 lines 33-35 and col. 5 line 58). Kalafut and Kwon are combinable because they are from the same field of endeavor. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include performing a pre-process on the T2 weighted brain image to obtain a pre-processed T2 weighted brain image in the system of Kalafut, as evidenced by Kwon. The suggestion/motivation for doing so is that it provides image analysis that leads to improvement of the quality of the images (note col. 9 lines 28-35). It would have been obvious to combine Kwon with Kalafut to obtain the invention as specified by claims 1, 6 and 11.
Kalafut and Kwon do not clearly disclose using a three-dimensional convolutional neural network model to segment and quantize a brain edema area in the pre-processed T2 weighted parenchymal brain image. Simonyan discloses using a three-dimensional convolutional neural network model to segment and quantize a brain area in a pre-processed T2 weighted parenchymal brain image (note col. 3 lines 5-13, citing use of a 3D convolutional neural network). Kalafut, Kwon and Simonyan are combinable because they are from the same field of endeavor. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a 3D convolutional neural network in the system of Kalafut and Kwon, as evidenced by Simonyan. The suggestion/motivation for doing so is that it provides clinical parameter outputs (note col. 1 lines 59-61). It would have been obvious to combine Simonyan with Kalafut and Kwon to obtain the invention as specified by claims 1, 6 and 11.
Regarding claims 3, 8 and 13, Kalafut, Kwon and Simonyan disclose:
Wherein the pre-process comprises an image resampling (note col. 16 line 59 - col. 17 line 17, image pre-process based on the CLAHE method).
Regarding claims 4, 9 and 14, Kalafut, Kwon and Simonyan disclose:
Wherein the pre-process comprises an image normalization (note col. 16 line 59 - col. 17 line 17, image pre-process based on the CLAHE method).
Allowable Subject Matter
Claims 2, 5, 7, 10, 12 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter for dependent claims 2, 5, 7, 10, 12 and 15.
Regarding dependent claims 2, 7 and 12, prior art could not be found for the features of: splitting the co-registered T1C weighted image into a second group of two-dimensional images; inputting the second group of two-dimensional images into the mask R-CNN model to obtain a second group of two-dimensional parenchymal brain images; using the second group of two-dimensional parenchymal brain images to form a T1C weighted parenchymal brain image; performing the pre-process on the T1C weighted parenchymal brain image to obtain a pre-processed T1C weighted parenchymal brain image; and using another three-dimensional convolutional neural network model to segment and quantize a metastatic brain tumor area in the pre-processed T1C weighted parenchymal brain image. These features in combination with the other claimed features could not be found in the prior art.
Regarding dependent claims 5, 10 and 15, prior art could not be found wherein the three-dimensional convolutional neural network model has two convolution paths, one of the two convolution paths extracts each region in training data so as to perform a feature extraction on each region, and another of the two convolution paths selects each corresponding expanded area based on a center point of each area and performs an under-sampling on each corresponding expanded area for the feature extraction. These features in combination with the other claimed features could not be found in the prior art.
Related Prior Art
Reyes et al. (11,526,994): using two-dimensional parenchymal brain images to form a T2 weighted parenchymal brain image;
Bhushan et al. (11,776,173): inputting the first group of two-dimensional images into a mask R-CNN (mask region-based convolutional neural network) model to obtain a first group of two-dimensional parenchymal brain images (note col. 10 lines 52-56, calibration data input to mask component data, deep learning network, and col. 10 lines 52-62, obtaining anatomical landmarks of a brain region of interest);
Ida et al. (10,043,293): splitting a T2 weighted image (note fig. 33 block 343d and col. 34 lines 50-53, block pair detector 343d is a processing unit that divides the T2-weighted image) into a first group of two-dimensional images (note col. 4 lines 3-12, the detector 13 is a two-dimensional array type detector);
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GREGORY M DESIRE whose telephone number is (571)272-7449. The examiner can normally be reached Monday-Friday 6:30am-3:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
G.D.
January 24, 2026
/GREGORY M DESIRE/Primary Examiner, Art Unit 2676