Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicants
This communication is in response to the Application filed on 02/28/2024.
Claims 1-12, 14 and 16 are pending. Claims 13 and 15 have been cancelled.
Claim Objections
Claim 16 is objected to because of the following informalities:
In claim 16, line 9, “the denoised” should be changed to “a denoised”.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7 and 8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 7 recites the limitation "the noise" in lines 2-3. There is insufficient antecedent basis for this limitation in the claim. Clarification/explanation is required.
Claim 8 recites the limitation "the noise" in line 5. There is insufficient antecedent basis for this limitation in the claim. Clarification/explanation is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 4-7, 11-12, 14, and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by IDA et al. (U.S. Publication No. 2019/0378270) (hereafter, "IDA").
Regarding claim 1, IDA teaches a computer-implemented method for denoising a medical image, the computer-implemented method comprising ([0210] the noise reduction program causes a computer to input the noise correlation map and the medical image or the intermediate image to the learned model functioned to generate the denoise image): obtaining the medical image formed of a plurality of pixels ([0067] the noise reduction target image 101 corresponds to a medical image; [0068] the medical image in the present application example corresponds to an MR image (hereinafter referred to as a pre-denoise image) obtained before the denoise process by the noise reduction function 1417 in FIG. 1; [0102] The MR image is generated based on the MR data collected by the main scan; [0105] the processing circuitry 141 inputs a plurality of pixel values in the MR image 101 generated by step Sa2; [0039] a plurality of pixel values in the noise reduction target image 101); obtaining a noise map containing an estimated measure of a statistical parameter for each pixel of the medical image ([0036] When the amount of noise in the noise reduction target image 101 is known, each of a plurality of pixel values in the noise correlation map corresponds to local dispersion or standard deviation of noise; [0055] The processing circuitry 141 generates the noise correlation map 102 by performing the second learned model, to which the noise reduction target image 101 is input, by the correlation data generation function 1419; [0035] a reference signal 102 such as a noise correlation map correlated with a spatial distribution of noise amount in the processing target signal 101); modifying the medical image using the noise map to produce a modified medical image; and ([0039] The processing circuitry 141 generates the combination data 104 by combining a plurality of pixel values in the noise reduction target image 101 and a plurality of pixel values in the noise correlation map 102 in the input layer 
103 by the noise reduction function 1417. Specifically, the processing circuitry 141 allocates a plurality of pixel values (y1, y2, . . . , yN) in the noise reduction target image to a first input range 104a in the input vector 104. In addition, the processing circuitry 141 allocates a plurality of pixel values (x1, x2, . . . , xN) in the noise correlation map 102 to a second input range 104b in the input vector 104; [0042] the processing circuitry 141 performs a convolution process on the noise reduction target image 101 and the noise correlation map 102 by using a filter having a plurality of learned weighting coefficients by the noise reduction function 1417. The processing circuitry 141 generates data to be input from the input layer 103 to the first intermediate layer 204 by the convolution process; [0046] Note that the weighting coefficients in each of the filters used in the CNN 105 are learned by a method called an error back propagation method by using many learning data before implementing the noise reduction function 1417) processing the modified medical image, using a machine-learning method, to produce a denoised medical image ([0039] The processing circuitry 141 outputs the input vector 104 to the CNN 105; [0040] The processing circuitry 141 holds the signal 106 output from the CNN 105 by the noise reduction function 1417 as the vector format 107a indicating pixel values (z1, z2, . . . , zN) of the denoise image 108 in the output layer 107. The processing circuitry 141 generates the denoise image 108 by rearranging a plurality of components in the vector format 107a as the pixels; [0037] The CNN 105 recursively repeats the conversion of the combination data 104, that is, performs the forward propagation process by using the combination data 104 as the input and outputs the converted signal 106 to the output layer 107.
Using the converted signal 106, the output layer 107 outputs a signal (hereinafter referred to as a denoise image) 108 in which the noise of the noise reduction target image 101 is reduced; [0034] The learned model 105 is a learned machine learning model).
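For illustration only (not part of the record), the input-layer combination described in IDA's [0039] — flattening the target-image pixels (y1 . . . yN) and the noise-correlation-map pixels (x1 . . . xN) into a single input vector for the CNN — can be sketched as follows. The helper name `make_combination_vector` and the toy arrays are the undersigned's assumptions, not IDA's disclosure.

```python
import numpy as np

def make_combination_vector(target_image: np.ndarray,
                            noise_map: np.ndarray) -> np.ndarray:
    """Concatenate image pixels (y1..yN) and noise-map pixels (x1..xN)
    into one input vector, mirroring the combination data of [0039]."""
    assert target_image.shape == noise_map.shape
    return np.concatenate([target_image.ravel(), noise_map.ravel()])

image = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "MR image"
noise_map = np.full((4, 4), 0.1)                   # per-pixel noise estimate
vec = make_combination_vector(image, noise_map)    # length-2N input vector
```

The first N entries correspond to IDA's first input range 104a and the remaining N to the second input range 104b.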
Regarding claim 4, IDA teaches all the limitations of claim 1 above. IDA teaches further comprising inputting the modified medical image to the machine-learning method and ([0037] The input layer 103 outputs, to the CNN 105, data (hereinafter referred to as combination data) 104 obtained by combining the noise reduction target image 101 and the noise correlation map 102) receiving, as output from the machine-learning method, the denoised medical image ([0040] The processing circuitry 141 holds the signal 106 output from the CNN 105 by the noise reduction function 1417 as the vector format 107a indicating pixel values (z1, z2, . . . , zN) of the denoise image 108 in the output layer 107; [0037] The CNN 105 recursively repeats the conversion of the combination data 104, that is, performs the forward propagation process by using the combination data 104 as the input and outputs the converted signal 106 to the output layer 107. Using the converted signal 106, the output layer 107 outputs a signal (hereinafter referred to as a denoise image) 108 in which the noise of the noise reduction target image 101 is reduced).
Regarding claim 5, IDA teaches all the limitations of claim 1 above. IDA teaches wherein the machine-learning method comprises a neural network ([0034] The learned model 105 is a learned machine learning model of a forward propagation network learned from many learning data. The learned model 105 is, for example, a deep neural network (hereinafter referred to as DNN) ... explanation will be given taking as an example a convolution neural network (hereinafter referred to as CNN) as a DNN).
Regarding claim 6, IDA teaches all the limitations of claim 1 above. IDA teaches wherein the noise map provides an estimated amount of standard deviation or variance of noise for each pixel of the medical image ([0036] When the amount of noise in the noise reduction target image 101 is known, each of a plurality of pixel values in the noise correlation map corresponds to local dispersion or standard deviation of noise; [0047] The noise image corresponds to an image showing Gaussian noise with the pixel value of the noise correlation map as the standard deviation; [0035] sets a processing target signal 101 such as a noise reduction target image and a reference signal 102 such as a noise correlation map correlated with a spatial distribution of noise amount in the processing target signal 101).
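For illustration only, a noise map of the kind described in IDA's [0036] — a per-pixel local standard deviation of noise — could plausibly be estimated with a sliding-window statistic. This sketch is one assumed construction, not the reference's actual method; the function name and window choice are hypothetical.

```python
import numpy as np

def local_noise_std_map(image: np.ndarray, radius: int = 1) -> np.ndarray:
    """Per-pixel standard deviation over a (2*radius+1)^2 neighborhood."""
    padded = np.pad(image, radius, mode="reflect")
    h, w = image.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = window.std()
    return out

flat = np.ones((5, 5))               # noiseless region
std_map = local_noise_std_map(flat)  # estimated std is zero everywhere
```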
Regarding claim 7, IDA teaches all the limitations of claim 1 above. IDA teaches wherein the noise map provides, for each pixel of the medical image, an estimated correlation between the noise of said pixel and the noise of one or more neighboring pixels ([0036] When the amount of noise in the noise reduction target image 101 is known, each of a plurality of pixel values in the noise correlation map corresponds to local dispersion; [0039] the processing circuitry 141 allocates a plurality of pixel values (y1, y2, . . . , yN) in the noise reduction target image to a first input range 104a in the input vector 104. In addition, the processing circuitry 141 allocates a plurality of pixel values (x1, x2, . . . , xN) in the noise correlation map 102 to a second input range 104b in the input vector 104; [0035] sets a processing target signal 101 such as a noise reduction target image and a reference signal 102 such as a noise correlation map correlated with a spatial distribution of noise amount in the processing target signal 101).
Regarding claim 11, IDA teaches all the limitations of claim 1 above. IDA teaches wherein: the medical image is a medical image that has been reconstructed from raw data using a first reconstruction algorithm ([0165] The processing circuity 44 performs the reconstruction processing on the projection data after pre-processing and before denoise process by using a filter corrected back projection method, a successive approximation reconstruction method, or the like by the reconstruction processing function 443, and generates data of the pre-denoise CT image); and the machine-learning method has been trained using a training dataset that includes one or more training images that have been reconstructed from raw data using a second, different reconstruction algorithm ([0131] The processing circuitry 141 generates the density map by performing a Fourier transform on data in the k-space by the reconstruction function 1413. The density map is a map that corresponds to the density of data in the k-space; [0132] the density map … is used as the noise correlation map 102; [0133] The channel to which each of the plurality of noise correlation maps is input is set when learning the machine learning model).
Regarding claim 12, IDA teaches all the limitations of claim 1 above. IDA teaches wherein the medical image is a computed tomography medical image ([0029] FIG. 1 is a block diagram illustrating a configuration example of processing circuitry 141 mounted on a medical image diagnostic apparatus according to a present embodiment. The medical image diagnostic apparatus is, for example, a medical magnetic resonance imaging (hereinafter referred to as MRI) apparatus and a medical x-ray computed tomography (hereinafter referred to as CT) apparatus).
With respect to claim 14, arguments analogous to those presented for claim 1 are applicable.
With respect to claim 16, arguments analogous to those presented for claim 1 are applicable.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over IDA et al. (U.S. Publication No. 2019/0378270) (hereafter, "IDA") in view of HERBST et al. (U.S. Publication No. 2022/0096034) (hereafter, "HERBST").
Regarding claim 2, IDA teaches all the limitations of claim 1 above. IDA does not expressly teach further comprising dividing the medical image by the noise map.
However, HERBST teaches further comprising dividing the medical image by the noise map ([0086] The entire image in this case is divided into a noise component and a component of image-relevant information; [0119] the noise component in its entirety should be divided up so that there are no non-allocated noise components that lie at frequencies between two noise components used for the method; [0092] the statistics in the noise components (the “noise frequency bands”) can be described via the standard deviation σ and the average value).
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of IDA to incorporate the step/system of dividing an image into noise and signal components based on a noise map defined by the noise component's standard deviation, as taught by HERBST.
The suggestion/motivation for doing so would have been to improve the image noise reduction ([0086] The inventive method serves in this case to improve a given noise reduction method; [0111] The conventional image information in this case represents the denoised image in accordance with the image noise reduction algorithm used and can be used together with the denoised image in accordance with the invention to improve the image noise reduction algorithm). Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine IDA and HERBST to obtain the invention as specified in claim 2.
Regarding claim 3, the combination of IDA and HERBST teaches all the limitations of claim 2 above. IDA teaches further comprising: processing the modified medical image using the machine-learning method to generate a predicted noise image, the predicted noise image representing a predicted amount of noise in each pixel of the modified medical image ([0037] The CNN 105 recursively repeats the conversion of the combination data 104, that is, performs the forward propagation process by using the combination data 104 as the input and outputs the converted signal 106 to the output layer 107. Using the converted signal 106, the output layer 107 outputs a signal (hereinafter referred to as a denoise image) 108 in which the noise of the noise reduction target image 101 is reduced; [0040] The processing circuitry 141 holds the signal 106 output from the CNN 105 by the noise reduction function 1417); multiplying the modified medical image by the noise map to produce a calibrated predicted noise image ([0040] The processing circuitry 141 holds the signal 106 output from the CNN 105 by the noise reduction function 1417 as the vector format 107a indicating pixel values (z1, z2, . . . , zN) of the denoise image 108 in the output layer 107; [0047] the processing circuitry 141 may generate the denoise image by outputting the noise image ... the weighting coefficients are learned so that the output image at the time of inputting the noise-containing image and the noise correlation map approaches an image showing noise (hereinafter referred to as a noise image). 
The noise image corresponds to an image showing Gaussian noise with the pixel value of the noise correlation map; [0048]; [0114]); and subtracting the calibrated predicted noise image or a scaled version of the calibrated predicted noise image from the medical image to produce the denoised medical image ([0047] the processing circuitry 141 may generate the denoise image by outputting the noise image, instead of outputting the denoise image, by the noise reduction function 1417, and then subtracting the noise image from the noise reduction target image 101 ... the weighting coefficients are learned so that the output image at the time of inputting the noise-containing image and the noise correlation map approaches an image showing noise (hereinafter referred to as a noise image)).
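For illustration only, the normalize-predict-rescale-subtract pattern discussed in the claim 2 and claim 3 analyses above — divide the image by the noise map, predict a noise image, calibrate the prediction with the noise map, and subtract it (optionally scaled) from the original — can be sketched as below. `predict_noise` is a deliberately trivial placeholder for the learned model; it is not IDA's or HERBST's network, and all names are the undersigned's assumptions.

```python
import numpy as np

def predict_noise(normalized: np.ndarray) -> np.ndarray:
    """Placeholder for the learned model: treat deviation from the mean as noise."""
    return normalized - normalized.mean()

def denoise(image: np.ndarray, noise_map: np.ndarray,
            scale: float = 1.0) -> np.ndarray:
    modified = image / noise_map           # divide the image by the noise map
    predicted = predict_noise(modified)    # predicted noise image
    calibrated = predicted * noise_map     # calibrated predicted noise image
    return image - scale * calibrated      # subtract (optionally scaled)

uniform = np.full((3, 3), 5.0)
result = denoise(uniform, np.ones((3, 3)))  # no deviation, so image unchanged
```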
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over IDA et al. (U.S. Publication No. 2019/0378270) (hereafter, "IDA") in view of Litwiller et al. (U.S. Publication No. 2021/0272240) (hereafter, "Litwiller").
Regarding claim 8, IDA teaches all the limitations of claim 1 above. IDA teaches wherein: the medical image is one of a plurality of medical images ([0069] as the noise correlation map 102 corresponding to each of a plurality of time-series images generated by executing a main scan over a number of times; [0099] the imaging control circuitry 131 executes the prescan with respect to the subject P ... Based on the prescan-MR data regarding each of the plurality of receive coils 127 used for parallel imaging; [0068] the prescan can be executed after the main scan; [0044] Each of the intermediate layers in the CNN 105 may have a plurality of channels corresponding to the number of images) … and the noise map provides, for each pixel of the medical image, an estimated measure of a covariance or correlation between the noise of that pixel and the noise of a corresponding pixel of another of the plurality of medical images ([0069] The subtraction image to be used as the noise correlation map 102 may also be generated by subtracting each of the time-series images from an MR image of a reference time in the plurality of time-series images. The subtraction image to be used as the noise correlation map 102 may also be generated by differentiating the two adjacent MR images in the plurality of time-series images; [0068] The noise correlation map 102 is not limited to the sensitivity map or the g map, and may be any image as long as it is an image that correlates with the noise amount in the MR image).
IDA does not expressly teach … that represent a same scene, produced by a multi-channel imaging process.
However, Litwiller teaches that represent a same scene ([0028] training data pairs comprise corresponding pairs of noisy and pristine medical images of a same anatomical region), produced by a multi-channel imaging process ([0028] training module 112 includes instructions for generating training data pairs from medical image data 114; [0029] Medical image data 114 includes for example, MR images acquired using an MRI system, ultrasound images acquired by an ultrasound system, etc.).
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of IDA to incorporate the step/system of using medical images, representing a same anatomical region, acquired by a multi-channel imaging process, as taught by Litwiller.
The suggestion/motivation for doing so would have been to improve the diagnostic quality of the image by reducing noise ([0004] By mapping the medical image comprising colored noise to a de-noised medical image using the trained CNN, colored noise in the image may be significantly reduced, thereby increasing the clarity and diagnostic quality of the image). Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine IDA and Litwiller to obtain the invention as specified in claim 8.
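For illustration only, the kind of per-pixel covariance or correlation between the noise of corresponding pixels across a plurality of images recited in claim 8 could be estimated from repeated co-registered acquisitions. This is an assumed estimator, not a teaching of either reference; the function name and shapes are hypothetical.

```python
import numpy as np

def pixelwise_correlation(stack_a: np.ndarray,
                          stack_b: np.ndarray) -> np.ndarray:
    """Per-pixel Pearson correlation between two co-registered image series
    of shape (T, H, W), where T indexes repeated acquisitions."""
    a = stack_a - stack_a.mean(axis=0)
    b = stack_b - stack_b.mean(axis=0)
    cov = (a * b).mean(axis=0)                    # per-pixel covariance
    return cov / (stack_a.std(axis=0) * stack_b.std(axis=0))

rng = np.random.default_rng(0)
series = rng.normal(size=(8, 4, 4))               # 8 repeats of a 4x4 image
corr = pixelwise_correlation(series, series)      # self-correlation is 1
```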
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over IDA et al. (U.S. Publication No. 2019/0378270) (hereafter, "IDA") in view of Mentl et al. (U.S. Publication No. 2018/0240219) (hereafter, "Mentl").
Regarding claim 9, IDA teaches all the limitations of claim 1 above. IDA teaches further comprising: obtaining a first medical image ([0067] the noise reduction target image 101 corresponds to a medical image).
IDA does not expressly teach processing the first medical image using a frequency filter to obtain a first filtered medical image having values within a predetermined frequency range; and setting the first filtered medical image as the medical image.
However, Mentl teaches processing the first medical image ([0002] computed tomography (CT) imaging ... CT imaging reconstructs medical images from multiple X-ray projections; [0035] input 101 is a noisy image In) using a frequency filter to obtain a first filtered medical image having values within a predetermined frequency range; and setting the first filtered medical image as the medical image ([0043] LPF 221 passes low spatial frequency image data denoised at the original scale; [0045]).
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of IDA to incorporate the step/system of obtaining a filtered medical image having a low frequency range by using the LPF taught by Mentl.
The suggestion/motivation for doing so would have been to improve the efficiency for reducing noises and to enhance the 3D CT image data by removing noises ([0002] When monitoring an interventional surgery, a high-quality reconstruction may be required, while providing an efficient approach to reduce varying levels of noise in near real-time; [0031] deep-learning-based networks are provided to solve the denoising problem, removing the noise to enhance the 3D CT image data). Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine IDA and Mentl to obtain the invention as specified in claim 9.
Regarding claim 10, the combination of IDA and Mentl teaches all the limitations of claim 9 above. Mentl teaches further comprising: processing the first medical image ([0002] computed tomography (CT) imaging ... CT imaging reconstructs medical images from multiple X-ray projections; [0035] input 101 is a noisy image In) to obtain a second filtered medical image having values within a second, different predetermined frequency range ([0043] HPF 211 passes high spatial frequency image data denoised at the original scale); and combining the second filtered medical image and the denoised medical image to produce a denoised first medical image ([0043] To generate the reconstructed output 203 (i.e., the denoised image Ir), the upscaled outputs of each scale are combined by summation block 223 ... The summation block 223 is a weighted sum and is trainable by network 200. Prior to summation, the outputs of each scale are passed through additional trainable high pass and low pass filters HPF 211 and LPF 221).
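For illustration only, the claim 9/10 pattern mapped above — split an image into low- and high-spatial-frequency components, denoise one band, then recombine — can be sketched with an FFT-based band split. The placeholder denoising step and all names are the undersigned's assumptions, not Mentl's filter implementation.

```python
import numpy as np

def split_bands(image: np.ndarray, cutoff: float):
    """Split an image into low- and high-spatial-frequency components.
    cutoff is a normalized frequency in [0, 0.5]."""
    spectrum = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    low_mask = (np.abs(fy) <= cutoff) & (np.abs(fx) <= cutoff)
    low = np.fft.ifft2(spectrum * low_mask).real
    return low, image - low                   # low band, high band

rng = np.random.default_rng(1)
img = rng.normal(size=(8, 8))
low, high = split_bands(img, 0.2)
denoised_low = low                            # stand-in for the denoising step
recombined = denoised_low + high              # combine bands as in claim 10
```

By construction the two bands sum back to the original image, so only the denoising of the low band changes the output.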
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL C. CHANG whose telephone number is (571)270-1277. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan S. Park can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL C CHANG/ Examiner, Art Unit 2669
/CHAN S PARK/ Supervisory Patent Examiner, Art Unit 2669