DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
In independent claims 1 and 16, the amended limitation “… super-resolution image sharpness neural network without upsampling…” is indefinite because, by definition, super-resolution is an advanced form of upsampling. Moreover, the filed specification discloses “without an upsampling layer,” which is not the same as “without upsampling” as claimed.
For examination purposes, the claims are interpreted in light of the filed specification.
Response to Arguments
Applicant's arguments filed 09/12/2025 have been fully considered but they are not persuasive.
Regarding amended claim 1, applicant argued that Li does not mention whether or not an upsampling layer or upsampling is included in a specific target processing model, or discuss how the matrix size of the high resolution image output by the target processing model compares to the matrix size of the image input into the target processing model. Applicant further argued that Xiong teaches a super-resolution network without “upsampling convolution layers,” but not “a super-resolution image sharpness neural network without upsampling.”
However, the examiner respectfully disagrees. First, as explained in the 112(b) rejection above, the amended “without upsampling” is indefinite, so the claims are interpreted in light of the filed specification. Second, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., how the matrix size of the high resolution image output by the target processing model compares to the matrix size of the image input into the target processing model) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Also, one of ordinary skill in the art would know that upsampling directly increases the size of a matrix. Third, in response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). As the primary reference, Li teaches generating a super-resolution image through a neural network from an input MR image, which corresponds to the backbone application of the instant application. Li's silence regarding an upsampling layer shows that Li does not contradict or teach away from the argued claim; the question left unanswered by Li is whether super-resolution can be performed without an upsampling layer, as the argued claim recites. The secondary reference, Xiong, teaches “NoUCSR: Efficient Super-Resolution Network without Upsampling Convolution,” an efficient neural network model without upsampling convolution layers.
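On the matrix-size point, the following is a minimal illustrative sketch (not drawn from any cited reference) showing that upsampling, here simple nearest-neighbor upsampling by a factor of two, necessarily enlarges the image matrix:

```python
import numpy as np

# A small 2x2 "image" matrix.
img = np.array([[1, 2],
                [3, 4]])

# Nearest-neighbor upsampling by a factor of 2: repeat each row and
# each column, which doubles the matrix size in both dimensions.
up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

print(img.shape)  # (2, 2)
print(up.shape)   # (4, 4)
```

A network that contains no such enlarging operation outputs a matrix of the same size as its input, which is the distinction drawn in the filed specification.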
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to recognize that super-resolution in Li can be achieved through a neural network without upsampling layers in view of Xiong.
Thus, rejections are proper and maintained.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US2022/0092739) in view of Xiong et al. (“NoUCSR: Efficient Super-Resolution Network without Upsampling Convolution”) and Cordani et al. (US2023/0019874).
To claim 1, Li teaches a method for generating a magnetic resonance (MR) image of a subject, the method comprising:
receiving an MR image of the subject reconstructed from undersampled MR data of the subject (paragraphs 0079, 0143);
providing the MR image of the subject to a super-resolution image sharpness neural network without upsampling (per the filed specification, this is interpreted as without an upsampling layer; since an upsampling layer is not mentioned in Li, such a negative limitation would be an obvious implementation), the image sharpness neural network trained using a set of loss functions (paragraphs 0042, 0082, 0095, 0136, generating an image with a relatively high resolution level); and
generating an enhanced resolution MR image of the subject with increased sharpness based on the MR image of the subject using the image sharpness neural network (paragraph 0098).
But, Li does not expressly disclose the set of loss functions including an L1 Fast Fourier Transform (FFT) loss function.
In furthering said obviousness, Xiong teaches a super-resolution network without upsampling convolution layers (page 3378 abstract; page 3379 left column), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Li, in order to implement super-resolution.
Cordani teaches training a neural network with an L1 Fast Fourier Transform (FFT) loss function (abstract, paragraphs 0022, 0056, 0062, 0067-0068).
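For illustration only, the following is a generic sketch of such a loss, an L1 distance computed in the Fourier domain, and is not a representation of Cordani's particular implementation:

```python
import numpy as np

def l1_fft_loss(pred, target):
    """L1 distance between the 2D FFT magnitudes of two images."""
    pred_fft = np.fft.fft2(pred)
    target_fft = np.fft.fft2(target)
    return np.mean(np.abs(np.abs(pred_fft) - np.abs(target_fft)))

# Identical images give zero loss; differing images give a positive loss.
a = np.ones((4, 4))
b = np.zeros((4, 4))
print(l1_fft_loss(a, a))       # 0.0
print(l1_fft_loss(a, b) > 0)   # True
```

Such a loss penalizes frequency-domain differences between a predicted image and its reference, which is one way of encouraging a network to recover high-frequency (sharpness) content.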
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Cordani into the method of Li and Xiong, in order to improve training by design preference.
To claim 16, Li, Xiong and Cordani teach a system for generating a magnetic resonance (MR) image of a subject (as explained in response to claim 1 above).
To claim 2, Li, Xiong and Cordani teach claim 1.
Li, Xiong and Cordani teach wherein the MR image of the subject is reconstructed from undersampled MR data from a central region of k-space (Li, paragraph 0168).
To claim 3, Li, Xiong and Cordani teach claim 1.
Li, Xiong and Cordani teach wherein the image sharpness neural network is a deep learning neural network comprising a generator network comprising two-dimensional (2D) convolution layers and residual dense blocks (Li, paragraphs 0079, 0095).
To claim 15, Li, Xiong and Cordani teach claim 1.
Li, Xiong and Cordani teach further comprising displaying the enhanced resolution MR image of the subject with increased sharpness (Li, paragraph 0055).
To claim 17, Li, Xiong and Cordani teach claim 16.
Li, Xiong and Cordani teach further comprising a display coupled to the image sharpness neural network and configured to display the enhanced resolution MR image of the subject with increased sharpness (Li, paragraph 0055).
To claim 18, Li, Xiong and Cordani teach claim 16.
Li, Xiong and Cordani teach wherein the image sharpness neural network is a deep learning neural network comprising a generator network comprising two-dimensional (2D) convolution layers and residual dense blocks (Li, paragraphs 0079, 0095).
Claims 4-9 and 19-24 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US2022/0092739) in view of Xiong et al. (“NoUCSR: Efficient Super-Resolution Network without Upsampling Convolution”), Cordani et al. (US2023/0019874) and Halupka et al. (US2020/0286208).
To claims 4 and 19, Li, Xiong and Cordani teach claims 3 and 18.
But, Li, Xiong and Cordani do not expressly disclose wherein the generator network includes four 2D convolution layers and twenty-three residual dense blocks.
Halupka teaches a generator network including four 2D convolution layers and twenty-three residual dense blocks (paragraphs 0032, 0035, 0045, 0093), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Li, Xiong and Cordani, in order to enhance the input image.
To claims 5 and 20, Li, Xiong, Cordani and Halupka teach claims 4 and 19.
Li, Xiong, Cordani and Halupka teach wherein at least three of the four 2D convolution layers includes a plurality of filters (Halupka, paragraph 0035).
To claims 6 and 21, Li, Xiong, Cordani and Halupka teach claims 5 and 20.
Li, Xiong, Cordani and Halupka teach wherein the plurality of filters for each of the at least three 2D convolution layers includes sixty four filters (Halupka, paragraph 0035).
To claims 7 and 22, Li, Xiong, Cordani and Halupka teach claims 4 and 19.
Li, Xiong, Cordani and Halupka teach wherein at least one of the four 2D convolution layers includes one filter (Li, paragraphs 0095, 0134).
To claims 8 and 23, Li, Xiong and Cordani teach claims 3 and 18.
But, Li, Xiong and Cordani do not expressly disclose wherein the image sharpness neural network further comprises a discriminator network comprising a 2D convolution layer and six discriminator blocks.
Halupka teaches an image sharpness neural network further comprising a discriminator network comprising a 2D convolution layer and six discriminator blocks (paragraphs 0045-0046), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Li, Xiong and Cordani, in order to provide a specific structural implementation.
To claims 9 and 24, Li, Xiong, Cordani and Halupka teach claims 8 and 23.
Li, Xiong, Cordani and Halupka teach wherein the discriminator network is a fully convolutional neural network (Li, paragraph 0095).
Claims 10-14 and 25-29 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US2022/0092739) in view of Xiong et al. (“NoUCSR: Efficient Super-Resolution Network without Upsampling Convolution”), Cordani et al. (US2023/0019874) and Langoju et al. (US2023/0052595).
To claims 10 and 25, Li, Xiong and Cordani teach claims 1 and 16.
But, Li, Xiong and Cordani do not expressly disclose wherein the set of loss functions further includes a pixel loss function, a perceptual loss function, and a relativistic average generative adversarial network (GAN) loss function.
Langoju teaches a set of loss functions further including a pixel loss function, a perceptual loss function, and a relativistic average generative adversarial network (GAN) loss function (paragraphs 0007, 0038), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Li, Xiong and Cordani, in order to improve training.
To claims 11 and 26, Li, Xiong and Cordani teach claims 1 and 16.
But, Li, Xiong and Cordani do not expressly disclose wherein the image sharpness neural network is trained using a training dataset comprising pairs of training images, wherein each pair comprises a training high resolution reference image and a corresponding training low resolution image.
Langoju teaches an image sharpness neural network trained using a training dataset comprising pairs of training images, wherein each pair comprises a training high resolution reference image and a corresponding training low resolution image (paragraph 0005), which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the method of Li, Xiong and Cordani, in order to improve training.
To claims 12 and 27, Li, Xiong, Cordani and Langoju teach claims 11 and 26.
Li, Xiong, Cordani and Langoju teach wherein the training high resolution reference image and the training low resolution image in each pair are reconstructed from undersampled MR data (Li, paragraphs 0143, 0146).
To claims 13 and 28, Li, Xiong, Cordani and Langoju teach claims 12 and 27.
Li, Xiong, Cordani and Langoju teach wherein the training low resolution image in each pair is reconstructed by undersampling k-space in a phase-encoding direction (Li, paragraph 0167).
To claims 14 and 29, Li, Xiong, Cordani and Langoju teach claims 13 and 28.
Li, Xiong, Cordani and Langoju teach wherein undersampling k-space in a phase-encoding direction includes retrospectively undersampling phase encode lines of k-space (Langoju, paragraph 0023).
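As an illustrative sketch only (a generic model of the concept, not taken from Langoju or any other cited reference), retrospective undersampling of phase-encode lines can be modeled by zeroing out rows of a fully sampled 2D k-space matrix before reconstruction:

```python
import numpy as np

# Fully sampled k-space of a synthetic image.
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kspace = np.fft.fft2(image)

# Retrospective undersampling: keep every other phase-encode line
# (modeled here as rows) and zero out the rest.
mask = np.zeros((8, 8))
mask[::2, :] = 1
kspace_under = kspace * mask

# Reconstruct a degraded image from the undersampled k-space.
low_res = np.fft.ifft2(kspace_under).real

print(int(mask.sum()))  # 32 of 64 k-space samples retained
```

Because the fully sampled data are acquired first and lines are discarded afterward, the undersampling is "retrospective," yielding paired high/low resolution training images from the same acquisition.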
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU whose telephone number is (571)272-2837. The examiner can normally be reached Weekdays: 8:30AM - 5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ZHIYU LU
Primary Examiner
Art Unit 2669
/ZHIYU LU/Primary Examiner, Art Unit 2665 December 11, 2025