DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ozcan et al. (US 2022/0114711 A1) in view of Takeshima Tomochika et al. (JP 2021071936 A).
In considering claim 1, Ozcan et al. disclose all the claimed subject matter. Note 1) the claimed processing unit configured to input an input image to a convolutional neural network and output an output image from the convolutional neural network is met by x being the low-resolution input image 20 to the generator network 120, g(x) being the network output image, and the loss being computed using g(x) (Fig. 35A, page 22, paragraph #0197 to page 23, paragraph #0200); and 2) the claimed training unit configured to use an evaluation function including an error evaluation term representing an evaluation value related to an error between the output image and the target image (the L1 loss is the mean pixel difference between the generator's output 124 and the ground truth image) and a regularization term representing an evaluation value related to a difference of pixel values between adjacent pixels in the output image (the formula in paragraph [0201] calculates the anisotropic total variation loss using differences between adjacent pixels), and to train the convolutional neural network based on a value of the evaluation function, is met by training the deep neural network 10 based on the overall loss function for the generator network (Fig. 35A, page 23, paragraphs #0199 to #0203). However, Ozcan et al. do not explicitly disclose the claimed limitation wherein the output image, after the respective processes of the processing unit and the training unit are repeatedly performed a plurality of times, is set as an image after the noise reduction processing.
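The evaluation function attributed to Ozcan et al. above, an L1 (mean pixel difference) error term plus an anisotropic total variation regularization term over adjacent pixels, can be sketched as follows. This is an illustrative reconstruction only; the PyTorch framing, the function name, and the `tv_weight` value are assumptions, not drawn from either reference.

```python
import torch

def generator_loss(output, target, tv_weight=0.01):
    """Illustrative composite evaluation function: L1 error term plus an
    anisotropic total-variation regularization term. The tv_weight value
    is a placeholder, not taken from either cited reference."""
    # Error evaluation term: mean absolute pixel difference between the
    # network output and the ground-truth (target) image.
    l1_loss = torch.mean(torch.abs(output - target))
    # Regularization term: anisotropic total variation, i.e. the sum of
    # absolute differences between vertically and horizontally adjacent pixels.
    tv_h = torch.sum(torch.abs(output[..., 1:, :] - output[..., :-1, :]))
    tv_w = torch.sum(torch.abs(output[..., :, 1:] - output[..., :, :-1]))
    return l1_loss + tv_weight * (tv_h + tv_w)
```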
Takeshima Tomochika et al. teach that the first processing unit 10 repeatedly trains a convolutional neural network (CNN) by using a random noise image Bn as an input image and the target image A as a teaching image for each of N random noise images B1-BN, and acquires an image output from the CNN after the repeated training as an intermediate image Cn (Fig. 1, see the abstract).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the repeated training as taught by Takeshima Tomochika et al. into Ozcan et al.'s system in order to effectively reduce noise of the target image even when only one target image exists or when the SN ratio of the target image is low.
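The repeated-training scheme attributed to Takeshima Tomochika et al., in which a fixed random noise image is the CNN input, the target image serves as the teaching image, and the CNN output after the repeated training is taken as the noise-reduced image, can be sketched as follows. The PyTorch framing, the optimizer choice, and all function and parameter names are illustrative assumptions, not taken from the reference.

```python
import torch
from torch import nn

def denoise_by_repeated_training(cnn, noise_image, target_image,
                                 num_iterations=100, lr=1e-3):
    """Illustrative sketch of the repeated-training scheme: the random
    noise image is the fixed CNN input, the (noisy) target image is the
    teaching image, and the CNN output after the final iteration is taken
    as the image after the noise reduction processing."""
    optimizer = torch.optim.Adam(cnn.parameters(), lr=lr)
    criterion = nn.L1Loss()  # error term between output and target image
    for _ in range(num_iterations):
        optimizer.zero_grad()
        output = cnn(noise_image)          # processing-unit step
        loss = criterion(output, target_image)
        loss.backward()                    # training-unit step
        optimizer.step()
    with torch.no_grad():
        # Output after the repeated processes is the noise-reduced image.
        return cnn(noise_image)
```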
In considering claim 2, the claimed wherein the target image is a tomographic image of a subject created based on coincidence information collected by using a radiation tomography apparatus is met by the tomographic image (page 1, lines 17-26 of Takeshima Tomochika et al.).
The motivation to combine the references has been discussed in claim 1 above.
In considering claim 3, the claimed wherein the processing unit is configured to input an image representing morphological information of the subject to the convolutional neural network as the input image is met by the TIRF-SIM images that undergo rapid morphological changes during development (Figs. 31A-31O, page 5, paragraph #0060 of Ozcan et al.).
The motivation to combine the references has been discussed in claim 1 above.
In considering claim 4, the claimed wherein the processing unit is configured to input an MRI image of the subject to the convolutional neural network as the input image is met by the MRI image (page 1, lines 17-26 of Takeshima Tomochika et al.).
The motivation to combine the references has been discussed in claim 1 above.
In considering claim 5, the claimed wherein the processing unit is configured to input a CT image of the subject to the convolutional neural network as the input image is met by the CT image (page 1, lines 17-26 of Takeshima Tomochika et al.).
The motivation to combine the references has been discussed in claim 1 above.
In considering claim 6, the claimed wherein the processing unit is configured to input a static PET image of the subject to the convolutional neural network as the input image is met by the PET image (page 1, lines 17-26 of Takeshima Tomochika et al.).
The motivation to combine the references has been discussed in claim 1 above.
In considering claim 7, the claimed wherein the processing unit is configured to input a random noise image to the convolutional neural network as the input image is met by the random noise image B (Fig. 1, page 2, lines 26-44 of Takeshima Tomochika et al.).
The motivation to combine the references has been discussed in claim 1 above.
Method claims 8-14 are rejected for the same reason as discussed in apparatus claims 1-7 above, respectively.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Anastasio et al. (US 2021/0150779 A1) disclose deep learning-assisted image reconstruction for tomographic imaging.
Matsuura et al. (US 2020/0311878 A1) disclose apparatus and method for image reconstruction using feature-aware deep learning.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRANG U TRAN whose telephone number is (571)272-7358. The examiner can normally be reached M-F 10:00AM-6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN W. MILLER can be reached at 571-272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
February 24, 2026
/TRANG U TRAN/Primary Examiner, Art Unit 2422