Prosecution Insights
Last updated: April 19, 2026
Application No. 18/944,084

METHOD AND SYSTEM FOR DETERMINING POST-OPERATIVE IMAGES OF AN ANOMALY USING DEEP-LEARNING MODELS

Status: Non-Final OA (§103)
Filed: Nov 12, 2024
Examiner: MAYNARD, JOHNATHAN A
Art Unit: 3798
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: L&T Technology Services Limited
OA Round: 1 (Non-Final)

Grant Probability: 39% (At Risk)
OA Rounds: 1-2
To Grant: 3y 10m
With Interview: 46%

Examiner Intelligence

Career Allow Rate: 39% (grants only 39% of cases; 74 granted / 189 resolved; -30.8% vs TC avg)
Interview Lift: +6.9% (moderate lift among resolved cases with interview)
Avg Prosecution: 3y 10m (typical timeline; 31 currently pending)
Career History: 220 total applications across all art units

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 50.8% (+10.8% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)

Tech Center average estimates shown for comparison • Based on career data from 189 resolved cases

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 9, 11, 13, 14, 16, and 17 are objected to because of the following informalities:

- Claim 9, line 3 recites “wherein the memory stores.” This should recite “wherein the memory
- Claim 9, line 5 recites “simultaneously input pre-operative image.” This should recite “simultaneously input a pre-operative image.”
- Claim 11, lines 2-3 recite “MRI images of the body part, wherein the training set.” This should read “MRI images
- Claim 13, lines 1-2 recite “pre-operative MRI images and the set of post-operative MRI images are resized.” This should read “pre-operative MRI images
- Claim 14, line 2 recites “MRI images and the set of post-operative MRI images are normalized.” This should read “MRI images
- Claim 16, line 1 recites “The systemof claim 9.” This should read “The system of claim 9.”
- Claim 17, line 5 recites “body part of a patient, to a first.” This should read “body part of a patient,
- Claim 17, line 17 recites “determinin a final post-operative.” This should read “determining a final post-operative.”

Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 5, and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Miao et al. (“Post-operative MRI synthesis from pre-operative MRI and post-operative CT using conditional GAN for the assessment of degree of resection” May 2024), hereinafter “Miao,” in further view of Mukherkjee et al.
(“Brain tumor image generation using an aggregation of GAN models with style transfer” 2022), hereinafter “Mukherkjee,” or, in the alternative, in further view of Hooper et al. (U.S. Pub. No. 20200311932), hereinafter “Hooper.”

Regarding claim 1, Miao discloses a method of determining post-operative image of an anomaly (a method for synthesizing post-operative MRI images of a brain tumor/lesion/glioma, P.1, ¶1-4, P.2, ¶2, P.2, ¶4, P.2, ¶6 – P.3, ¶3, P.3, ¶5, P.4, ¶2, Figs. 1-2), comprising: inputting, by a processor, a pre-operative image of the anomaly, detected in an internal body part of a patient, to a first generative adversarial network (GAN) (input, by computer software implementing a deep learning model on a processor, a pre-operative image of the anomaly, pre-operative MRI image of the tumor/lesion/glioma, detected in an internal body part of the patient, brain, to a generative adversarial network (GAN), CoCosNet conditional Generative Adversarial Net (cGAN), P.1, ¶1-4, P.2, ¶2-5, P.3, ¶5, P.4, ¶2, Fig.
1); determining, by the processor, a first post-operative image of the anomaly from the first generative adversarial network (GAN) (determine, by computer software implementing a deep learning model on a processor, a post-operative image of the anomaly, post-operative MRI images of a brain tumor/lesion/glioma, from the generative adversarial network (GAN), CoCosNet conditional Generative Adversarial Net (cGAN), P.1, ¶1-4, P.2, ¶2, P.2, ¶4, P.2, ¶6 – P.3, ¶3, P.3, ¶5, P.4, ¶2, Figs. 1-2), wherein the first GAN is trained based on a training data comprising a training set of post-operative images of the anomaly corresponding to a training set of pre-operative images of the anomaly (the GAN, CoCosNet conditional Generative Adversarial Net (cGAN), is trained based on a training data comprising a training set of post-operative images of the anomaly, training set of post-operative MRI images of the tumor/lesion/glioma, corresponding to a training set of pre-operative images of the anomaly, training set of post-operative MRI images correspond to a training set of pre-operative MRI images of the tumor/lesion/glioma, P.2, ¶3-5); applying, by the processor, a Structural Similarity Index Measure (SSIM) score to the first post-operative image (applying, by computer software implementing a deep learning model on a processor, a Structural Similarity Index Measure (SSIM) score to the post-operative image, P.1, ¶1-4, P.2, ¶2, P.2, ¶6 – P.3, ¶1, P.3, ¶3, P.3, ¶5).

However, while Miao discloses training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly, Miao does not appear to disclose training and simultaneously using a first GAN, a second GAN, and third GAN and pixel-wise aggregation to determine the image of the anomaly.
However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Mukherkjee teaches simultaneously inputting, by a processor, an image of the anomaly, detected in an internal body part of a patient, to a first generative adversarial network (GAN), a second GAN, and a third GAN (simultaneously input, by computer software implementing method of implementing a deep learning model on a processor, an image of the anomaly, MRI image of the tumor/glioma/meningioma/pituitary, detected in an internal body part of the patient, brain, to a first generative adversarial network (GAN), first deep convolutional generative adversarial network DCGAN-1, a second GAN, DCGAN-2, and a third GAN, Wasserstein generative adversarial network WGAN, P.2, ¶7 – P.3, ¶1, P.3, ¶3-5, P.10, ¶4 – P.11, ¶3, P.13, ¶7-10, Figs. 1, 3, Table 1); determining, by the processor, a first image of the anomaly from the first generative adversarial network (GAN), a second image of the anomaly from the second GAN, and a third image of the anomaly from the third GAN (determining, by computer software implementing method of implementing a deep learning model on a processor, a first image of the anomaly from the first generative adversarial network (GAN), image of the tumor/glioma/meningioma/pituitary synthesized by DCGAN-1, a second image of the anomaly from the second GAN, image of the tumor/glioma/meningioma/pituitary synthesized by DCGAN-2, and a third image of the anomaly from the third GAN, image of the tumor/glioma/meningioma/pituitary synthesized by WGAN, P.2, ¶7 – P.3, ¶1, P.3, ¶3-5, P.10, ¶4 – P.11, ¶3, P.13, ¶7-10, Figs.
1, 3, Table 1), wherein each of the first GAN, the second GAN, and the third GAN are trained based on a training data comprising a training set of images of the anomaly (each of the first GAN, DCGAN-1, the second GAN, DCGAN-2, and the third GAN, WGAN, are trained based on a training data comprising a training set of images of the anomaly, training set of MRI images of the tumor/glioma/meningioma/pituitary, P.2, ¶7 – P.3, ¶1, P.3, ¶3-5, P.10, ¶4 – P.11, ¶3, P.13, ¶7-10, Figs. 1, 3, Table 1); selecting, by the processor, two of the first image, the second image, and the third image based on a Structural Similarity Index Measure (SSIM) score of each of the first image, the second image and the third image (selecting, by computer software implementing method of implementing a deep learning model on a processor, two of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, based on a Structural Similarity Index Measure (SSIM) score of each of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, P.1, ¶1, P.2, ¶7 – P.3, ¶1, P.3, ¶7 – P.4, ¶1, P.11, ¶4 – P.12, ¶2, P.14, ¶2, Figs. 3, 8, Tables 1 and 2, Algorithm 1); and determining, by the processor, a final image of the anomaly by performing a pixel-wise aggregation of the selected two of the first image, the second image and the third image (determining, by computer software implementing method of implementing a deep learning model on a processor, a final image of the anomaly, final, aggregated, synthesized MRI image of the tumor/glioma/meningioma/pituitary, by performing a pixel-wise aggregation of the selected two of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, P.1, ¶1, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, P.14, ¶2, Figs. 3, 8, Tables 1 and 2, Algorithm 1). 
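The technique mapped above can be sketched in a few lines: score each of three candidate generator outputs by SSIM against a reference, keep the two best, and fuse them pixel-wise. This is an illustrative reconstruction of the claimed steps, not Mukherkjee's actual Algorithm 1; `ssim_global` uses whole-image statistics rather than the standard sliding-window SSIM, a plain mean stands in for the weighted aggregation recited elsewhere in the claims, and all function names are hypothetical.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM using global (whole-image) statistics, no sliding window."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def aggregate_best_two(candidates, reference):
    """Score each candidate against the reference by SSIM, select the two
    highest-scoring candidates, and aggregate them pixel-wise (mean)."""
    scores = [ssim_global(c, reference) for c in candidates]
    best_two = np.argsort(scores)[-2:]        # indices of the two best candidates
    selected = [candidates[i] for i in best_two]
    return np.mean(selected, axis=0), scores  # pixel-wise aggregation

# toy demo: three noisy "generator outputs" around a random reference image
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
outs = [ref + 0.05 * rng.standard_normal((64, 64)) for _ in range(3)]
fused, scores = aggregate_best_two(outs, ref)
```

In practice the reference would be a held-out ground-truth image during validation; at inference time Mukherkjee-style pipelines rank outputs by SSIM computed during model selection.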
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Mukherkjee’s known technique of training and simultaneously using a first GAN, a second GAN, and third GAN and pixel-wise aggregation to determine an image of the anomaly to Miao’s known process of training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that combining different GANs improves over stand-alone GANs through understanding of distributed features, overcomes the limitation of data unavailability, and/or provides understanding of information variance. See, e.g., Mukherkjee, P.1, ¶1.

Additionally, or, in the alternative, while Miao in further view of Mukherkjee teaches computer software implementing a method of implementing a deep learning model on a processor, Miao in further view of Mukherkjee may not explicitly teach a processor. However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Hooper teaches a processor (synthetic medical image generation system including a processor, Abstract, [0005], [0024], [0025], [0045]-[0048]).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Hooper’s known technique of providing a processor in communication with memory and machine readable instructions therein to cause the processor to perform method steps to implement a deep learning model to Miao in further view of Mukherkjee’s known process for computer software implementing a method of implementing a deep learning model to achieve the predictable result that “[s]ynthetic medical image generation processing systems can be implemented using any of a variety of computing platforms” ([0041]) and “any number of different system architectures can be utilized as appropriate to the requirements of specific applications” ([0044] and [0048]).

Regarding claim 2, Miao discloses the SSIM score is determined based on texture, luminance and contrast of the first post-operative image (the SSIM score is determined based on luminance, contrast and structure information, i.e., texture, of the post-operative image, CoCosNet synthesized post-operative MRI image, P.1, ¶1-4, P.2, ¶2, P.2, ¶6 – P.3, ¶1, P.3, ¶3, P.3, ¶5). However, while Miao discloses determining the SSIM score of a post-operative image, Miao does not appear to disclose the SSIM score is determined for each of the first image, the second image, and the third image. However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Mukherkjee teaches the SSIM score is determined for each of the first image, the second image, and the third image (the SSIM score is determined for each of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, P.1, ¶1, P.2, ¶7 – P.3, ¶1, P.3, ¶7 – P.4, ¶1, P.11, ¶4 – P.12, ¶2, P.14, ¶2, Figs. 3, 8, Tables 1 and 2, Algorithm 1).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Mukherkjee’s known technique of training and simultaneously using a first GAN, a second GAN, and third GAN and pixel-wise aggregation to determine an image of the anomaly to Miao’s known process of training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that combining different GANs improves over stand-alone GANs through understanding of distributed features, overcomes the limitation of data unavailability, and/or provides understanding of information variance. See, e.g., Mukherkjee, P.1, ¶1.

Regarding claim 3, Miao discloses the training set of pre-operative images of the anomaly are determined from a set of pre-operative MRI images of the body part (the training set of pre-operative images of the anomaly, tumor/lesion/glioma, are determined from a set of pre-operative MRI images of the body part, brain, P.1, ¶1-4, P.2, ¶2-5, Figs. 1 and 2), the training set of post-operative images of the anomaly are determined from a set of post-operative MRI images of the body part (the training set of post-operative images of the anomaly, tumor/lesion/glioma are determined from a set of post-operative MRI images of the body part, brain, P.1, ¶1-4, P.2, ¶2-5, Figs. 1 and 2), and each of the set of pre-operative MRI images and each of the set of post-operative images capture the body part from a predefined angular view with respect to a central axis of the body part (each of the set of pre-operative MRI images and each of the set of post-operative images capture the body part, brain, from a predefined angular view with respect to a central axis of the body part, P.1, ¶1-4, P.2, ¶2-5, Figs.
1 and 2 demonstrate that the pre-operative MRI images and the post-operative MRI images are captured from a predefined angular view with respect to a central axis of the body part, i.e., an axial view).

Regarding claim 5, Miao discloses each of the set of pre-operative MRI images and the set of post-operative MRI images are resized to a predefined size (each of the set of pre-operative MRI images and the set of post-operative MRI images are resized to 512*512 pixels, P.2, ¶3-4).

Regarding claim 7, Miao discloses the GAN measures a similarity score index of the post-operative image with the training set of post-operative images of the anomaly (the GAN measures a similarity score index of the post-operative image with the training set of post-operative images of the anomaly, training set of post-operative MRI images of the tumor/lesion/glioma, P.1, ¶1-4, P.2, ¶2, P.2, ¶6 – P.3, ¶1, P.3, ¶3, P.3, ¶5). However, Miao does not appear to disclose the third GAN measures a similarity score index of the third image with the training set of images of the anomaly by using a critic model. However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Mukherkjee teaches the third GAN measures a similarity score index of the third image with the training set of images of the anomaly by using a critic model (the third GAN, Wasserstein generative adversarial network WGAN, measures a similarity score index of the third image with the training set of images of the anomaly, tumor/glioma/meningioma/pituitary, by using a critic model, P.10, ¶5 – P.11, ¶3).
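A WGAN "critic model," as invoked for claim 7, assigns a scalar realism score to each image and compares mean scores between training-set images and generated images. The sketch below uses an untrained linear critic purely to show the shape of the computation; a real critic is a trained neural network constrained to be approximately 1-Lipschitz, and every name here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(64 * 64) * 0.01  # stand-in critic weights (untrained)

def critic(img):
    """Toy linear critic: maps a 64x64 image to a scalar realism score."""
    return float(img.ravel() @ w)

def wasserstein_gap(real_batch, fake_batch):
    """WGAN critic objective: mean score of real (training-set) images minus
    mean score of generated images; a smaller gap indicates the generated
    set sits closer to the training distribution."""
    return np.mean([critic(r) for r in real_batch]) - np.mean([critic(f) for f in fake_batch])
```

Training alternates between maximizing this gap over the critic's weights and minimizing it over the generator, which is what lets the critic's score serve as a similarity index between synthesized images and the training set.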
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Mukherkjee’s known technique of training and simultaneously using a first GAN, a second GAN, and third GAN and pixel-wise aggregation to determine an image of the anomaly to Miao’s known process of training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that combining different GANs improves over stand-alone GANs through understanding of distributed features, overcomes the limitation of data unavailability, and/or provides understanding of information variance. See, e.g., Mukherkjee, P.1, ¶1.

Regarding claim 8, Miao discloses the determination of a post-operative image (determine, by computer software implementing a deep learning model on a processor, a post-operative image of the anomaly, post-operative MRI images of a brain tumor/lesion/glioma, from the generative adversarial network (GAN), CoCosNet conditional Generative Adversarial Net (cGAN), P.1, ¶1-4, P.2, ¶2, P.2, ¶4, P.2, ¶6 – P.3, ¶3, P.3, ¶5, P.4, ¶2, Figs. 1-2). However, Miao does not appear to disclose the determination of the final image comprises: detecting, by the processor, edges of the selected two of the first image, the second image, and the third image; reducing, by the processor and upon detection of the edges, noise of the selected two of the first image, the second image, and the third image using a gaussian filter to determine corresponding two smoothened images; and assigning, by the processor, weights to pixels of the corresponding two smoothened images, the final image is determined based on pixel-wise aggregation of the corresponding two smoothened images based on the weights assigned to pixels with higher intensity.
However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Mukherkjee teaches the determination of the final image comprises: detecting, by the processor, edges of the selected two of the first image, the second image, and the third image (detecting, by computer software implementing method of implementing a deep learning model on a processor, edges of the selected two of the first image synthesized by DCGAN-1, a second image synthesized by DCGAN-2, and a third image synthesized by WGAN, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, Algorithm 1); reducing, by the processor and upon detection of the edges, noise of the selected two of the first image, the second image, and the third image using a gaussian filter to determine corresponding two smoothened images (reducing, by computer software implementing method of implementing a deep learning model on a processor, and upon detection of the edges, noise of the selected two of the first image synthesized by DCGAN-1, a second image synthesized by DCGAN-2, and a third image synthesized by WGAN, using a gaussian filter to determine corresponding two smoothened images, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, Algorithm 1; see also title of reference 45 “An adaptive gaussian filter for noise reduction and edge detection” referenced in P.11, ¶4 – P.12, ¶2); and assigning, by the processor, weights to pixels of the corresponding two smoothened images (assigning, by computer software implementing method of implementing a deep learning model on a processor, weights to the pixels of the corresponding two smoothened images, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, Algorithm 1), the final image is determined based on pixel-wise aggregation of the corresponding two smoothened images based on the weights assigned to pixels with higher intensity (the final image is determined based on pixel-wise aggregation of the corresponding two smoothened images based on the weights assigned to pixels with 
higher intensity, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, Algorithm 1).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Mukherkjee’s known technique of training and simultaneously using a first GAN, a second GAN, and third GAN and pixel-wise aggregation to determine an image of the anomaly to Miao’s known process of training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that combining different GANs improves over stand-alone GANs through understanding of distributed features, overcomes the limitation of data unavailability, and/or provides understanding of information variance. See, e.g., Mukherkjee, P.1, ¶1.

Additionally, or, in the alternative, while Miao in further view of Mukherkjee teaches computer software implementing a method of implementing a deep learning model on a processor, Miao in further view of Mukherkjee may not explicitly teach a processor. However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Hooper teaches a processor (synthetic medical image generation system including a processor, Abstract, [0005], [0024], [0025], [0045]-[0048]).
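The smoothing and weighting steps recited for claim 8 (Gaussian-filter noise reduction of the two selected images, then intensity-weighted pixel-wise aggregation) can be sketched as follows. This is a hypothetical reading of the claim language, not Mukherkjee's actual procedure: the edge-detection step is omitted for brevity, and deriving each pixel's weight from its own smoothed intensity, so brighter pixels dominate the fusion, is an assumption.

```python
import numpy as np

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian blur: one normalized 1-D kernel applied along
    rows, then along columns."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    blur_rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, blur_rows, k, mode="same")

def weighted_fuse(img_a, img_b):
    """Smooth both selected images, then aggregate pixel-wise with weights
    proportional to each smoothed pixel's intensity (brighter pixel wins)."""
    a, b = gaussian_smooth(img_a), gaussian_smooth(img_b)
    wa, wb = a + 1e-8, b + 1e-8  # intensity-derived weights; epsilon avoids 0/0
    return (wa * a + wb * b) / (wa + wb)
```

For non-negative inputs this weighting biases the fused image toward the higher-intensity source at each pixel, which is one plausible reading of "weights assigned to pixels with higher intensity."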
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Hooper’s known technique of providing a processor in communication with memory and machine readable instructions therein to cause the processor to perform method steps to implement a deep learning model to Miao in further view of Mukherkjee’s known process for computer software implementing a method of implementing a deep learning model to achieve the predictable result that “[s]ynthetic medical image generation processing systems can be implemented using any of a variety of computing platforms” ([0041]) and “any number of different system architectures can be utilized as appropriate to the requirements of specific applications” ([0044] and [0048]).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Miao in further view of Mukherkjee, or, in the alternative, in further view of Hooper, as in claim 3 above, and further in view of Hindawi et al. (“Synthesis of brain tumor MRI images using GAN aggregation with style transfer” 2022), hereinafter “Hindawi.”

Regarding claim 4, Miao discloses the training set of pre-operative images and the training set of post-operative images and an abnormality in the body part captured in the set of pre-operative MRI images and the set of post-operative MRI images respectively (the training set of pre-operative images and the training set of post-operative images and an abnormality, tumor/lesion/glioma, in the body part, brain, captured in the set of pre-operative MRI images and the set of post-operative MRI images respectively, P.1, ¶1-4, P.2, ¶2-5, Figs. 1 and 2). However, Miao does not appear to disclose the training set of images are determined using an encoder-decoder convolutional neural network pretrained to detect the abnormality in the body part captured in the set of MRI images.
However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Hindawi teaches the training set of images are determined using an encoder-decoder convolutional neural network pretrained to detect the abnormality in the body part captured in the set of MRI images (training set of images for one or more GANs are determined using an encoder-decoder convolution neural network, U-Net, pretrained to detect abnormal region, tumor, in the body part, brain, captured in the set of MRI images, P.1, ¶1, P.2, ¶1, P.3, ¶10 – P.4, ¶2).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Hindawi’s known technique of using an encoder-decoder U-NET CNN pretrained to detect the abnormality in the body part captured in the set of MRI images to Miao in further view of Mukherkjee’s known process of training and using a combination of GANs to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that using a U-Net encoder-decoder CNN produces a more precise segmentation map and/or produces locally and globally coherent images. See, e.g., Hindawi, P.3, ¶10.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Miao in further view of Mukherkjee, or, in the alternative, in further view of Hooper, as in claim 3 above, and further in view of Cai et al. (U.S. Pub. No. 2024/0177832), hereinafter “Cai.”

Regarding claim 6, Miao discloses pixel intensity values of each of the set of pre-operative MRI images and the set of post-operative MRI images are normalized (pixel intensity values of each of the set of pre-operative MRI images and the set of post-operative MRI images are normalized, P.2, ¶3-4). However, Miao does not appear to explicitly disclose that the normalization is between 0 and 1.
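Normalization of pixel intensities into the 0-to-1 range, the feature at issue for claim 6, is ordinarily plain min-max scaling. A minimal sketch (the function name is illustrative, and Cai's actual procedure may differ):

```python
import numpy as np

def normalize_01(img):
    """Min-max normalize pixel intensities into [0, 1]."""
    lo, hi = img.min(), img.max()
    if hi == lo:  # constant image: avoid divide-by-zero
        return np.zeros_like(img, dtype=float)
    return (img - lo) / (hi - lo)

scan = np.array([[0, 512], [1024, 2048]], dtype=float)  # toy MRI intensities
norm = normalize_01(scan)
```

After scaling, the darkest pixel maps to 0.0 and the brightest to 1.0, putting images with very different raw intensity ranges on a common footing for training.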
However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Cai teaches the normalization is between 0 and 1 (pixel intensity values of the set of training MRI images are normalized between 0 and 1, [0069]-[0074]).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Cai’s known technique of normalizing between 0 and 1 to Miao in further view of Mukherkjee’s known apparatus for normalizing the pixel intensity values of each of the set of pre-operative MRI images and the set of post-operative MRI images to achieve the predictable result that such “[n]ormalization or standardization can eliminate detrimental effect arising from abnormal data, making all indicative values at the same quantitative level such that they are more comparable to each other and the accuracy of the discriminator can be greatly enhanced.” Cai, [0074].

Claims 9-11, 13 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Miao in further view of Mukherkjee in further view of Hooper.

Regarding claim 9, Miao discloses a system for determining post-operative images of an anomaly (a system for synthesizing post-operative MRI images of a brain tumor/lesion/glioma, P.1, ¶1-4, P.2, ¶2, P.2, ¶4, P.2, ¶6 – P.3, ¶3, P.3, ¶5, P.4, ¶2, Figs. 1-2), comprising: a processor (computer software implementing a deep learning model on a processor, P.1, ¶1-4, P.2, ¶2, P.2, ¶4, P.2, ¶6 – P.3, ¶3, P.3, ¶5, P.4, ¶2, Figs. 1-2); and input pre-operative image of an anomaly detected in an internal body part to a first generative adversarial network (GAN) (input a pre-operative image of the anomaly, pre-operative MRI image of the tumor/lesion/glioma, detected in an internal body part of the patient, brain, to a generative adversarial network (GAN), CoCosNet conditional Generative Adversarial Net (cGAN), P.1, ¶1-4, P.2, ¶2-5, P.3, ¶5, P.4, ¶2, Fig.
1); determine a first post-operative image of the anomaly from the first GAN (determine a post-operative MRI image of a brain tumor/lesion/glioma, from the generative adversarial network (GAN), CoCosNet conditional Generative Adversarial Net (cGAN), P.1, ¶1-4, P.2, ¶2, P.2, ¶4, P.2, ¶6 – P.3, ¶3, P.3, ¶5, P.4, ¶2, Figs. 1-2), wherein the first GAN is trained based on a training data comprising a training set of post-operative images of the anomaly corresponding to a training set pre-operative images of the anomaly (the GAN, CoCosNet conditional Generative Adversarial Net (cGAN), is trained based on a training data comprising a training set of post-operative images of the anomaly, training set of post-operative MRI images of the tumor/lesion/glioma, corresponding to a training set of pre-operative images of the anomaly, training set of post-operative MRI images correspond to a training set of pre-operative MRI images of the tumor/lesion/glioma, P.2, ¶3-5); apply a Structural Similarity Index Measure (SSIM) score to the first post-operative image (applying, by computer software implementing a deep learning model on a processor, a Structural Similarity Index Measure (SSIM) score to the post-operative image, P.1, ¶1-4, P.2, ¶2, P.2, ¶6 – P.3, ¶1, P.3, ¶3, P.3, ¶5).

However, while Miao discloses training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly, Miao does not appear to disclose training and simultaneously using a first GAN, a second GAN, and third GAN and pixel-wise aggregation to determine the image of the anomaly.
However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Mukherkjee teaches simultaneously input image of an anomaly detected in an internal body part to a first generative adversarial network (GAN), a second GAN and a third GAN (simultaneously input an image of the anomaly, MRI image of the tumor/glioma/meningioma/pituitary, detected in an internal body part of the patient, brain, to a first generative adversarial network (GAN), first deep convolutional generative adversarial network DCGAN-1, a second GAN, DCGAN-2, and a third GAN, Wasserstein generative adversarial network WGAN, P.2, ¶7 – P.3, ¶1, P.3, ¶3-5, P.10, ¶4 – P.11, ¶3, P.13, ¶7-10, Figs. 1, 3, Table 1); determine a first image of the anomaly from the first GAN, a second image of the anomaly from the second GAN and a third image of the anomaly from the third GAN (determine a first image of the anomaly from the first generative adversarial network (GAN), image of the tumor/glioma/meningioma/pituitary synthesized by DCGAN-1, a second image of the anomaly from the second GAN, image of the tumor/glioma/meningioma/pituitary synthesized by DCGAN-2, and a third image of the anomaly from the third GAN, image of the tumor/glioma/meningioma/pituitary synthesized by WGAN, P.2, ¶7 – P.3, ¶1, P.3, ¶3-5, P.10, ¶4 – P.11, ¶3, P.13, ¶7-10, Figs. 1, 3, Table 1), wherein each of the first GAN, the second GAN, and the third GAN are trained based on a training data comprising a training set of images of the anomaly (each of the first GAN, DCGAN-1, the second GAN, DCGAN-2, and the third GAN, WGAN, are trained based on a training data comprising a training set of images of the anomaly, training set of MRI images of the tumor/glioma/meningioma/pituitary, P.2, ¶7 – P.3, ¶1, P.3, ¶3-5, P.10, ¶4 – P.11, ¶3, P.13, ¶7-10, Figs. 
1, 3, Table 1); select two of the first image, the second image, and the third image based on a Structural Similarity Index Measure (SSIM) score of each of the first image, the second image and the third image (selecting, by computer software implementing method of implementing a deep learning model on a processor, two of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, based on a Structural Similarity Index Measure (SSIM) score of each of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, P.1, ¶1, P.2, ¶7 – P.3, ¶1, P.3, ¶7 – P.4, ¶1, P.11, ¶4 – P.12, ¶2, P.14, ¶2, Figs. 3, 8, Tables 1 and 2, Algorithm 1); and determine a final image of the anomaly by performing a pixel-wise aggregation of the selected two of the first image, the second image and the third image (determining, by computer software implementing method of implementing a deep learning model on a processor, a final image of the anomaly, final, aggregated, synthesized MRI image of the tumor/glioma/meningioma/pituitary, by performing a pixel-wise aggregation of the selected two of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, P.1, ¶1, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, P.14, ¶2, Figs. 3, 8, Tables 1 and 2, Algorithm 1). 
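For illustration only (not part of the prosecution record): the select-two-then-aggregate flow attributed to Mukherkjee above can be sketched in a few lines. The scores here are placeholder similarity values standing in for the references' actual SSIM computation, and all names are hypothetical.

```python
import numpy as np

def select_and_aggregate(candidates, scores):
    """Keep the two highest-scoring candidate images, then average them pixel-wise."""
    order = np.argsort(scores)[::-1][:2]   # indices of the two best scores
    chosen = [candidates[i] for i in order]
    return np.mean(chosen, axis=0)          # simple pixel-wise aggregation

# Three synthetic "GAN outputs" with placeholder SSIM-like scores.
imgs = [np.full((2, 2), v, dtype=float) for v in (0.2, 0.5, 0.8)]
final = select_and_aggregate(imgs, scores=[0.91, 0.85, 0.95])
print(final)  # average of the 0.2 and 0.8 images -> all elements 0.5
```

The middle candidate is dropped because it has the lowest score; only the two survivors contribute to the aggregated image.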
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Mukherkjee’s known technique of training and simultaneously using a first GAN, a second GAN, and a third GAN and pixel-wise aggregation to determine an image of the anomaly to Miao’s known apparatus for training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that combining different GANs improves over stand-alone GANs through understanding of distributed features, overcomes the limitation of data unavailability, and/or provides understanding of information variance. See, e.g., Mukherkjee, P.1, ¶1. However, while Miao in further view of Mukherkjee teaches computer software implementing a method of implementing a deep learning model on a processor, Miao in further view of Mukherkjee may not explicitly teach a memory communicably coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution by the processor, cause the processor to perform method steps. However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Hooper teaches a processor (synthetic medical image generation system including a processor, Abstract, [0005], [0024], [0025], [0045]-[0048]); and a memory communicably coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution by the processor, cause the processor to perform method steps (synthetic medical image generation system includes a memory in communication with the processor, wherein the memory contains an image synthesis application including machine readable instructions to cause the processor to perform method steps, Abstract, [0005], [0024], [0025], [0045]-[0048]). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Hooper’s known technique of providing a processor in communication with memory and machine readable instructions therein to cause the processor to perform method steps to implement a deep learning model to Miao in further view of Mukherkjee’s known apparatus for computer software implementing a method of implementing a deep learning model on a processor to achieve the predictable result that “[s]ynthetic medical image generation processing systems can be implemented using any of a variety of computing platforms” ([0041]) and “any number of different system architectures can be utilized as appropriate to the requirements of specific applications” ([0044] and [0048]). Regarding claim 10, Miao discloses the SSIM score is determined based on texture, luminance and contrast of the first post-operative image (the SSIM score is determined based on luminance, contrast and structure information, i.e., texture, of the post-operative image, CoCosNet synthesized post-operative MRI image, P.1, ¶1-4, P.2, ¶2, P.2, ¶6 – P.3, ¶1, P.3, ¶3, P.3, ¶5). However, while Miao discloses determining the SSIM score of a post-operative image, Miao does not appear to disclose the SSIM score is determined for each of the first image, the second image, and the third image. However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Mukherkjee teaches the SSIM score is determined for each of the first image, the second image, and the third image (the SSIM score is determined for each of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, P.1, ¶1, P.2, ¶7 – P.3, ¶1, P.3, ¶7 – P.4, ¶1, P.11, ¶4 – P.12, ¶2, P.14, ¶2, Figs. 3, 8, Tables 1 and 2, Algorithm 1). 
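For context on the claim 10 limitation: SSIM combines luminance, contrast, and structure comparisons between two images. A minimal global (single-window) SSIM sketch follows; it is illustrative only and not the windowed variant the cited references may actually use.

```python
import numpy as np

def ssim(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Global SSIM combining luminance, contrast, and structure terms."""
    c1 = (0.01 * data_range) ** 2   # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2   # stabilizes the contrast/structure terms
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

img = np.random.default_rng(0).random((64, 64))
print(ssim(img, img))  # identical images score 1.0 (up to float rounding)
```

A score near 1.0 indicates the compared images are nearly identical in luminance, contrast, and structure; lower scores drive the selection step described above.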
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Mukherkjee’s known technique of training and simultaneously using a first GAN, a second GAN, and a third GAN and pixel-wise aggregation to determine an image of the anomaly to Miao’s known apparatus for training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that combining different GANs improves over stand-alone GANs through understanding of distributed features, overcomes the limitation of data unavailability, and/or provides understanding of information variance. See, e.g., Mukherkjee, P.1, ¶1. Regarding claim 11, Miao discloses the training set of pre-operative images of the anomaly are determined from a set of pre-operative MRI images of the body part (the training set of pre-operative images of the anomaly, tumor/lesion/glioma, are determined from a set of pre-operative MRI images of the body part, brain, P.1, ¶1-4, P.2, ¶2-5, Figs. 1 and 2), the training set of post-operative images of the anomaly are determined from a set of post-operative MRI images of the body part (the training set of post-operative images of the anomaly, tumor/lesion/glioma are determined from a set of post-operative MRI images of the body part, brain, P.1, ¶1-4, P.2, ¶2-5, Figs. 1 and 2), and each of the set of pre-operative MRI images and each of the set of post-operative images capture the body part from a predefined angular view with respect to a central axis of the body part (each of the set of pre-operative MRI images and each of the set of post-operative images capture the body part, brain, from a predefined angular view with respect to a central axis of the body part, P.1, ¶1-4, P.2, ¶2-5, Figs. 
1 and 2 demonstrate that the pre-operative MRI images and the post-operative MRI images are captured from a predefined angular view with respect to a central axis of the body part, i.e., an axial view). Regarding claim 13, Miao discloses each of the set of pre-operative MRI images and the set of post-operative MRI images are resized to a predefined size (each of the set of pre-operative MRI images and the set of post-operative MRI images are resized to 512*512 pixels, P.2, ¶3-4). Regarding claim 15, Miao discloses the GAN measures a similarity score index of the post-operative image with the training set of post-operative images of the anomaly (the GAN measures a similarity score index of the post-operative image with the training set of post-operative images of the anomaly, training set of post-operative MRI images of the tumor/lesion/glioma, P.1, ¶1-4, P.2, ¶2, P.2, ¶6 – P.3, ¶1, P.3, ¶3, P.3, ¶5). However, Miao does not appear to disclose the third GAN measures a similarity score index of the third image with the training set of images of the anomaly by using a critic model. However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Mukherkjee teaches the third GAN measures a similarity score index of the third image with the training set of images of the anomaly by using a critic model (the third GAN, Wasserstein generative adversarial network WGAN, measures a similarity score index of the third image with the training set of images of the anomaly, tumor/glioma/meningioma/pituitary, by using a critic model, P.10, ¶5 – P.11, ¶3). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Mukherkjee’s known technique of training and simultaneously using a first GAN, a second GAN, and a third GAN and pixel-wise aggregation to determine an image of the anomaly to Miao’s known apparatus for training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that combining different GANs improves over stand-alone GANs through understanding of distributed features, overcomes the limitation of data unavailability, and/or provides understanding of information variance. See, e.g., Mukherkjee, P.1, ¶1. Regarding claim 16, Miao discloses to determine the post-operative image, the processor is configured to perform method steps (determine, by computer software implementing a deep learning model on a processor, a post-operative image of the anomaly, post-operative MRI images of a brain tumor/lesion/glioma, from the generative adversarial network (GAN), CoCosNet conditional Generative Adversarial Net (cGAN), P.1, ¶1-4, P.2, ¶2, P.2, ¶4, P.2, ¶6 – P.3, ¶3, P.3, ¶5, P.4, ¶2, Figs. 1-2). However, Miao does not appear to disclose to determine the final image comprises: detecting, by the processor, edges of the selected two of the first image, the second image, and the third image; reducing, by the processor and upon detection of the edges, noise of the selected two of the first image, the second image, and the third image using a Gaussian filter to determine corresponding two smoothened images; and assigning, by the processor, weights to pixels of the corresponding two smoothened images, the final image is determined based on pixel-wise aggregation of the corresponding two smoothened images based on the weights assigned to pixels with higher intensity. 
However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Mukherkjee teaches to determine the final post-operative image, the processor (to determine, by computer software implementing method of implementing a deep learning model on a processor, a final image of the anomaly, final, aggregated, synthesized MRI image of the tumor/glioma/meningioma/pituitary, by performing a pixel-wise aggregation of the selected two of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, P.1, ¶1, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, P.14, ¶2, Figs. 3, 8, Tables 1 and 2, Algorithm 1) is configured to: detect edges of the selected two of the first image, the second image, and the third image (detecting edges of the selected two of the first image synthesized by DCGAN-1, a second image synthesized by DCGAN-2, and a third image synthesized by WGAN, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, Algorithm 1); upon the detection of the edges, reduce noise of the selected two of the first image, the second image, and the third image using a Gaussian filter to determine corresponding two smoothened images (upon the detection of the edges, reduce noise of the selected two of the first image synthesized by DCGAN-1, a second image synthesized by DCGAN-2, and a third image synthesized by WGAN, using a Gaussian filter to determine corresponding two smoothened images, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, Algorithm 1; see also title of reference 45 “An adaptive Gaussian filter for noise reduction and edge detection” referenced in P.11, ¶4 – P.12, ¶2); and assign weights to pixels of the corresponding two smoothened images (assign weights to the pixels of the corresponding two smoothened images, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, Algorithm 1), the final image is determined based on pixel-wise aggregation of the corresponding two smoothened images based on the weights 
assigned to pixels with higher intensity (the final image is determined based on pixel-wise aggregation of the corresponding two smoothened images based on the weights assigned to pixels with higher intensity, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, Algorithm 1). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Mukherkjee’s known technique of training and simultaneously using a first GAN, a second GAN, and a third GAN and pixel-wise aggregation to determine an image of the anomaly to Miao’s known apparatus for training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that combining different GANs improves over stand-alone GANs through understanding of distributed features, overcomes the limitation of data unavailability, and/or provides understanding of information variance. See, e.g., Mukherkjee, P.1, ¶1. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Miao in further view of Mukherkjee in further view of Hooper as in claim 11 above, and further in view of Hindawi. Regarding claim 12, Miao discloses the training set of pre-operative images and the training set of post-operative images and an abnormality in the body part captured in the set of pre-operative MRI images and the set of post-operative MRI images respectively (the training set of pre-operative images and the training set of post-operative images and an abnormality, tumor/lesion/glioma, in the body part, brain, captured in the set of pre-operative MRI images and the set of post-operative MRI images respectively, P.1, ¶1-4, P.2, ¶2-5, Figs. 1 and 2). However, Miao does not appear to disclose the training set of images are determined using an encoder-decoder convolutional neural network pretrained to detect the abnormality in the body part captured in the set of MRI images. 
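As a rough illustration of the smoothing-and-weighted-aggregation step attributed to Mukherkjee's Algorithm 1 (illustrative only; the edge-detection step is omitted for brevity, the 3x3 kernel is an assumption, and all names are hypothetical):

```python
import numpy as np

# 3x3 Gaussian kernel (normalized to sum to 1) - an illustrative choice.
K = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

def gaussian_smooth(img: np.ndarray) -> np.ndarray:
    """Blur with the 3x3 Gaussian kernel; edge pixels are replicated."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += K[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def fuse(img_a: np.ndarray, img_b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Smooth both images, then aggregate pixel-wise, weighting brighter pixels more."""
    sa, sb = gaussian_smooth(img_a), gaussian_smooth(img_b)
    w = np.stack([sa, sb]) + eps
    w = w / w.sum(axis=0)                  # per-pixel weights favor higher intensity
    return (w * np.stack([sa, sb])).sum(axis=0)

a, b = np.zeros((4, 4)), np.ones((4, 4))
print(fuse(a, b).round(6))  # the brighter image dominates the weighted fusion
```

Because the weights are proportional to smoothed pixel intensity, a bright pixel in either input contributes more to the fused output than a dark one, matching the "weights assigned to pixels with higher intensity" language of claim 16.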
However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Hindawi teaches the training set of images are determined using an encoder-decoder convolutional neural network pretrained to detect the abnormality in the body part captured in the set of MRI images (training set of images for one or more GANs are determined using an encoder-decoder convolutional neural network, U-Net, pretrained to detect abnormal region, tumor, in the body part, brain, captured in the set of MRI images, P.1, ¶1, P.2, ¶1, P.3, ¶10 – P.4, ¶2). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Hindawi’s known technique of using an encoder-decoder U-Net CNN pretrained to detect the abnormality in the body part captured in the set of MRI images to Miao in further view of Mukherkjee in further view of Hooper’s known apparatus for training and using a combination of GANs to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that using a U-Net encoder-decoder CNN produces a more precise segmentation map and/or produces locally and globally coherent images. See, e.g., Hindawi, P.3, ¶10. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Miao in further view of Mukherkjee in further view of Hooper as in claim 11 above, and further in view of Cai. Regarding claim 14, Miao discloses pixel intensity values of each of the set of pre-operative MRI images and the set of post-operative MRI images are normalized (pixel intensity values of each of the set of pre-operative MRI images and the set of post-operative MRI images are normalized, P.2, ¶3-4). However, Miao does not appear to explicitly disclose that the normalization is between 0 and 1. 
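Cai's exact normalization procedure is not reproduced in the record excerpt; a common min-max rescaling of pixel intensities to the [0, 1] range, offered purely as an illustrative sketch, looks like this:

```python
import numpy as np

def normalize_01(img: np.ndarray) -> np.ndarray:
    """Min-max normalize pixel intensities to the [0, 1] range."""
    lo, hi = img.min(), img.max()
    if hi == lo:                       # constant image: avoid divide-by-zero
        return np.zeros_like(img, dtype=float)
    return (img - lo) / (hi - lo)

scan = np.array([[0, 128], [255, 64]], dtype=float)
print(normalize_01(scan))  # values rescaled so min -> 0.0 and max -> 1.0
```

Putting every image on the same [0, 1] scale is what makes intensity values comparable across a training set, which is the benefit Cai's cited passage attributes to normalization.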
However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Cai teaches the normalization is between 0 and 1 (pixel intensity values of the set of training MRI images are normalized between 0 and 1, [0069]-[0074]). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Cai’s known technique of normalizing between 0 and 1 to Miao in further view of Mukherkjee in further view of Hooper’s known apparatus for normalizing the pixel intensity values of each of the set of pre-operative MRI images and the set of post-operative MRI images to achieve the predictable result that such “[n]ormalization or standardization can eliminate detrimental effect arising from abnormal data, making all indicative values at the same quantitative level such that they are more comparable to each other and the accuracy of the discriminator can be greatly enhanced.” Cai, [0074]. Claims 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Miao in further view of Mukherkjee in further view of Hooper. Regarding claim 17, Miao discloses computer-executable instructions for determining a post-operative image of an anomaly (computer software implementing a deep learning model on a processor for synthesizing post-operative MRI images of a brain tumor/lesion/glioma, P.1, ¶1-4, P.2, ¶2, P.2, ¶4, P.2, ¶6 – P.3, ¶3, P.3, ¶5, P.4, ¶2, Figs. 1-2), the computer-executable instructions configured for: inputting a pre-operative image of the anomaly, detected in an internal body part of a patient, to a first generative adversarial network (GAN) (input a pre-operative image of the anomaly, pre-operative MRI image of the tumor/lesion/glioma, detected in an internal body part of the patient, brain, to a generative adversarial network (GAN), CoCosNet conditional Generative Adversarial Net (cGAN), P.1, ¶1-4, P.2, ¶2-5, P.3, ¶5, P.4, ¶2, Fig. 
1); determining a first post-operative image of the anomaly from the first generative adversarial network (GAN) (determine a post-operative image of the anomaly, post-operative MRI images of a brain tumor/lesion/glioma, from the generative adversarial network (GAN), CoCosNet conditional Generative Adversarial Net (cGAN), P.1, ¶1-4, P.2, ¶2, P.2, ¶4, P.2, ¶6 – P.3, ¶3, P.3, ¶5, P.4, ¶2, Figs. 1-2), wherein the first GAN is trained based on a training data comprising a training set of post-operative images of the anomaly corresponding to a training set of pre-operative images of the anomaly (the GAN, CoCosNet conditional Generative Adversarial Net (cGAN), is trained based on a training data comprising a training set of post-operative images of the anomaly, training set of post-operative MRI images of the tumor/lesion/glioma, corresponding to a training set of pre-operative images of the anomaly, training set of post-operative MRI images correspond to a training set of pre-operative MRI images of the tumor/lesion/glioma, P.2, ¶3-5); applying, by the processor, a Structural Similarity Index Measure (SSIM) score to the first post-operative image (applying, by computer software implementing a deep learning model on a processor, a Structural Similarity Index Measure (SSIM) score to the post-operative image, P.1, ¶1-4, P.2, ¶2, P.2, ¶6 – P.3, ¶1, P.3, ¶3, P.3, ¶5). However, while Miao discloses training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly, Miao does not appear to disclose training and simultaneously using a first GAN, a second GAN, and a third GAN and pixel-wise aggregation to determine the image of the anomaly. 
However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Mukherkjee teaches simultaneously inputting a pre-operative image of the anomaly, detected in an internal body part of a patient, to a first generative adversarial network (GAN), a second GAN and a third GAN (simultaneously input an image of the anomaly, MRI image of the tumor/glioma/meningioma/pituitary, detected in an internal body part of the patient, brain, to a first generative adversarial network (GAN), first deep convolutional generative adversarial network DCGAN-1, a second GAN, DCGAN-2, and a third GAN, Wasserstein generative adversarial network WGAN, P.2, ¶7 – P.3, ¶1, P.3, ¶3-5, P.10, ¶4 – P.11, ¶3, P.13, ¶7-10, Figs. 1, 3, Table 1); determining a first image of the anomaly from the first generative adversarial network (GAN), a second image of the anomaly from the second GAN, and a third image of the anomaly from the third GAN (determining a first image of the anomaly from the first generative adversarial network (GAN), image of the tumor/glioma/meningioma/pituitary synthesized by DCGAN-1, a second image of the anomaly from the second GAN, image of the tumor/glioma/meningioma/pituitary synthesized by DCGAN-2, and a third image of the anomaly from the third GAN, image of the tumor/glioma/meningioma/pituitary synthesized by WGAN, P.2, ¶7 – P.3, ¶1, P.3, ¶3-5, P.10, ¶4 – P.11, ¶3, P.13, ¶7-10, Figs. 1, 3, Table 1), wherein each of the first GAN, the second GAN, and the third GAN are trained based on a training data comprising a training set of images of the anomaly (each of the first GAN, DCGAN-1, the second GAN, DCGAN-2, and the third GAN, WGAN, are trained based on a training data comprising a training set of images of the anomaly, training set of MRI images of the tumor/glioma/meningioma/pituitary, P.2, ¶7 – P.3, ¶1, P.3, ¶3-5, P.10, ¶4 – P.11, ¶3, P.13, ¶7-10, Figs. 
1, 3, Table 1); selecting, by the processor, two of the first image, the second image, and the third image based on a Structural Similarity Index Measure (SSIM) score of each of the first image, the second image and the third image (selecting, by computer software implementing method of implementing a deep learning model on a processor, two of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, based on a Structural Similarity Index Measure (SSIM) score of each of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, P.1, ¶1, P.2, ¶7 – P.3, ¶1, P.3, ¶7 – P.4, ¶1, P.11, ¶4 – P.12, ¶2, P.14, ¶2, Figs. 3, 8, Tables 1 and 2, Algorithm 1); and determining, by the processor, a final image of the anomaly by performing a pixel-wise aggregation of the selected two of the first image, the second image and the third image (determining, by computer software implementing method of implementing a deep learning model on a processor, a final image of the anomaly, final, aggregated, synthesized MRI image of the tumor/glioma/meningioma/pituitary, by performing a pixel-wise aggregation of the selected two of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, P.1, ¶1, P.2, ¶7 – P.3, ¶2, P.11, ¶4 – P.12, ¶2, P.14, ¶2, Figs. 3, 8, Tables 1 and 2, Algorithm 1). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Mukherkjee’s known technique of training and simultaneously using a first GAN, a second GAN, and a third GAN and pixel-wise aggregation to determine an image of the anomaly to Miao’s known apparatus for training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that combining different GANs improves over stand-alone GANs through understanding of distributed features, overcomes the limitation of data unavailability, and/or provides understanding of information variance. See, e.g., Mukherkjee, P.1, ¶1. However, while Miao in further view of Mukherkjee teaches computer-executable instructions for determining a post-operative image of an anomaly, Miao in further view of Mukherkjee may not explicitly teach a non-transitory computer-readable medium storing computer-executable instructions, the computer-executable instructions configured for implementing method steps. However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Hooper teaches a non-transitory computer-readable medium storing computer-executable instructions for determining a post-operative image of an anomaly, the computer-executable instructions configured for implementing method steps (synthetic medical image generation system includes a computing platform and a memory in communication with the processor, wherein the memory contains an image synthesis application including machine readable instructions to cause the processor to perform method steps, Abstract, [0005], [0024], [0025], [0045]-[0048]). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Hooper’s known technique of providing a processor in communication with memory in a computing platform and machine readable instructions therein to cause the processor to perform method steps to implement a deep learning model to Miao in further view of Mukherkjee’s known apparatus for computer software implementing a method of implementing a deep learning model on a processor to achieve the predictable result that “[s]ynthetic medical image generation processing systems can be implemented using any of a variety of computing platforms” ([0041]) and “any number of different system architectures can be utilized as appropriate to the requirements of specific applications” ([0044] and [0048]). Regarding claim 18, Miao discloses the SSIM score is determined based on texture, luminance and contrast of the first post-operative image (the SSIM score is determined based on luminance, contrast and structure information, i.e., texture, of the post-operative image, CoCosNet synthesized post-operative MRI image, P.1, ¶1-4, P.2, ¶2, P.2, ¶6 – P.3, ¶1, P.3, ¶3, P.3, ¶5). However, while Miao discloses determining the SSIM score of a post-operative image, Miao does not appear to disclose the SSIM score is determined for each of the first image, the second image, and the third image. However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Mukherkjee teaches the SSIM score is determined for each of the first image, the second image, and the third image (the SSIM score is determined for each of the first image, DCGAN-1 synthesized image, the second image, DCGAN-2 synthesized image, and the third image, WGAN synthesized image, P.1, ¶1, P.2, ¶7 – P.3, ¶1, P.3, ¶7 – P.4, ¶1, P.11, ¶4 – P.12, ¶2, P.14, ¶2, Figs. 3, 8, Tables 1 and 2, Algorithm 1). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Mukherkjee’s known apparatus for training and simultaneously using a first GAN, a second GAN, and a third GAN and pixel-wise aggregation to determine an image of the anomaly to Miao’s known process of training and using a GAN to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that combining different GANs improves over stand-alone GANs through understanding of distributed features, overcomes the limitation of data unavailability, and/or provides understanding of information variance. See, e.g., Mukherkjee, P.1, ¶1. Regarding claim 19, Miao discloses the training set of pre-operative images of the anomaly are determined from a set of pre-operative MRI images of the body part (the training set of pre-operative images of the anomaly, tumor/lesion/glioma, are determined from a set of pre-operative MRI images of the body part, brain, P.1, ¶1-4, P.2, ¶2-5, Figs. 1 and 2), the training set of post-operative images of the anomaly are determined from a set of post-operative MRI images of the body part (the training set of post-operative images of the anomaly, tumor/lesion/glioma are determined from a set of post-operative MRI images of the body part, brain, P.1, ¶1-4, P.2, ¶2-5, Figs. 1 and 2), and each of the set of pre-operative MRI images and each of the set of post-operative images capture the body part from a predefined angular view with respect to a central axis of the body part (each of the set of pre-operative MRI images and each of the set of post-operative images capture the body part, brain, from a predefined angular view with respect to a central axis of the body part, P.1, ¶1-4, P.2, ¶2-5, Figs. 
1 and 2 demonstrate that the pre-operative MRI images and the post-operative MRI images are captured from a predefined angular view with respect to a central axis of the body part, i.e., an axial view). Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Miao in further view of Mukherkjee in further view of Hooper as in claim 19 above, and further in view of Hindawi. Regarding claim 20, Miao discloses the training set of pre-operative images and the training set of post-operative images and an abnormality in the body part captured in the set of pre-operative MRI images and the set of post-operative MRI images respectively (the training set of pre-operative images and the training set of post-operative images and an abnormality, tumor/lesion/glioma, in the body part, brain, captured in the set of pre-operative MRI images and the set of post-operative MRI images respectively, P.1, ¶1-4, P.2, ¶2-5, Figs. 1 and 2). However, Miao does not appear to disclose the training set of images are determined using an encoder-decoder convolutional neural network pretrained to detect the abnormality in the body part captured in the set of MRI images. However, in the same field of endeavor of generative adversarial networks and magnetic resonance imaging, Hindawi teaches the training set of images are determined using an encoder-decoder convolutional neural network pretrained to detect the abnormality in the body part captured in the set of MRI images (training set of images for one or more GANs are determined using an encoder-decoder convolutional neural network, U-Net, pretrained to detect abnormal region, tumor, in the body part, brain, captured in the set of MRI images, P.1, ¶1, P.2, ¶1, P.3, ¶10 – P.4, ¶2). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Hindawi’s known technique of using an encoder-decoder U-Net CNN pretrained to detect the abnormality in the body part captured in the set of MRI images to Miao in further view of Mukherkjee in further view of Hooper’s known apparatus for training and using a combination of GANs to determine a post-operative image of an anomaly from an input pre-operative image of the anomaly to achieve the predictable result that using a U-Net encoder-decoder CNN produces a more precise segmentation map and/or produces locally and globally coherent images. See, e.g., Hindawi, P.3, ¶10. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Johnathan Maynard whose telephone number is (571)272-7977. The examiner can normally be reached 10 AM - 6 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Keith Raymond can be reached at 571-270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.M./Examiner, Art Unit 3798 /KEITH M RAYMOND/Supervisory Patent Examiner, Art Unit 3798

Prosecution Timeline

Nov 12, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594084
Ultrasound Device for Use with Synthetic Cavitation Nuclei
2y 5m to grant Granted Apr 07, 2026
Patent 12588817
SYSTEMS AND METHODS FOR GENERATING DIAGNOSTIC SCAN PARAMETERS FROM CALIBRATION IMAGES
2y 5m to grant Granted Mar 31, 2026
Patent 12575734
DEVICES AND RELATED ASPECTS FOR MAGNETIC RESONANCE IMAGING-BASED IN- SITU TISSUE CHARACTERIZATION
2y 5m to grant Granted Mar 17, 2026
Patent 12571862
B1 FIELD MAP WITH CONTRAST MEDIUM INJECTION
2y 5m to grant Granted Mar 10, 2026
Patent 12544142
Method and System for Associating Pre-Operative Plan with Position Data of Surgical Instrument
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
39%
Grant Probability
46%
With Interview (+6.9%)
3y 10m
Median Time to Grant
Low
PTA Risk
Based on 189 resolved cases by this examiner. Grant probability derived from career allow rate.
