DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-20 are currently pending in the application filed November 16, 2022.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 08/05/2024 and 08/28/2024 have been considered by the Examiner.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because it contains legal phraseology often used in patent claims, such as “comprises,” and because colons are present. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
The disclosure is objected to because of the following informalities:
In paragraph [0056], "subsampledfeature" should read "subsampled feature".
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 6, 11, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 11189016 B1) in view of Hassen (“Multifocus Image Fusion Using Local Phase Coherence Measurement”) and Carlo (CA 2497212 C).
Regarding Claim 1, Chen teaches:
A computer-implemented method for simulating images with different contrast enhancement levels, the computer-implemented method comprising (Chen, [Col 11, Line 37]; “the processing device 140 may be implemented by a computing device 200 having one or more components as described in FIG. 2”)
providing an iterative model comprising a plurality of iterations, wherein a given iteration comprises a deep learning model configured to i) take an input comprising a synthesized image generated by a previous iteration, wherein the synthesized image has a first intermediate contrast enhancement level, (Chen, [Col 23, Line 29]; “The processing device 140 may update an estimated intermediate image by performing, based on the initial estimated intermediate image, a plurality of iterations of the first object function.”)
and ii) output a corresponding synthesized image has a second intermediate contrast enhancement level, wherein the second intermediate contrast enhancement level is lower than the first intermediate contrast enhancement level; and (Chen, [Col 28, Line 1]; “In some embodiments, the intermediate image determined by the first iterative operation may have low contrast in extrapolated spatial frequency largely due to the convolution of the real object with the PSF of the image acquisition device 110 (e.g., 2D-SIM/SD-SIM) and the extension of bandwidth due to the sparsity and continuity constraints.”)
Chen fails to teach:
applying the iterative model to a first input image corresponding to a higher contrast enhancement level and a second input image corresponding to a lower contrast enhancement level, and outputting a plurality of synthesized images corresponding to a plurality of intermediate contrast enhancement levels between the higher contrast enhancement level and the lower contrast enhancement.
Hassen teaches:
applying the iterative model to a first input image corresponding to a higher contrast enhancement level (Hassen, [Pg 62, Paragraph 2]; “…by applying the proposed approach to … a high contrast/blurred image”) and a second input image corresponding to a lower contrast enhancement level, and (Hassen, [Pg 62, Paragraph 2]; “…by applying the proposed approach to a low contrast/sharp.”) outputting a plurality of synthesized images corresponding to a plurality of intermediate contrast enhancement levels between the higher contrast enhancement level and the lower contrast enhancement. (Hassen, [Pg 62, Paragraph 2]; “…obtain a fused image with both high contrast and high sharpness.”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen with Hassen and Carlo. The motivation for the combination is to be able to generate intermediate images from high and low contrast enhancement images comparatively. (Hassen, [Pg 59, Paragraph 3]; “This module is where the actual combination of multi-resolution coefficients is performed. The key idea of our approach is to maintain the phases of the coefficients with maximal local sharpness while boost their magnitudes to achieve the maximal local energy. By doing so, sharp and high contrast features from both images are combined.”)
Hassen fails to teach:
outputting a plurality of synthesized images corresponding to a plurality of intermediate contrast enhancement levels between the higher contrast enhancement level and the lower contrast enhancement
Carlo teaches:
outputting a plurality of synthesized images (Carlo, [Pg 3, Line 7]; “identifying contrast values from first and second sensor images to form an intermediate contrast map”) corresponding to a plurality of intermediate contrast enhancement levels between the higher contrast enhancement level (Carlo, [Pg 10, Line 7]; “the contrast values … relatively higher in value”) and the lower contrast enhancement (Carlo, [Pg 10, Line 6]; “the contrast values … relatively lower value”).
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen with Hassen and Carlo. The motivation for the combination is to be able to generate intermediate images from high and low contrast enhancement images comparatively. (Chen, Fig 10)
Figure 10 (Chen; greyscale image)
Regarding Claim 6, Chen teaches:
wherein the deep learning model in each iteration is trained based at least in part on a simulated truth image. (Chen, [Col 46, Line 1]; “As shown in FIG. 14D, compared to the synthetic ground truth, the average peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values of SR images reconstructed with different techniques from raw images may corrupt with different levels of noise (0%, 11%, 25%, 50%, 80%)”).
Regarding Claim 11, Chen teaches:
A non-transitory computer-readable storage medium including instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising (Chen, [Col 2, Line 16]; “The non-transitory computer readable medium may include at least one set of instructions for image processing.”).
providing an iterative model comprising a plurality of iterations, wherein a given iteration comprises a deep learning model configured to i) take an input comprising a synthesized image (Chen, [Col 23, Line 36]; “The processing device 140 may update an estimated intermediate image by performing, based on the initial estimated intermediate image, a plurality of iterations of the first object function.”) generated by a previous iteration, wherein the synthesized image has a first intermediate contrast enhancement level, (Chen, [Col 23, Line 29]; “…an initial estimated intermediate image based on the preliminary image.”)
and ii) output a corresponding synthesized image has a second intermediate contrast enhancement level, wherein the second intermediate contrast enhancement level is lower than the first intermediate contrast enhancement level; and (Chen, [Col 28, Line 1]; “In some embodiments, the intermediate image determined by the first iterative operation may have low contrast in extrapolated spatial frequency largely due to the convolution of the real object with the PSF of the image acquisition device 110 (e.g., 2D-SIM/SD-SIM) and the extension of bandwidth due to the sparsity and continuity constraints.”)
Chen fails to teach:
applying the iterative model to a first input image corresponding to a higher contrast enhancement level and a second input image corresponding to a lower contrast enhancement level, and outputting a plurality of synthesized images corresponding to a plurality of intermediate contrast enhancement levels between the higher contrast enhancement level and the lower contrast enhancement.
Hassen teaches:
applying the iterative model to a first input image corresponding to a higher contrast enhancement level (Hassen, [Pg 62, Paragraph 2]; “…by applying the proposed approach to … a high contrast/blurred image”) and a second input image corresponding to a lower contrast enhancement level, and (Hassen, [Pg 62, Paragraph 2]; “…by applying the proposed approach to a low contrast/sharp.”) outputting a plurality of synthesized images corresponding to a plurality of intermediate contrast enhancement levels between the higher contrast enhancement level and the lower contrast enhancement. (Hassen, [Pg 62, Paragraph 2]; “…obtain a fused image with both high contrast and high sharpness.”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen with Hassen and Carlo. The motivation for the combination is to be able to generate intermediate images from high and low contrast enhancement images comparatively. (Hassen, [Pg 59, Paragraph 3]; “This module is where the actual combination of multi-resolution coefficients is performed. The key idea of our approach is to maintain the phases of the coefficients with maximal local sharpness while boost their magnitudes to achieve the maximal local energy. By doing so, sharp and high contrast features from both images are combined.”)
Hassen fails to teach:
outputting a plurality of synthesized images corresponding to a plurality of intermediate contrast enhancement levels between the higher contrast enhancement level and the lower contrast enhancement
Carlo teaches:
outputting a plurality of synthesized images (Carlo, [Pg 3, Line 7]; “identifying contrast values from first and second sensor images to form an intermediate contrast map”) corresponding to a plurality of intermediate contrast enhancement levels between the higher contrast enhancement level (Carlo, [Pg 10, Line 7]; “the contrast values … relatively higher in value”) and the lower contrast enhancement (Carlo, [Pg 10, Line 6]; “the contrast values … relatively lower value”).
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen with Hassen and Carlo. The motivation for the combination is to be able to generate intermediate images from high and low contrast enhancement images comparatively. (Chen, Fig 10)
Figure 10 (Chen; greyscale image)
Regarding Claim 16, Chen teaches:
wherein the deep learning model in each iteration is trained based at least in part on a simulated truth image. (Chen, [Col 46, Line 1]; “As shown in FIG. 14D, compared to the synthetic ground truth, the average peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values of SR images reconstructed with different techniques from raw images may corrupt with different levels of noise (0%, 11%, 25%, 50%, 80%)”).
Claims 2-5, 10, 12-15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 11189016 B1), Carlo (CA 2497212 C), and Hassen (“Multifocus Image Fusion Using Local Phase Coherence Measurement”) as applied to Claim 1 above, and further in view of Dalmaz (“ResViT: Residual vision transformers for multi-modal medical image synthesis”).
Regarding Claim 2, the combination of Chen, Carlo, and Hassen fails to teach:
wherein the deep learning model comprises a transformer model.
Dalmaz teaches:
wherein the deep learning model comprises a transformer model. (Dalmaz, [Pg 2, Col 1, Paragraph 2]; “We introduce the first adversarial model for medical image synthesis with a transformer-based generator to translate between multi-modal imaging data”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Dalmaz. The motivation for the combination is to be able to apply transformer models to create better quality images. (Dalmaz, [Pg 2, Col 1, Paragraph 1]; “The bottleneck comprises novel aggregated residual transformer (ART) blocks to synergistically preserve local and global context, with a weight-sharing strategy to minimize model complexity. To improve practical utility, a unified ResViT implementation is introduced that consolidates models for numerous source-target configurations. Demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT from MRI.”)
Regarding Claim 3, the combination of Chen, Carlo and Hassen fails to teach:
wherein the deep learning model comprises a sequence of global transformer blocks
Dalmaz teaches:
wherein the deep learning model comprises a sequence of global transformer blocks (Dalmaz, [Pg 3, Col 1, Paragraph 1]; “ResViT leverages a hybrid architecture of deep convolutional operators and transformer blocks to simultaneously learn high-resolution structural and global contextual features”.)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Dalmaz. The motivation for the combination is to be able to apply transformer blocks for the transformer models. (Dalmaz, Fig 1)
Figure 1 (Dalmaz; greyscale image)
Regarding Claim 4, the combination of Chen, Carlo and Hassen fails to teach:
wherein at least one of the global transformer blocks comprises a subsample process to generate a sub-image as an attention feature map.
Dalmaz teaches:
wherein at least one of the global transformer blocks comprises a subsample process to generate a sub-image as an attention feature map (Dalmaz, [Pg 13, Col 1, Paragraph 3], “While attention maps can be distributed across image regions, they mainly capture implicit contextual information via modification of local CNN features. Since feature representations are primarily extracted via convolutional filtering, the resulting model can still manifest limited expressiveness for global context.”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Dalmaz. The motivation for the combination is to be able to generate an attention feature map just like the downsampled feature maps. (Dalmaz, [Pg 9, Col 1, Paragraph 2]; “To interpret the information that self-attention mechanisms focus on during synthesis tasks, we computed and visualized the attention maps as captured by the transformer modules in ResViT. Attention maps were calculated based on the Attention Rollout technique, and a single average map was extracted for a given transformer module”)
Regarding Claim 5, the combination of Chen, Carlo and Hassen fails to teach:
wherein the sub-image is sampled at a stride to extract global information from the image data.
Dalmaz teaches:
wherein the sub-image is sampled at a stride to extract global information from the image data. (Dalmaz, [Pg 4, Col 2, Paragraph 1]; “a downsampling block (DS): [equation image omitted] where DS is implemented as a stack of strided convolutional layers, [equation image omitted] are downsampled feature maps with W0 = W/M, H0 = H/M, M denoting the downsampling factor. A transformer branch then processes fj to extract contextual information.”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Dalmaz. The motivation for the combination is to be able to extract global information from the image data sampled at a stride just like how the contextual information is extracted. (Dalmaz, [Pg 13, Col 1, Paragraph 3], “Since feature representations are primarily extracted via convolutional filtering, the resulting model can still manifest limited expressiveness for global context.”)
Regarding Claim 10, the combination of Chen, Carlo and Hassen fails to teach:
wherein the first input image or the second input image is acquired by a transforming magnetic resonance (MR) device.
Dalmaz teaches:
wherein the first input image or the second input image is acquired by a transforming magnetic resonance (MR) device. (Dalmaz, [Pg 5, Col 2, Paragraph 3]; “We demonstrated the proposed ResViT model on two multi-contrast brain MRI datasets”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Dalmaz. The motivation for the combination is to be able to use MRI images specifically as input. (Dalmaz, Fig 5.)
Figure 5 (Dalmaz; greyscale image)
Regarding Claim 12, the combination of Chen, Carlo and Hassen fails to teach:
wherein the deep learning model comprises a transformer model.
Dalmaz teaches:
wherein the deep learning model comprises a transformer model. (Dalmaz, [Pg 2, Col 1, Paragraph 2]; “We introduce the first adversarial model for medical image synthesis with a transformer-based generator to translate between multi-modal imaging data”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Dalmaz. The motivation for the combination is to be able to apply transformer models to create better quality images. (Dalmaz, [Pg 2, Col 1, Paragraph 1]; “The bottleneck comprises novel aggregated residual transformer (ART) blocks to synergistically preserve local and global context, with a weight-sharing strategy to minimize model complexity. To improve practical utility, a unified ResViT implementation is introduced that consolidates models for numerous source-target configurations. Demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT from MRI.”)
Regarding Claim 13, the combination of Chen, Carlo and Hassen fails to teach:
wherein the deep learning model comprises a sequence of global transformer blocks.
Dalmaz teaches:
wherein the deep learning model comprises a sequence of global transformer blocks (Dalmaz, [Pg 3, Col 1, Paragraph 1]; “ResViT leverages a hybrid architecture of deep convolutional operators and transformer blocks to simultaneously learn high-resolution structural and global contextual features.”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Dalmaz. The motivation for the combination is to be able to apply transformer blocks for the transformer models. (Dalmaz, Fig 1)
Figure 1 (Dalmaz; greyscale image)
Regarding Claim 14, the combination of Chen, Carlo and Hassen fails to teach:
wherein at least one of the global transformer blocks comprises a subsample process to generate a sub-image as an attention feature map.
Dalmaz teaches:
wherein at least one of the global transformer blocks comprises a subsample process to generate a sub-image as an attention feature map. (Dalmaz, [Pg 13, Col 1, Paragraph 3], “While attention maps can be distributed across image regions, they mainly capture implicit contextual information via modification of local CNN features. Since feature representations are primarily extracted via convolutional filtering, the resulting model can still manifest limited expressiveness for global context.”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Dalmaz. The motivation for the combination is to be able to generate an attention feature map just like the downsampled feature maps. (Dalmaz, [Pg 9, Col 1, Paragraph 2]; “To interpret the information that self-attention mechanisms focus on during synthesis tasks, we computed and visualized the attention maps as captured by the transformer modules in ResViT. Attention maps were calculated based on the Attention Rollout technique, and a single average map was extracted for a given transformer module”)
Regarding Claim 15, the combination of Chen, Carlo and Hassen fails to teach:
wherein the sub-image is sampled at a stride to extract global information from the image data.
Dalmaz teaches:
wherein the sub-image is sampled at a stride to extract global information from the image data. (Dalmaz, [Pg 4, Col 2, Paragraph 1]; “a downsampling block (DS): [equation image omitted] where DS is implemented as a stack of strided convolutional layers, [equation image omitted] are downsampled feature maps with W0 = W/M, H0 = H/M, M denoting the downsampling factor. A transformer branch then processes fj to extract contextual information.”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Dalmaz. The motivation for the combination is to be able to extract global information from the image data sampled at a stride just like how the contextual information is extracted. (Dalmaz, [Pg 13, Col 1, Paragraph 3], “Since feature representations are primarily extracted via convolutional filtering, the resulting model can still manifest limited expressiveness for global context.”).
Regarding Claim 20, the combination of Chen, Carlo and Hassen fails to teach:
wherein the first input image or the second input image is acquired by a transforming magnetic resonance (MR) device.
Dalmaz teaches:
wherein the first input image or the second input image is acquired by a transforming magnetic resonance (MR) device. (Dalmaz, [Pg 5, Col 2, Paragraph 3]; “We demonstrated the proposed ResViT model on two multi-contrast brain MRI datasets”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Dalmaz. The motivation for the combination is to be able to use MRI images specifically as input. (Dalmaz, Fig 5.)
Figure 5 (Dalmaz; greyscale image)
Claims 7, 8, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 11189016 B1), Carlo (CA 2497212 C), and Hassen (“Multifocus Image Fusion Using Local Phase Coherence Measurement”) as applied to Claim 1 above, and further in view of Greg (AU2018346938A1).
Regarding Claim 7, the combination of Chen, Carlo and Hassen fails to teach:
wherein the iterative model or the deep learning model is trained utilizing a training dataset comprising a pre-contrast image, a post-contrast image and a low-dose image.
Greg teaches:
wherein the iterative model or the deep learning model is trained utilizing a training dataset comprising a pre-contrast image (Greg, [0015]; “The inputs … zero-contrast dose image 100”), a post-contrast image (Greg, [0015]; “The inputs … full-contrast dose image 116”), and a low-dose image (Greg, [0015]; “The … low-contrast dose image 102”).
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Greg. The motivation for the combination is to be able to apply various contrast level images for the training model. (Greg, Fig 1.)
Figure 1 (Greg; greyscale image)
Regarding Claim 8, the combination of Chen, Carlo and Hassen fails to teach:
wherein the iterative model or the deep learning model is trained utilizing a training dataset comprising a first image corresponding to a first contrast dose level, a second image corresponding to a second contrast dose level and a third image corresponding to a third contrast dose level, wherein the first contrast dose level is higher than the second contrast dose level which is higher than the third contrast dose level.
Greg teaches:
wherein the iterative model or the deep learning model is trained utilizing a training dataset comprising a first image corresponding to a first contrast dose level (Greg, [0016]; “an additional dose (eg, 90%) of contrast is administered to give a total of 100% dose, and a full-dose image 204 is then obtained.”), a second image corresponding to a second contrast dose level (Greg, [0016]; “…, a low-dose (eg, 10%) contrast is administered and a low-dose image 202 is obtained”), and a third image corresponding to a third contrast dose level, wherein the first contrast dose level is higher than the second contrast dose level which is higher than the third contrast dose level (Greg, [0016]; “After the pre-contrast (zero-dose) image 200 is obtained”).
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Greg. The motivation for the combination is to be able to obtain an image corresponding to each contrast dose level. (Greg, Fig 2.)
Figure 2 (Greg; greyscale image)
Regarding Claim 17, the combination of Chen, Carlo and Hassen fails to teach:
wherein the iterative model or the deep learning model is trained utilizing a training dataset comprising a pre-contrast image, a post-contrast image and a low-dose image.
Greg teaches:
wherein the iterative model or the deep learning model is trained utilizing a training dataset comprising a pre-contrast image (Greg, [0015]; “The inputs … zero-contrast dose image 100”), a post-contrast image (Greg, [0015]; “The inputs … full-contrast dose image 116”), and a low-dose image (Greg, [0015]; “The … low-contrast dose image 102”).
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Greg. The motivation for the combination is to be able to apply various contrast level images for the training model. (Greg, Fig 1.)
Figure 1 (Greg; greyscale image)
Regarding Claim 18, the combination of Chen, Carlo and Hassen fails to teach:
wherein the iterative model or the deep learning model is trained utilizing a training dataset comprising a first image corresponding to a first contrast dose level, a second image corresponding to a second contrast dose level and a third image corresponding to a third contrast dose level, wherein the first contrast dose level is higher than the second contrast dose level which is higher than the third contrast dose level.
Greg teaches:
wherein the iterative model or the deep learning model is trained utilizing a training dataset comprising a first image corresponding to a first contrast dose level (Greg, [0016]; “an additional dose (eg, 90%) of contrast is administered to give a total of 100% dose, and a full-dose image 204 is then obtained.”), a second image corresponding to a second contrast dose level (Greg, [0016]; “…, a low-dose (eg, 10%) contrast is administered and a low-dose image 202 is obtained”), and a third image corresponding to a third contrast dose level, wherein the first contrast dose level is higher than the second contrast dose level which is higher than the third contrast dose level (Greg, [0016]; “After the pre-contrast (zero-dose) image 200 is obtained”).
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Greg. The motivation for the combination is to be able to obtain an image corresponding to each contrast dose level. (Greg, Fig 2.)
Figure 2 (Greg; greyscale image)
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 11189016 B1), Carlo (CA 2497212 C), and Hassen (“Multifocus Image Fusion Using Local Phase Coherence Measurement”) as applied to Claim 1 above, and further in view of Greg (AU2018346938A1) and Dalmaz (“ResViT: Residual vision transformers for multi-modal medical image synthesis”).
Regarding Claim 9, the combination of Chen, Carlo and Hassen fails to teach:
wherein the second image is used as ground truth for the training.
Dalmaz teaches:
wherein the second image is used as ground truth for the training. (Dalmaz, [Pg 7, Col 2, Paragraph 2]; “Synthesis quality was assessed via PSNR and Structural Similarity Index (SSIM) [102]. Metrics were calculated between ground truth and synthesized target images.”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Greg and Dalmaz. The motivation for the combination is to use the ground truth image for calculation of metrics, which is necessary for training. (Dalmaz, [Pg 8, Col 1, Paragraph 2]; “Variant models were trained when transformer modules were ablated from ART blocks, when residual CNNs were ablated from transformer-retaining ART blocks, and when the adversarial loss term and the discriminator were ablated. In addition to PSNR and SSIM, we measured the Fréchet inception distance (FID) [103] between the synthesized and ground truth images to evaluate the importance of adversarial learning.”)
Regarding Claim 19, the combination of Chen, Carlo and Hassen fails to teach:
wherein the second image is used as ground truth for the training.
Dalmaz teaches:
wherein the second image is used as ground truth for the training. (Dalmaz, [Pg 7, Col 2, Paragraph 2]; “Synthesis quality was assessed via PSNR and Structural Similarity Index (SSIM) [102]. Metrics were calculated between ground truth and synthesized target images.”)
Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Chen, Carlo and Hassen with Greg and Dalmaz. The motivation for the combination is to use the ground truth image for calculation of metrics, which is necessary for training. (Dalmaz, [Pg 8, Col 1, Paragraph 2]; “Variant models were trained when transformer modules were ablated from ART blocks, when residual CNNs were ablated from transformer-retaining ART blocks, and when the adversarial loss term and the discriminator were ablated. In addition to PSNR and SSIM, we measured the Fréchet inception distance (FID) [103] between the synthesized and ground truth images to evaluate the importance of adversarial learning.”)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANGI SARKAR whose telephone number is (571)272-7262. The examiner can normally be reached M-F: 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIVANGI SARKAR/Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666