DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
Claims 1-14 are pending in this application.
Claims 15-25 are cancelled.
Election/Restrictions
Applicant’s election without traverse of Group I, Claims 1-14, drawn to an image reconstructing method, in the reply filed on 12/17/2025 is acknowledged. The restriction requirement also set forth Group II, Claims 15-25, drawn to an image generation training method.
Claims 15-25 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. As noted above, claims 15-25 have also been cancelled.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 9-11, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Jo et al. (“Deep Arbitrary HDRI: Inverse Tone Mapping With Controllable Exposure Changes”), hereinafter Jo, in view of Qu et al. (“Synthesized 7T MRI from 3T MRI via Deep Learning in Spatial and Wavelet Domains”), hereinafter Qu.
Regarding claim 1, Jo teaches an image reconstructing method for generating an output image according to an input image and a target EV (exposure value), comprising:
(a) extracting at least one first feature map of the input image (Jo teaches “compress[ing] an input image into the latent representation in the encoder” in Section III(A));
(b) synthesizing at least one (Jo teaches that “the target EV is input into the exposure control network and the brightness feature generator in the form of a matrix with spatial dimensions of the concatenated image or feature map” in Section III(A). This feature map is interpreted as equivalent to the feature map which is associated with the input image as it has the dimensions of the input image. The target EV is expanded (synthesized) to the spatial dimensions of the aforementioned feature map (see figure 3). This expanded matrix is interpreted as the third feature map, which is input into both the exposure control network and the brightness feature generator. See also FIG. 2);
(c) performing (Jo teaches that “Spatially-adaptive normalization restores the high frequency component upon changes in the exposure value by providing different weights and bias for each pixel during the denormalization process. The gamma adjusts the variance of the feature values. It gives the effect like adjusting brightness variance. Then, the beta shifts the feature values, which has a role of fine-tuning brightness value” in Section III(A)(1); see also FIGS. 4 and 5. Here, the result of the spatially-adaptive normalization is interpreted as the brightness transformation (see the output of the brightness feature generator, which is interpreted as equivalent to the fourth feature maps that are based on the expanded target EV matrix (third feature map)), wherein the normalization is interpreted as equivalent to the claimed brightness transformation); and
(d) synthesizing the input image with the fourth feature maps to generate the output image (Jo teaches that the input image is concatenated with the target EV to identify the low frequency component and the brightness feature maps are processed to produce the high frequency component, and the low and high frequency components are added by the exposure control network to produce the output image (EV +1.5 in Figure 2). See also Section III and “the low frequency component restored through the EV conditional convolution and the high frequency component restored through the SA Norm are added element-wise, the exposure value of the image is changed by the target EV” in Figure 3).
Jo fails to teach a second feature map; (c) performing affine brightness transformation to the third feature map to generate fourth feature maps and synthesizing the input image with the fourth feature maps to generate the output image.
However, Qu teaches multiple feature maps being generated from the input image (Qu, see figure 3 and Section 3.1);
(c) performing affine brightness transformation to the third feature map to generate fourth feature maps (Qu teaches “transform[ing] feature maps using learned affine mappings” in Section 3.2.1; Qu’s teaching of the affine transformations can be combined with Jo’s teaching of the brightness transformation to teach the above limitation) and synthesizing the input image with the fourth feature maps to generate the output image (Qu teaches that the method takes “the wavelet modulated feature maps and generates the residual image that is added to the 3T image for generating the 7T image” in Section 3.3.2; here, the wavelet modulated feature maps are equivalent to the transformed feature maps taught in the above citation. These modulated feature maps are synthesized with the input image (3T image) to generate the output image (7T image). See also Section 3.5 which states that “the input 3T image was processed slice by slice to synthesize the final 7T image”).
Jo and Qu are both considered analogous to the claimed invention because they are in the same field of synthesizing images to improve image quality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jo to incorporate the teachings of Qu and include “multiple feature maps being generated from the input image; (c) performing affine brightness transformation to the third feature map to generate fourth feature maps and synthesizing the input image with the fourth feature maps to generate the output image”. The motivation for doing so would have been that the “synthesized images led to significant improved overall quality (p < 0.01), image contrast (p < 0.01), and outlines of deep brain structures (p < 0.01) compared to the original 3T images” while “synthesizing high-quality 7T images with better tissue contrast and greater details”, as suggested by Qu in Section 4.6 and the Abstract, respectively.
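For illustration only (not part of the record, and not the references’ actual implementations): the EV-expansion step mapped above to the claimed third feature map can be sketched in numpy, with hypothetical names and shapes.

```python
import numpy as np

def expand_target_ev(target_ev, height, width):
    # Broadcast a scalar target EV into a matrix matching the spatial
    # dimensions of a feature map (cf. Jo, Section III(A)).
    return np.full((height, width), float(target_ev))

def concat_with_ev(feature_map, target_ev):
    # Concatenate a (C, H, W) feature map with the expanded EV plane
    # along the channel axis, as shown schematically in Jo's Figure 3.
    c, h, w = feature_map.shape
    ev_plane = expand_target_ev(target_ev, h, w)[np.newaxis, ...]
    return np.concatenate([feature_map, ev_plane], axis=0)

features = np.random.rand(8, 4, 4)        # hypothetical feature map
combined = concat_with_ev(features, 1.5)  # target EV = +1.5 -> (9, 4, 4)
```

The expanded EV plane carries the same spatial dimensions as the feature map it is concatenated with, which is the basis for interpreting it as a feature map associated with the input image.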
Regarding claim 2, Jo and Qu teach the image reconstructing method of claim 1,
wherein the target EV is a non-integer (Jo teaches that the target EV is “in the form of a matrix with spatial dimensions of the concatenated image or feature map” in Section III(A). See also Fig. 3).
Regarding claim 3, Jo and Qu teach the image reconstructing method of claim 2,
wherein the image reconstructing method refers to at least one ground truth image to generate the output image (Jo teaches a ground truth image used in the training process of the model which is used to generate the output image in Section III(B)),
wherein an EV of the ground truth image is an integer and an EV of the output image is a non-integer (Jo teaches “the exposure control network learns N target EVs represented by T” wherein T is an integer and “the L1 loss calculates the pixel-wise mean absolute error (MAE) between the image generated by changing the input image I^i by EV T and the ground truth image I^(i+T)” in Section III(B). Here, the EV of the ground truth image is T, which is an integer. Since the output image is a synthesized version of multi-exposure images (see Figure 2), it is inherent that the EV of the output image is a non-integer).
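For context on the integer/non-integer EV distinction discussed above, the standard photographic relation is that a change of T exposure values scales linear exposure by 2**T, so non-integer EVs fall between the integer stops. This is a general definition offered for illustration, not a quotation from Jo or Qu.

```python
# Exposure value arithmetic: changing an image's EV by T scales its
# linear exposure by a factor of 2**T.
def exposure_scale(ev_change):
    return 2.0 ** ev_change

assert exposure_scale(1.0) == 2.0    # +1 EV doubles exposure
assert exposure_scale(-1.0) == 0.5   # -1 EV halves exposure
# A non-integer target EV of +1.5 lies between the +1 and +2 stops.
assert 2.0 < exposure_scale(1.5) < 4.0
```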
Regarding claim 4, Jo and Qu teach the image reconstructing method of claim 1,
wherein the step (a) uses a hierarchical U-Net structure to extract the first feature map (Jo teaches a method which “compresses an input image into the latent representation in the encoder” wherein “we maintain the encoder and skip connection of U-Net” in Section III(A). Here, the U-Net structure is hierarchical, as its encoder and decoder act as the hierarchical structure).
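As a purely illustrative sketch of the hierarchical extraction discussed above (hypothetical shapes; average pooling stands in for learned encoder layers, which is not Jo's actual network):

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling over a (C, H, W) array, standing in for one
    # learned encoder stage of a U-Net.
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def hierarchical_features(image, levels=3):
    # Collect feature maps at successively coarser resolutions, as a
    # U-Net encoder with skip connections would retain them.
    maps = [image]
    for _ in range(levels - 1):
        maps.append(downsample(maps[-1]))
    return maps

feats = hierarchical_features(np.random.rand(1, 16, 16), levels=3)
# Resolutions halve at each level: 16x16 -> 8x8 -> 4x4.
```

The retained multi-resolution maps are what makes the structure "hierarchical" in the sense relied on in the rejection.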
Regarding claim 9, Jo and Qu teach the image reconstructing method of claim 1,
wherein the step (b) synthesizes the second feature map by an implicit module (Jo teaches that the “network generates feature maps (called brightness features) abstracted at different levels from the luminance component of the input image and the target EV” in Section III(A)(1). Here, the brightness generator is interpreted as equivalent to the claimed implicit module).
Regarding claim 10, Jo and Qu teach the image reconstructing method of claim 1,
wherein the fourth feature map is generated by scaling up the third feature map (Qu teaches that “the WAT layer modulates the feature maps F by scaling them linearly with γl” in Section 3.2). The same motivation as applied to claim 1 applies here.
Regarding claim 11, Jo and Qu teach the image reconstructing method of claim 1,
wherein the step (d) generates the output image by adding one of the fourth feature maps to a multiplying result of another one of the fourth feature maps (Jo teaches a multiplying result of a fourth feature map wherein “the obtained gamma and beta are multiplied and added to normalized activation element-wise” in Section III(A)(1), then the fourth feature maps are merged as shown in FIG. 2, wherein the merging is interpreted as equivalent to the adding process. Qu additionally teaches the multiplying result of a fourth feature map in Section 3.2 wherein the intermediate feature maps are multiplied by the affine parameters).
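For illustration, the multiply-then-add pattern cited above (gamma multiplied and beta added element-wise) reduces to a per-pixel affine modulation; the following minimal numpy sketch uses hypothetical values, not the references’ learned parameters.

```python
import numpy as np

def affine_modulate(normalized, gamma, beta):
    # Element-wise denormalization: scale by gamma, then shift by beta,
    # the multiply-then-add pattern cited from Jo and Qu.
    return gamma * normalized + beta

x = np.ones((2, 2))              # normalized activation (hypothetical)
gamma = np.full((2, 2), 2.0)     # hypothetical per-pixel scale
beta = np.full((2, 2), 0.5)      # hypothetical per-pixel shift
out = affine_modulate(x, gamma, beta)  # every element: 2.0 * 1.0 + 0.5
```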
Regarding claim 13, Jo and Qu teach the image reconstructing method of claim 1, further comprising:
repeatedly performing the steps (a), (b), (c), (d) to generate different ones of output images corresponding to different ones of the target EVs (Jo teaches that “multi-exposure images can be generated from the image taken with the middle exposure. For example, multi-exposure images of EV –3 to EV +3 can be generated by passing each target EV and image with EV 0 through the network 6 times” in Section III); and
generating a reconstructed image according to the different ones of the output images (Jo teaches generating the reconstructed image based on a merging algorithm applied to the multiple output images in Section III and Figure 2).
Regarding claim 14, Jo and Qu teach the image reconstructing method of claim 13,
wherein dynamic ranges of the different ones of the output images are lower than a dynamic range of the reconstructed image (Jo teaches “generating a LDR image with a different exposure from the original LDR image” in order to “generate multi-exposure images from a single LDR image and subsequently merge them into the HDR image” in Section 1, wherein the output HDR image (see figure 2) is interpreted as equivalent to the reconstructed image, and the multi-exposure images are interpreted as the output images which have a lower dynamic range than the output HDR image. See figure 2 and Section III).
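As a purely illustrative sketch of why the merged result has a higher dynamic range than any single exposure (a naive linear merge under simplified assumptions; not Jo's merging algorithm):

```python
import numpy as np

def simulate_ldr(scene, ev):
    # Hypothetical LDR capture: scale linear radiance by 2**ev, then
    # clip to [0, 1], discarding out-of-range detail.
    return np.clip(scene * 2.0 ** ev, 0.0, 1.0)

def merge_to_hdr(ldr_stack, evs):
    # Naive HDR merge: map each exposure back to linear radiance and
    # average, recovering a wider range than any single LDR frame.
    linear = [img / 2.0 ** ev for img, ev in zip(ldr_stack, evs)]
    return np.mean(linear, axis=0)

scene = np.array([0.1, 0.6, 3.0])   # linear radiance exceeding [0, 1]
evs = [-2, 0, 2]
stack = [simulate_ldr(scene, ev) for ev in evs]
hdr = merge_to_hdr(stack, evs)
# Each LDR frame is clipped at 1.0, but the merged result recovers
# values above 1.0 from the underexposed frame.
```

Each simulated LDR frame tops out at 1.0, while the merged image recovers radiance above that ceiling, matching the claim 14 relationship between the output images and the reconstructed image.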
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Jo et al. (“Deep Arbitrary HDRI: Inverse Tone Mapping With Controllable Exposure Changes”), hereinafter Jo, in view of Qu et al. (“Synthesized 7T MRI from 3T MRI via Deep Learning in Spatial and Wavelet Domains”), hereinafter Qu, and Dangi et al. (U.S. Publication No. 2021/0360179 A1), hereinafter Dangi.
Regarding claim 12, Jo and Qu teach the image reconstructing method of claim 1.
While Qu teaches using a deep learning network to perform the affine brightness transformation in Section 1, Jo and Qu fail to teach wherein the step (c) performing the affine brightness transformation to the fourth feature map to generate fifth feature maps by at least one CNN (Convolutional neural network).
However, Dangi teaches wherein the step (c) performing the affine brightness transformation to the fourth feature map to generate fifth feature maps by at least one CNN (Convolutional neural network) (Dangi teaches “the one or more trained neural networks output one or more affine coefficients based on use of the image data as input to the one or more trained neural networks” wherein “Generating the one or more maps can include generating a first map at least by transforming the image data using the one or more affine coefficients. The image data can include luminance channel data corresponding to an image, so that transforming the image data using the one or more affine coefficients includes transforming the luminance channel data using the one or more affine coefficients” in para. [0252], wherein the one or more trained neural networks can be CNNs as shown in paras. [0240] and [0244]).
Jo, Qu, and Dangi are all considered analogous to the claimed invention because they are in the same field of adjusting/analyzing feature maps to improve image quality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jo (as modified by Qu) to incorporate the teachings of Dangi and include “wherein the step (c) performing the affine brightness transformation to the fourth feature map to generate fifth feature maps by at least one CNN (Convolutional neural network)”. The motivation for doing so would have been that the “use of affine coefficients and/or local linearity constraints for generating maps can produce higher quality spatially varying image modifications than systems that do not use affine coefficients and/or local linearity constraints for generating maps, for instance due to better alignment between the image data and the maps, and reduced halo effects at the boundaries of depicted objects”, as suggested by Dangi in para. [0087].
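For illustration only: the pattern cited from Dangi (a network predicting affine coefficients that are then applied to luminance channel data) can be sketched with a hand-rolled 3x3 convolution standing in for a trained CNN layer. Kernels, shapes, and names are hypothetical; a real network would learn its weights.

```python
import numpy as np

def conv3x3(x, kernel):
    # Minimal valid-mode 3x3 convolution over a 2-D array, standing in
    # for one layer of a trained CNN.
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * kernel)
    return out

rng = np.random.default_rng(0)
# Hypothetical "trained" kernels predicting per-pixel affine
# coefficients from the image data (random here, for illustration).
k_gamma, k_beta = rng.normal(size=(2, 3, 3))

luminance = rng.random((6, 6))        # luminance channel data
gamma = conv3x3(luminance, k_gamma)   # predicted per-pixel scale map
beta = conv3x3(luminance, k_beta)     # predicted per-pixel shift map
# Apply the predicted affine coefficients to the (cropped) luminance.
transformed = gamma * luminance[1:-1, 1:-1] + beta
```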
Allowable Subject Matter
Claims 5-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter.
The best prior art of record is Jo, Qu, Dangi, and Yang et al. (U.S. Publication No. 20230177643 A1), hereinafter Yang. The prior art, applied alone or in combination, fails to anticipate or render obvious claims 5-8.
Claim 5
Regarding claim 5, Jo and Qu teach the image reconstructing method of claim 1,
wherein the step (a) comprises:
extracting the first feature map with a first size and the first feature map with a second size (Qu teaches extracting feature maps of different sizes in Section 3.3.1. See also Figure 3).
Jo further teaches performing concatenation to generate a second feature map in figures 2 and 3.
Yang further teaches scaling and concatenating feature maps.
However, neither Jo, nor Qu, nor Dangi, nor Yang, nor the combination, teaches performing concatenation to the first scale-up feature map and the first feature map with the second size to generate the second feature map.
Claim 6
Regarding claim 6, Jo and Qu teach the image reconstructing method of claim 1,
wherein the step (a) comprises:
extracting the first feature map with a first size and the first feature map with a second size (Qu teaches extracting feature maps of different sizes in Section 3.3.1. See also Figure 3).
Jo further teaches performing concatenation to generate a second feature map in figures 2 and 3.
Yang further teaches scaling and concatenating feature maps.
However, neither Jo, nor Qu, nor Dangi, nor Yang, nor the combination, teaches performing concatenation to a fifth scale-up feature map of a fifth feature map and the first feature map with the second size to generate the second feature map; wherein the fifth feature map is generated via synthesizing the target EV and a sixth feature map; wherein the sixth feature map is generated by performing concatenation to the first feature map with the first size.
Claims 7-8 include allowable subject matter by virtue of being dependent upon claim 6.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Furumura et al. (U.S. Publication No. 2013/0229546 A1) teaches a method of combining LDR images to output a HDR image.
Kang et al. (KR 20220028814 A, see English translation) teaches a method of inverse tone mapping and method for inverse mapping through exposure changing.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLA G ALLEN whose telephone number is (703)756-5315. The examiner can normally be reached M-F 7:30am - 4:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco, can be reached on (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Kyla Guan-Ping Tiao Allen/
Examiner, Art Unit 2661
/JOHN VILLECCO/Supervisory Patent Examiner, Art Unit 2661