DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Notice to the Applicant
Limitations appearing inside {} indicate the limitations not taught by the cited prior art reference(s)/combination(s).
Response to Amendments
The Amendment filed 12/12/2025 in response to the Non-Final Office Action mailed 09/24/2025 has been entered. Claims 1, 4, 6, 8-15, 17, and 19 have been amended. No new matter has been introduced. The objections to claims 1-20 have been withdrawn in light of the amended claims. The rejections of claims 1, 4-6, 10, and 14 under 35 U.S.C. § 112(b) have been withdrawn in light of the amended claims.
Claims 1-20 are currently pending.
Response to Arguments/Remarks
Applicant’s arguments/remarks with respect to the references of record have been fully considered and are persuasive with regard to the rejections under 35 U.S.C. § 101, the rejections of claims 1 and 11 under 35 U.S.C. § 102, and the rejections under 35 U.S.C. § 103.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8, 9, 11, 12, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Shibata et al., US 20170316542 A1, hereinafter Shibata, in view of DeWeert et al., US 20190228506 A1, hereinafter DeWeert.
Regarding claim 1, Shibata teaches A method comprising:
accessing (1) a corrupted image of a scene captured by a camera (Shibata, ¶[0036]; the image receiving unit 10 receives an input image from another device such as a camera or a scanner; ¶[0111]; the input image [Y] is, in general, a blurred image), (2) an estimated true image of the scene (Shibata, ¶[0106]; reconstructed image [X]), (3) an estimated corruption operator f for the camera (Shibata, ¶[0108]; a blur matrix [B]; ¶[0109]; for example, a point spread function (PSF)), and (4) one or more uncertainty metrics (interpretation: the uncertainty metrics include R_f, the regularization term of f; ¶[0032]) for {the estimated corruption operator f that quantify an uncertainty in the estimate of the estimated corruption operator f} (Shibata, ¶[0106]; regularization term);
generating, by applying a corruption operation to the estimated true image and the estimated corruption operator f, a predicted corrupted image of the scene captured by the camera (Shibata ¶[0112]; applying the blurring function [B] to the reconstructed image [X]);
determining a difference between the predicted corrupted image and the corrupted image captured by the camera (Shibata, ¶[0112]; the error term E_data([X]) is a function including the input image [Y], the reconstructed image [X], and the blur matrix [B]. See Eq. 9, shown below, where X is the reconstructed image, B is the blurring function, B·X is the predicted corrupted image, and Y is the input (blurred) image; ¶[0111-0112]);
[Shibata, Eq. 9 (media_image1.png, greyscale)]
determining, based on the one or more uncertainty metrics for the estimated corruption operator f, a likelihood distribution for the estimated corruption operator f (Shibata, ¶[0093]; regularization strength estimating unit 240 estimates the regularization strengths (λ) (i.e., uncertainty) of the pixels of the input image based on the attribute reliability (i.e., likelihood)… the attribute reliability calculating unit 230 may estimate the regularization strengths λ for pixels of a partial area of the input image); and
updating, based on the likelihood distribution for the estimated corruption operator f and on the determined difference between the predicted corrupted image and the corrupted image captured by the camera, the estimated corruption operator f (Shibata, see Eq. 7, shown below, and Eq. 9, shown above, and ¶[0106-0107]; the optimization (i.e., updating) function E([X]) is the sum of the determined regularization term E_reg([X]) and an error term E_data([X]); and see Fig. 9, image reconstruction unit 250 updating the training information receiving unit 210).
[Shibata, Eq. 7 (media_image2.png, greyscale)]
[Shibata, Fig. 9 (media_image3.png, greyscale)]
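For illustration only, the optimization structure discussed above — a data term comparing the predicted corrupted image B·X against the observed image Y, plus a regularization term on the operator — can be sketched as follows. This is not Shibata's actual implementation: the FFT-based circular convolution, the quadratic regularizer, the gradient-descent update, and the step size are all assumptions of the sketch.

```python
import numpy as np

def blur(X, B):
    """Predicted corrupted image B*X via circular (FFT) convolution."""
    return np.real(np.fft.ifft2(np.fft.fft2(X) * np.fft.fft2(B, s=X.shape)))

def loss(B, X, Y, lam):
    """E(B) = E_data + lam * E_reg: squared error between the predicted
    and observed corrupted images, plus a quadratic regularizer on B."""
    residual = blur(X, B) - Y
    return np.sum(residual ** 2) + lam * np.sum(B ** 2)

def update_operator(B, X, Y, lam=0.1, step=5e-4):
    """One gradient-descent step on E(B): update the estimated corruption
    operator from the difference between predicted and observed images."""
    residual = blur(X, B) - Y
    # Gradient of the data term w.r.t. B is the cross-correlation of the
    # residual with X, computed in the Fourier domain and cropped to the
    # kernel support; the regularizer contributes lam * B.
    grad_full = np.real(np.fft.ifft2(np.conj(np.fft.fft2(X)) * np.fft.fft2(residual)))
    grad = grad_full[:B.shape[0], :B.shape[1]] + lam * B
    return B - step * grad
```

With a small enough step size, repeated calls to `update_operator` reduce the combined objective, mirroring the iterative update of the operator described in the analysis above.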
Shibata teaches a regularization (i.e., uncertainty) of the input image, but does not explicitly disclose uncertainty metrics for the estimated corruption operator f that quantify an uncertainty in the estimate of the estimated corruption operator f.
However, DeWeert, in a similar field of endeavor of image deblurring, teaches uncertainty metrics for the estimated corruption operator f that quantify an uncertainty in the estimate of the estimated corruption operator f (DeWeert, ¶[0102]; noise-regularized PSF estimate; see Fig. 6C).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the regularized PSF estimate taught by DeWeert in the invention of Shibata. The motivation to do so would be to improve the estimation of PSFs for image deblurring in a noise-robust manner.
Regarding claim 8, the combination of Shibata and DeWeert teaches the method of Claim 1. Shibata further teaches wherein the accessed estimated corruption operator f for the camera comprises an initial estimate of the corruption operator f generated by: generating, by the camera, a corrupted image of a known input (Shibata, ¶[0036]; the image receiving unit 10 receives an input image from a camera; ¶[0045]; the training information receiving unit 210, when receiving the input image, also receives area information, attribute information, and image quality information (information for setting a strength of regularization; ¶[0048]) along with the input image (i.e., this information is interpreted as information of a known input));
accessing one or more initial uncertainty metrics for the corruption operator f (Shibata, ¶[0064]; the training information acquiring unit 212 acquires image quality information (for example, regularization strengths of the pixels of the reconstructed image) relating to the specified area; ¶[0065]; the training information receiving unit 210 uses regularization strengths (i.e., uncertainty) of the input image as the image quality information); and
determining, based on the one or more initial uncertainty metrics and on a difference between an estimated corrupted image of the known input and the generated corrupted image of the known input, the initial estimate of the corruption operator f (Shibata, see Eq. 7 and Eq. 9, shown above, and ¶[0106-0107]; the optimization function E([X]) is the sum of the determined regularization term E_reg([X]) and an error term E_data([X]); and see Fig. 9, image reconstruction unit 250 updating the training information receiving unit 210. The process is iterative, implying that an initial estimate of f will be determined in the first iteration).
Regarding claim 9, the combination of Shibata and DeWeert teaches the method of Claim 8. Shibata further teaches wherein the accessed one or more uncertainty metrics for f comprise one or more initial uncertainty metrics for the corruption operator f as further updated based on the difference between the estimated corrupted image of the known input and the generated corrupted image of the known input (Shibata, ¶[0093]; the regularization strength estimating unit 240 estimates the regularization strengths λ (i.e., uncertainty) of the pixels of the input image based on the attribute reliability (i.e., likelihood); the attribute reliability calculating unit 230 may estimate the regularization strengths λ for pixels of a partial area of the input image. The process is iterative, implying that an initial estimate of f will be determined in the first iteration).
Regarding claim 11, the combination of Shibata and DeWeert teaches the method of Claim 1. Shibata further teaches wherein the corruption operator f comprises at least one of: a pseudo-differential operator; a non-stationary point-spread function; or a spatially invariant point-spread function (Shibata, see Eq. 10, shown below, and ¶[0115]; matrix representing a differential filter for an image).
[Shibata, Eq. 10 (media_image4.png, greyscale)]
Regarding claim 12, the combination of Shibata and DeWeert teaches the method of claim 11. Shibata further teaches wherein: the corruption operator f comprises the spatially invariant point-spread function (Shibata, ¶[0109]; blurring function includes a point spread function (PSF)); and the estimated true image of the scene is determined by Fourier-domain deconvolution of the corrupted image by the accessed corruption operator f (Shibata, ¶[0072-0073]; multiply-adding a filter; ¶[0104]; image reconstructing unit 250 may generate the reconstructed image by using image enhancement that enhances a specific frequency component in an area where the above-described regularization strength is strong).
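As context for the Fourier-domain deconvolution recited in claim 12: a spatially invariant point-spread function acts as a convolution, so the estimated true image can be recovered by an inverse filter in the frequency domain. The sketch below is illustrative only and is not drawn from Shibata or DeWeert; the constant `eps` is an assumed Tikhonov-style stabilizer that prevents division by near-zero frequency components.

```python
import numpy as np

def blur(X, psf):
    """Corrupt X with a spatially invariant PSF (circular convolution)."""
    return np.real(np.fft.ifft2(np.fft.fft2(X) * np.fft.fft2(psf, s=X.shape)))

def fourier_deconvolve(Y, psf, eps=1e-4):
    """Regularized Fourier-domain deconvolution:
    X_hat = F^-1[ conj(H) * F(Y) / (|H|^2 + eps) ],
    where H is the transfer function of the PSF."""
    H = np.fft.fft2(psf, s=Y.shape)
    X_hat = np.fft.ifft2(np.conj(H) * np.fft.fft2(Y) / (np.abs(H) ** 2 + eps))
    return np.real(X_hat)
```

For a well-conditioned PSF (transfer function bounded away from zero) and small `eps`, the round trip blur-then-deconvolve recovers the original image to high accuracy.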
Claim 15 is analyzed analogously to claim 1. Shibata further teaches One or more non-transitory computer readable storage media storing instructions and coupled to one or more processors that are operable to execute the instructions (Shibata, ¶[0155]; the ROM 620 and the internal storage device 640 are non-transitory recording media).
Regarding claim 16, the combination of Shibata and DeWeert teaches the method of Claim 15. Shibata further teaches further comprising instructions coupled to one or more processors that are operable to execute the instructions to access one or more image priors for the corrupted image of the scene captured by the camera, wherein accessing an estimated true image of the scene comprises generating, based on the one or more image priors, the estimated true image (Shibata, ¶[0152]; The ROM 620 stores the program to be executed by the CPU 610 and static data. The ROM 620 is, for example, P-ROM (Programmable-ROM) or flash ROM).
Claim 17 is analyzed analogously to Claim 8.
Claim 18 is analyzed analogously to Claim 9.
Claims 2-7 are rejected under 35 U.S.C. 103 as being unpatentable over Shibata in view of DeWeert, and further in view of Hu et al., US 11107205 B2, hereinafter Hu.
Regarding claim 2, the combination of Shibata and DeWeert teaches the method of Claim 1. Shibata further teaches further comprising accessing one or more image priors for the corrupted image of the scene captured by the camera (Shibata, ¶[0167]; The learning image receiving unit 261 receives at least one image (learning image (i.e., estimated true image)) that is different from the input image),
wherein accessing an estimated true image of the scene comprises generating, {based on the one or more image priors}, the estimated true image (Shibata, ¶[0187]; the learning image receiving unit 261 receives images, such as surveillance camera images, medical images, or satellite images, as the learning images (i.e., estimated true images)). The combination does not explicitly disclose wherein accessing an estimated true image of the scene comprises generating, based on the one or more image priors, the estimated true image.
However, Hu teaches wherein accessing an estimated true image of the scene comprises generating, based on the one or more image priors, the estimated true image (Hu, Fig. 12 and [Col 23:27-28]; multiple images of a scene are captured using an electronic device at step 1202; [Col 24:17-18]; blending (i.e., generating) the synthesized images (i.e., based on image priors) is performed at step 1210; [Col 24:35-37]; the output of the post-processing is a final image of the scene (i.e., estimated true image)).
Shibata and Hu are analogous art because they are from the same field of endeavor of multi-exposure fusion and deblurring of multiple image frames. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include image priors as taught by Hu in the combined invention of Shibata and DeWeert. The motivation to do so would be to generate a final image of the scene by blending at least some of the image frames using at least some of the blending maps, so as to include image details that are lost in at least one of the image frames due to over-exposure or under-exposure.
Regarding claim 3, the combination of Shibata, DeWeert, and Hu teaches the method of Claim 2. Hu further teaches further comprising updating (Hu, see Fig. 8, shown below, which exhibits “Another Iteration” 814 (i.e., update), and [Col 24:49-55]; various steps in FIG. 12 could overlap, occur in parallel, occur in a different order, or occur any number of times (i.e., update)),
based on one or more image priors for the corrupted image of the scene (Hu, Fig. 8 exhibits training patches at 802, where the training patches include a set of images [Col 17:38] (i.e., image priors)), and
[Hu, Fig. 8 (media_image5.png, greyscale)]
Shibata further teaches on the determined difference between the predicted corrupted image and the corrupted image captured by the camera, the estimated true image of the scene (Shibata, Eq. 9, for the difference between the predicted corrupted image and the corrupted image captured by the camera).
Regarding claim 4, the combination of Shibata, DeWeert, and Hu teaches the method of Claim 3. Hu further teaches further comprising iteratively performing the method until at least one of: the difference between the predicted corrupted image and the corrupted image captured by the camera is less than a difference threshold; a change between two iterations in the difference between the predicted corrupted image and the corrupted image captured by the camera is less than a convergence threshold; or an iterative threshold is reached (Hu, [Col 30:55-61]; The threshold/transfer function 2010 uses the noise level estimate to identify when differences detected in the image frames are actually representative of motion in the image frames).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the thresholding function as taught by Hu in the combined invention of Shibata and DeWeert. The motivation to do so would be to identify locations where the two image frames differ.
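The three stopping conditions recited in claim 4 (difference below a threshold, change between two iterations below a convergence threshold, or an iteration limit) amount to a standard convergence test wrapped around an iterative update. The generic sketch below is illustrative only; the function and parameter names, and the default thresholds, are assumptions rather than anything recited in the claims or references.

```python
def iterate_until_converged(step_fn, diff_fn, state, diff_threshold=1e-4,
                            convergence_threshold=1e-8, max_iterations=100):
    """Repeat step_fn until (a) the difference measure falls below
    diff_threshold, (b) the change in the difference between two
    iterations falls below convergence_threshold, or (c) the
    max_iterations limit is reached."""
    prev_diff = diff_fn(state)
    for _ in range(max_iterations):
        state = step_fn(state)
        diff = diff_fn(state)
        if diff < diff_threshold or abs(prev_diff - diff) < convergence_threshold:
            break
        prev_diff = diff
    return state
```

Any of the three conditions firing ends the loop, so the iteration terminates even when the difference never reaches the absolute threshold.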
Regarding claim 5, the combination of Shibata, DeWeert, and Hu teaches the method of Claim 4. Shibata further teaches further comprising storing, after a final iteration, the updated estimated corruption operator f for use in correcting a subsequent corrupted image captured by the camera (Shibata, see Fig. 1 and ¶[0044]; the image processing unit 20 may include a storage unit which is not illustrated, and each component may store each information).
Regarding claim 6, the combination of Shibata, DeWeert, and Hu teaches the method of Claim 4. Shibata further teaches further comprising: updating at least one of the one or more uncertainty metrics for f; and storing, after a final iteration, the updated at least one of the one or more uncertainty metrics for f for use in correcting a subsequent corrupted image captured by the camera (Shibata, see Fig. 1 and ¶[0044]; the image processing unit 20 may include a storage unit which is not illustrated, and each component may store each information).
Regarding claim 7, the combination of Shibata, DeWeert, and Hu teaches the method of Claim 2. Hu further teaches wherein accessing the one or more image priors for the corrupted image of the scene comprises: determining, based on the accessed corrupted image of the scene captured by the camera, one or more characteristics of the scene; and selecting, based on the determined one or more characteristics of the scene, at least one of the one or more image priors (Hu, [Col 14:5-9]; initial layers in the encoder network 306 are responsible for extracting scene contents and spatially down-sizing feature maps associated with the scene contents; [Col 15:32-39]; the dataset could include hundreds or thousands of image sets. Each image set would typically include multiple images of the same scene captured using different camera exposures, and different image sets would be associated with different scenes).
Claims 10, 13, 18, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shibata in view of DeWeert, and further in view of Gupta et al., US 10148893 B2, hereinafter Gupta.
Regarding claim 10, the combination of Shibata and DeWeert teaches the method of Claim 1. Shibata further teaches further comprising {accessing one or more image-capture parameters θ}; and updating, based on (1) the determined difference between the predicted corrupted image, (2) the corrupted image captured by the camera, and {(3) at least one of the one or more image-capture parameters θ}, the estimated corruption operator f, as similarly analyzed in claim 1. Shibata teaches that the attribute may be an optical property of an object included in an image, such as brightness or color (¶[0047]), where the optical properties may be interpreted as depending upon image-capture parameters. The combination does not explicitly teach accessing one or more image-capture parameters θ and updating based on (3) at least one of the one or more image-capture parameters θ.
However, Gupta teaches accessing one or more image-capture parameters θ and updating based on (3) at least one of the one or more image-capture parameters θ (Gupta, [Col 17:40-54]; process 500 can use any suitable properties to select an exposure scheme, such as brightness of the scene (e.g., irradiance of light from the scene), scene/camera motion, and image sensor parameters (e.g., read-noise level, bit-depth, full-well capacity, read-out speed of the camera, whether the camera reads out image data destructively or non-destructively, and/or any other suitable image sensor parameters). For example, if the scene is relatively evenly illuminated (e.g., has a relatively low dynamic range on the order of <10.sup.3), process 500 can select to capture video using a low dynamic range scheme (e.g., single exposures)).
Shibata and Gupta are analogous art because they are from the same field of endeavor of high-dynamic-range imaging. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include capturing image parameters as taught by Gupta in the combined invention of Shibata and DeWeert. The motivation to do so would be to select an exposure scheme based on one or more properties of the image sensor.
Regarding claim 13, the combination of Shibata and DeWeert teaches the method of Claim 1. Shibata further teaches {wherein: the corrupted image of the scene comprises one of a plurality of images of the scene, each of the plurality of images associated with a different exposure time}; and the corruption operator f comprises a high-dynamic-range corruption operator (Shibata, ¶[0104]; reconstruct/generate an image by using image enhancement processing/high dynamic range imaging). The combination does not explicitly disclose wherein: the corrupted image of the scene comprises one of a plurality of images of the scene, each of the plurality of images associated with a different exposure time.
However, Gupta teaches wherein: the corrupted image of the scene comprises one of a plurality of images of the scene, each of the plurality of images associated with a different exposure time (Gupta, [Col 5:2-3]; a series of frames of image data to be captured with different exposure times).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include multiple images associated with different exposure times as taught by Gupta in the combined invention of Shibata and DeWeert. The motivation to do so would be to mitigate undesirable artifacts due to registration errors (e.g., due to motion) and to avoid generating an image that does not have a significantly higher dynamic range than if a single low dynamic range image were captured.
Regarding claim 18, the combination of Shibata and DeWeert teaches the method of Claim 17. The combination does not explicitly disclose wherein the corrupted image of the known input comprises a plurality of corrupted images of the known input.
However, Gupta teaches wherein the corrupted image of the known input comprises a plurality of corrupted images of the known input (Gupta [Col 6:9-16]; capture multiple low dynamic range images (e.g., multiple images captured from multiple different exposures of the image sensor) such as would be captured, for example, in a burst mode of some digital cameras).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include multiple images of the known input as taught by Gupta in the combined invention of Shibata and DeWeert. The motivation to do so would be to automatically determine settings.
Claim 19 is analyzed analogously to claim 10.
Regarding claim 20, the combination of Shibata, DeWeert, and Gupta teaches the method of Claim 19. Gupta further teaches further comprising updating the at least one of the one or more initial uncertainty metrics based on the one or more image-capture parameters θ (Gupta, [Col 18:34-46]; the growth rate can be based on a parameter s that can be based on one or more parameters of the image sensor, and can be used to vary the growth rate with respect to both Î_k−1 and M̂_k−1. For example, for a high quality sensor (e.g., a sensor having low read-noise, large bit depth and/or large full-well capacity), process 500 can be implemented with a relatively large value s, while for a low quality sensor (e.g., a sensor having high read-noise, shallow bit depth and/or low full-well capacity) process 500 can be implemented with a relatively small value s. In some embodiments, both Î_k−1 and M̂_k−1 can be normalized such that they each lie in the range of [0,1]).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Shibata in view of DeWeert, and further in view of Zhou, US 20210152735 A1, hereinafter Zhou.
Regarding claim 14, the combination of Shibata and DeWeert teaches the method of Claim 1. The combination does not explicitly disclose wherein: the camera comprises a camera disposed behind a display structure of a device incorporating the camera; and the corruption operator f is based on an obstruction created by the display structure.
However, Zhou teaches wherein: the camera comprises a camera disposed behind a display structure of a device incorporating the camera (Zhou, ¶[0035]; capturing images through a display may be challenging, as positioning the display in front of a camera aperture may result in lower light transmission, lens occlusion, and diffraction, all of which may degrade image quality); and
the corruption operator f is based on an obstruction created by the display structure (Zhou, ¶[0065]; the PSF for an optical system comprising the example tOLED display positioned between the camera and an image scene is approximately 100 pixels).
Shibata and Zhou are analogous art because they are from the same field of endeavor of restoration of degraded images acquired via a behind-display camera. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a corruption operator based on an obstruction created by a camera disposed behind a display as taught by Zhou to the combined invention of Shibata and DeWeert. The motivation to do so would be to generate frequency information that is missing from images acquired via the behind-display camera.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANDHANA PEDAPATI whose telephone number is (571)272-5325. The examiner can normally be reached M-F 8:30am-6pm (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHANDHANA PEDAPATI/Examiner, Art Unit 2669 /CHAN S PARK/Supervisory Patent Examiner, Art Unit 2669