Prosecution Insights
Last updated: April 19, 2026
Application No. 18/579,149

BLIND IMAGE DENOISING METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Status: Non-Final OA (§103, §112)
Filed: Jan 12, 2024
Examiner: FITZPATRICK, ATIBA O
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Zhejiang Uniview Technologies Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 88% (above average; 775 granted / 881 resolved; +26.0% vs Tech Center average)
Interview Lift: +4.9% (minimal, about +5%), based on resolved cases with interview
Typical Timeline: 2y 8m average prosecution; 27 applications currently pending
Career History: 908 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 34.9% (-5.1% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 881 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2, 3, 11, 12, 17, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 2, 11, and 17 recite the limitation "each candidate exposure gain value" in line 3. There is insufficient antecedent basis for this limitation in the claim. One of ordinary skill in the art cannot know which "each candidate exposure gain value" is/are being referred to. Dependent claims 3, 12, and 18 do not remedy this deficiency.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over the machine translation of CN 112513936 A (Zhang), which was submitted in the IDS dated 01/12/2024, in view of US 20240386527 A1 (Zhang527).

As per claim 1, Zhang teaches determining a target noise parameter of a to-be-denoised image according to an image noise calibration result obtained by pre-performing an image noise calibration on an image acquisition device of the to-be-denoised image (Zhang: Fig. 2 (machine translation): S204: "calibration data; wherein the calibration data is obtained based on the target noise in a reference image"; Abstract: "the calibration data is obtained based on the target noise in the reference image, for determining the target noise of the frequency band and the target noise of the grey value… The calibration data is used as a reference, which can effectively identify noise and real object from the image to be de-noised, and can accurately estimate the grey value of the noise, and obtain better de-noising effect"; Para 18; Para 33: "the calibration data is obtained based on the target noise in the reference image, for determining the target noise of the frequency band and the target noise of the grey value");

performing a preliminary filtering process on the to-be-denoised image to obtain a preliminary filtered image of the to-be-denoised image (Zhang: Para 44: "in order to determine whether each pixel point in the to-be-de-noised image comprises a target noise, firstly performing Fourier transform to the denoised image to obtain the spectral image of the to-be-de-noised image, or also can through the pre-designed high-pass filter to filter processing the denoised image"; Para 45: "the to-be-de-noised image to Fourier change, or using a high-pass filter for filtering processing, can according to the reference frequency band in the calibration data from the to-be-de-noised image to determine the second pixel point, wherein the second pixel point is the pixel point in the reference frequency band");

determining a noise level estimation result of the to-be-denoised image according to the target noise parameter and the preliminary filtered image (Zhang: Fig. 2 (machine translation): S204: "The grayscale value of the target noise contained in each pixel of the image to be denoised is determined according to predetermined calibration data"; Abstract: "determining the grey value of the target noise included in each pixel point of the image to be de-noised according to the predetermined calibration data"; Para 43: "grey value of the target noise according to the predetermined calibration data, and de-noising the image to be de-noised according to the grey value of each pixel point comprising noise"; Para 44: "in order to determine whether each pixel point in the to-be-de-noised image comprises a target noise, firstly performing Fourier transform to the denoised image to obtain the spectral image of the to-be-de-noised image, or also can through the pre-designed high-pass filter to filter processing the denoised image; and then determining whether each pixel point comprises the target noise according to the reference frequency band in the calibration data. for example, the image data of each row or each column of the to-be-de-noised image is subjected to Fourier change, obtaining the spectrum graph of each row or each column of image data, then according to the spectrum graph of each row or each column and the reference frequency band in the calibration data, determining whether the row or the column comprises the target noise": the "grayscale value of the target noise contained in each pixel of the image to be denoised" is the noise level estimation result, which is also the noise level map); and

performing a final denoising process on the to-be-denoised image according to the noise level estimation result to obtain a final (Zhang: Fig. 2 (machine translation): S206: "Based on the grayscale value of the target noise contained in each pixel, the… Denoising the image to be denoised"; Para 43: "grey value of the target noise according to the predetermined calibration data, and de-noising the image to be de-noised according to the grey value of each pixel point comprising noise"; Abstract: "denoising the image to be de-noised according to the grey value of the target noise included in each pixel point. The calibration data is used as a reference, which can effectively identify noise and real object from the image to be de-noised, and can accurately estimate the grey value of the noise, and obtain better de-noising effect"; Para 33: "S204, determining the grey value of the target noise included in each pixel point of the image to be de-noised according to the predetermined calibration data"; Para 34: "S206, according to the grey value of the target noise included in each pixel point to the to-be-de-noised image de-noising processing."; [Zhang Fig. 2 (machine translation) is reproduced as two images in the original Office Action]).

Zhang is silent regarding blind image denoising; or blind denoising (emphasis added).
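The claim-1 mapping above walks through a four-step flow: calibrate the camera's noise, preliminarily filter the noisy image, build a per-pixel noise level estimate from the calibrated parameter and the prefiltered image, then run a final denoising pass. As a rough numerical sketch only (not the applicant's or Zhang's actual algorithm), assuming a signal-dependent noise model where variance scales with luminance, the flow might look like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_pipeline(image, noise_param, sigma_pre=1.0):
    """Hypothetical sketch of the claimed four-step flow:
    1) noise_param stands in for the calibrated target noise parameter,
    2) preliminary filtering smooths the noisy input,
    3) a per-pixel noise level is estimated as noise_param * prefiltered
       luminance (signal-dependent model, V(x) ~ a * x),
    4) a simple variance-weighted blend stands in for the final denoiser.
    """
    prelim = gaussian_filter(image, sigma=sigma_pre)        # step 2
    noise_level = noise_param * np.clip(prelim, 0, None)    # step 3
    # Step 4: pixels with higher estimated noise lean more on the
    # smoothed image; cleaner pixels keep the original values.
    weight = noise_level / (noise_level + 1e-6 + np.var(image))
    return (1 - weight) * image + weight * prelim

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
noisy = clean + rng.normal(0, 5, clean.shape)
out = denoise_pipeline(noisy, noise_param=0.01)
```

The blend in step 4 is only a placeholder for whichever final denoiser the references actually apply; the point of the sketch is the data flow from calibration parameter to noise level map to denoised output.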
Zhang527 teaches a blind image denoising method, comprising: performing a final denoising process on the to-be-denoised image according to the noise level estimation result to obtain a final blind denoising result of the to-be-denoised image (Zhang527: Abstract, Paras 1, 6, 17, 26-28, 34, 35, 38, 44, 54, 56, 59, 60-62, 66, 68, 69: "blind denoising"; Para 14: "The estimated noise distribution map may be used as marker information to distinguish the noise level of the three-dimensional input image, so that the trained model has better denoising effect on all three-dimensional input images with different noise levels"; Para 29: "The estimated noise distribution map may be used as marker information to distinguish the noise level of the three-dimensional input image, so that the trained model has better denoising effect on all three-dimensional input images with different noise levels"; Para 52: "the first convolution layer in the encoding structure is used to extract 15 feature maps of the three-dimensional input image, and the concatenation layer is used to concatenate the 15 feature maps and the noise distribution map to obtain 16 feature maps of the three-dimensional input image"; Fig. 1: "input noise map", "noise distribution map"… "output denoised image"; [Zhang527 Fig. 1 is reproduced in the original Office Action]).
Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Zhang527 into Zhang, since both Zhang and Zhang527 suggest a practical solution in the same field of endeavor (determining a noise map and denoising an input image using that noise map), and Zhang527 additionally provides teachings that can be incorporated into Zhang, namely that the denoising is blind denoising, so as to "fully retain the detailed information of an image while removing a speckle noise in the three-dimensional ultrasound image in real time" (Zhang527: Para 27), to "accurately complete denoising" (Zhang527: Para 56), and "to achieve blind denoising of three-dimensional ultrasound images of different parts and different noise levels" (Zhang527: Para 59). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim 9, arguments made in rejecting claim 1 are analogous. Zhang also teaches an electronic device, comprising: at least one processor; and a storage apparatus configured to store at least one program (Zhang: Figs. 3-4; Paras 13, 17, 18, 63, 68, 71, 77, 80, 90, 94, 98, 108, 11, 112).

As per claim 10, arguments made in rejecting claim 1 are analogous. Zhang teaches a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor (Zhang: Figs. 3-4; Paras 13, 17, 18, 63, 68, 71, 77, 80, 90, 94, 98, 108, 11, 112).

Claims 2, 4, 11, 13, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Zhang527 as applied to claims 1, 9, and 10 above, and further in view of the machine translation of CN 112291447 A (Wang).
As per claim 2, Zhang in view of Zhang527 teaches the method according to claim 1. Zhang in view of Zhang527 does not teach: pre-performing the image noise calibration on the image acquisition device of the to-be-denoised image comprises: for each candidate exposure gain value, acquiring data of at least two to-be-calibrated images from a same shooting scene through the image acquisition device, wherein at least one candidate exposure gain value is provided; and for each candidate exposure gain value, determining a candidate noise parameter of a candidate exposure gain value according to the data of the at least two to-be-calibrated images associated with the candidate exposure gain value, and determining the image noise calibration result of the image acquisition device according to each candidate exposure gain value and the candidate noise parameter associated with each candidate exposure gain value.

Wang teaches these limitations (Wang: Abstract: "The invention claims an endoscope camera system self-adaptive time domain noise reduction method based on noise calibration, the method comprises a noise calibration module and a self-adaptive time domain noise reduction module. noise calibration module automatically adjusting the gain and exposure of the camera system and continuously shooting multiple test target images; calculating the brightness average value and noise reduction threshold of each gray scale under different gain different exposure, and storing in the parameter file, wherein the noise reduction threshold is used for judging whether the pixel point belongs to static when time domain noise reduction. self-adaptive time domain noise reduction module reads the stored parameter file; according to the current gain and exposure value of the camera system, combining local brightness of each pixel point of the video frame; automatically selecting the most suitable noise reduction threshold for each pixel point to judge whether it belongs to static and performing time domain noise reduction.";

Paras 11-16: "The purpose of the present invention is achieved by the following technical schemes. An adaptive time domain noise reduction method of endoscope camera system based on noise calibration, comprising a noise calibration module and a self-adaptive time domain noise reduction module; the noise calibration module obtains the brightness average value and corresponding noise reduction threshold of each gray scale of the standard test board under different gain different exposure by the way of noise calibration; then storing the calculated brightness average value and noise reduction threshold value in the parameter file; the adaptive time domain noise reduction module uses the parameter file stored by the noise calibration module; performing adaptive time domain noise reduction to the endoscope camera system; when the endoscope camera system is in different gain and exposure, selecting the gain and exposure corresponding to the brightness average value and noise threshold value in the parameter file corresponding to noise reduction; aiming at all pixels in the current video frame to be noise-reduced, using the selected brightness average value and noise reduction threshold to judge whether the pixel point belongs to static; then respectively calculating the pixel output value of the static and motion pixel points after noise reduction. Further, the noise calibration module comprises: reading the gain and exposure parameter range of the endoscope camera system; traversing all the gain and exposure parameter setting combination; performing calibration calculation to each combination; obtaining the brightness average value of each gray scale of the standard test board and the corresponding noise reduction threshold value; for one of the combination, namely the gain and exposure parameter of the endoscope camera system is fixed, shooting system continuously shooting standard test board to obtain N images, wherein the standard test board comprises C grey scale;";

Paras 26, 36, 43, 47, 60, 75; Para 36: "The beneficial effects of the present invention are as follows: The invention comprises a noise calibration module and a self-adaptive time domain noise reduction module; the noise calibration module calculates the brightness average value and noise reduction threshold of each gray scale under different gain different exposure; the adaptive time domain noise reduction module combines the local brightness of the pixel point according to the current gain and exposure value; The most appropriate noise reduction threshold is automatically selected for each pixel point to determine whether or not to be stationary and to perform time-domain noise reduction"; Para 60: "self-adaptive time domain noise reduction module applying noise calibration to obtain the parameter file to the endoscope camera system for adaptive time domain noise reduction; when the endoscope camera system is in different gain and exposure, automatically selecting different parameters for noise reduction; aiming at all pixels in the current video frame to be noise-reduced, using the selected specific parameter to judge whether it belongs to static; then respectively calculating the pixel output value of the noise reduction in different ways for the static and moving pixel points":

that is, determine a candidate noise parameter from reference calibration images for each exposure gain (e.g., ISO) and determine the noise calibration for the camera based on each exposure gain and its associated candidate noise parameter).

Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Wang into Zhang in view of Zhang527, since both Zhang in view of Zhang527 and Wang suggest a practical solution in the same field of endeavor (image noise reduction involving camera calibration), and Wang additionally provides teachings that can be incorporated into Zhang in view of Zhang527, namely that camera calibration involves the relationship between gain and noise, so as to "effectively reduce the time domain noise of the static area, without manually setting the noise reduction threshold value, at the same time, considering the gain and exposure of the camera system, strong adaptability, good real time performance" (Wang: Abstract). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim 4, Zhang in view of Zhang527 teaches the method according to claim 1.
Zhang in view of Zhang527 does not teach: the image noise calibration result comprises at least one candidate exposure gain value and a candidate noise parameter associated with each of the at least one candidate exposure gain value; and determining the target noise parameter of the to-be-denoised image according to the image noise calibration result obtained by pre-performing the image noise calibration on the image acquisition device of the to-be-denoised image comprises: determining a target exposure gain value of the image acquisition device when the image acquisition device acquires the to-be-denoised image; and determining, based on a relationship between the target exposure gain value and the at least one candidate exposure gain value, the target noise parameter associated with the target exposure gain value according to the candidate noise parameter associated with each of the at least one candidate exposure gain value.

Wang teaches these limitations (Wang: see the arguments and citations offered in rejecting claim 2 above: determine the target noise parameter (e.g., imager/imaging-dependent noise) based on the relationship between the target exposure gain (e.g., camera ISO) and the candidate exposure gains (e.g., reference gains) according to the candidate noise parameters). See the rationale for combining provided in rejecting claim 2 above.

As per claims 11 and 13, arguments made in rejecting claims 2 and 4 are analogous, respectively. Zhang also teaches an electronic device, comprising: at least one processor; and a storage apparatus configured to store at least one program (Zhang: Figs. 3-4; Paras 13, 17, 18, 63, 68, 71, 77, 80, 90, 94, 98, 108, 11, 112).

As per claims 17 and 19, arguments made in rejecting claims 2 and 4 are analogous, respectively. Zhang teaches a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor (Zhang: Figs. 3-4; Paras 13, 17, 18, 63, 68, 71, 77, 80, 90, 94, 98, 108, 11, 112).
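The calibration scheme discussed in the claim 2 and claim 4 rejections amounts to: for each candidate exposure gain, shoot several frames of the same scene, derive a noise parameter from per-pixel statistics across those frames, then map the gain actually used at capture time back onto the calibrated parameters. A minimal sketch under an assumed signal-dependent model (variance proportional to mean luminance, slope a as the candidate noise parameter; the fit and interpolation are illustrative, not any reference's actual method):

```python
import numpy as np

def calibrate_noise(frames_by_gain):
    """For each candidate exposure gain, compute per-pixel mean and
    variance across repeated frames of a static scene, then fit the
    assumed model variance ~ a * mean; the slope a is the candidate
    noise parameter for that gain."""
    result = {}
    for gain, frames in frames_by_gain.items():
        stack = np.stack(frames).astype(float)      # (N, H, W)
        mean = stack.mean(axis=0).ravel()
        var = stack.var(axis=0, ddof=1).ravel()
        # least-squares slope through the origin: a = sum(m*v)/sum(m*m)
        result[gain] = float(mean @ var) / float(mean @ mean)
    return result

def target_noise_param(calibration, target_gain):
    """Interpolate the calibrated parameters to the gain in effect
    when the to-be-denoised image was acquired."""
    gains = sorted(calibration)
    return float(np.interp(target_gain, gains, [calibration[g] for g in gains]))

# Synthetic calibration shoot: noise variance grows with gain and luminance.
rng = np.random.default_rng(1)
scene = rng.uniform(50, 200, (32, 32))
frames_by_gain = {
    g: [scene + rng.normal(0, np.sqrt(0.02 * g * scene)) for _ in range(8)]
    for g in (1.0, 2.0, 4.0)
}
cal = calibrate_noise(frames_by_gain)
a_target = target_noise_param(cal, 3.0)   # gain used at capture time
```

Interpolation between candidate gains is one plausible reading of "based on a relationship between the target exposure gain value and the at least one candidate exposure gain value"; a lookup of the nearest calibrated gain would fit the claim language equally well.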
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Zhang527 as applied to claims 1 and 9 above, and further in view of the machine translation of CN 111489303 A (Liu).

As per claim 7, Zhang in view of Zhang527 teaches the method according to claim 1. Zhang in view of Zhang527 does not teach that the preliminary filtering process uses Gaussian filtering, mean filtering, median filtering, bilateral filtering or guided filtering.

Liu teaches these limitations (Liu: Abstract: "then using the guide filtering to obtain the optimized brightness component; further using the Gamma transform to correct the contrast of the optimized brightness component; separating the reflection component of the original image according to the Retinex theory, using the convolution blind de-noising network to eliminate the noise in the reflection component"; Paras 10, 16, 46, 53: "guide filtering").

Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Liu into Zhang in view of Zhang527, since both Zhang in view of Zhang527 and Liu suggest a practical solution in the same field of endeavor (blind denoising using a noise map estimated from prefiltered image data), and Liu additionally provides teachings that can be incorporated into Zhang in view of Zhang527, namely that the prefiltering is guided filtering, so as "to obtain the optimized brightness component" (Liu: Abstract). Furthermore, one of ordinary skill in the art could have combined the elements as claimed by known methods and, in combination, each component functions the same as it does separately. One of ordinary skill in the art would have recognized that the results of the combination would be predictable.

As per claim 16, arguments made in rejecting claim 7 are analogous.
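Claim 7 recites the preliminary filter as a Markush-style list of conventional smoothers. Three of the recited alternatives are available off the shelf in scipy (bilateral and guided filtering would need OpenCV or scikit-image, omitted here); this is purely illustrative of the claimed options, not of any reference's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

# A flat synthetic patch with additive noise, so any of the recited
# preliminary filters should visibly reduce the pixel-value spread.
rng = np.random.default_rng(2)
noisy = np.full((64, 64), 100.0) + rng.normal(0, 10, (64, 64))

prelim_gaussian = gaussian_filter(noisy, sigma=1.5)  # Gaussian filtering
prelim_mean = uniform_filter(noisy, size=5)          # mean filtering
prelim_median = median_filter(noisy, size=5)         # median filtering
```

Any one of these would satisfy the "preliminary filtering process" limitation as the examiner reads it; the choice trades edge preservation (median) against smoothness (Gaussian, mean).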
Zhang also teaches an electronic device, comprising: at least one processor; and a storage apparatus configured to store at least one program (Zhang: Figs. 3-4; Paras 13, 17, 18, 63, 68, 71, 77, 80, 90, 94, 98, 108, 11, 112).

Allowable Subject Matter

Claims 3, 12, and 18 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Limitations pertaining to determining a candidate noise parameter based on pairs of pixels from an average calibrated image and a variance calibrated image that were statistically determined based on plural images associated with each candidate exposure gain, in conjunction with other limitations present in the listed claim and intervening and independent claim(s), distinguish over the prior art. Also, the claimed equation (reproduced in the original Office Action) determines noise variance (i.e., V(x)) characterizing a noise level of the target pixel point based on a multiplication of the candidate noise parameter (i.e., a.sub.i) and the target pixel luminance value (i.e., x).

Claims 5-6, 14-15, and 20-21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Limitations pertaining to determining a noise level map, using the equation reproduced in the original Office Action, that is a multiplication of the target noise parameter (imager/imaging-dependent noise; a.sub.x,i) and the prefiltered image at corresponding pixel locations, wherein the noise level map is the same size as the input image and the preliminary filtered image, in conjunction with other limitations present in the listed claim and intervening and independent claim(s), distinguish over the prior art.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Atiba Fitzpatrick, whose telephone number is (571) 270-5255. The examiner can normally be reached M-F, 10:00 am - 6:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for Atiba Fitzpatrick is (571) 270-6255.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Atiba Fitzpatrick
/ATIBA O FITZPATRICK/
Primary Examiner, Art Unit 2677
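The noise model the examiner singles out as allowable is a per-pixel map: noise variance V(x) = a * x, with the target noise parameter multiplied into the preliminary-filtered luminance at each pixel, yielding a map the same size as the input. A toy sketch of that relationship (the parameter value and filter choice are placeholders, not the claimed values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

a = 0.05                                   # illustrative noise parameter
rng = np.random.default_rng(3)
image = rng.uniform(10, 250, (48, 48))     # stand-in for the noisy input
prelim = gaussian_filter(image, sigma=1.0) # preliminary filtered image

# Per-pixel noise level map: V(x) = a * x, same size as the input.
noise_level_map = a * prelim
```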

Prosecution Timeline

Jan 12, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602854: SYSTEM AND METHOD FOR MEDICAL IMAGING (2y 5m to grant; granted Apr 14, 2026)
Patent 12586195: OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC APPARATUS, OPHTHALMIC INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12579649: RADIATION IMAGE PROCESSING APPARATUS AND OPERATION METHOD THEREOF (2y 5m to grant; granted Mar 17, 2026)
Patent 12555237: CLOSEUP IMAGE LINKING (2y 5m to grant; granted Feb 17, 2026)
Patent 12548221: SYSTEMS AND METHODS FOR AUTOMATIC QUALITY CONTROL OF IMAGE RECONSTRUCTION (2y 5m to grant; granted Feb 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 93% (+4.9%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 881 resolved cases by this examiner. Grant probability derived from career allow rate.
