Prosecution Insights
Last updated: April 19, 2026
Application No. 18/655,908

ACCUMULATED NOISE MODEL FOR OPTIMAL NOISE REDUCTION OPERATIONS

Non-Final OA (§102, §103)
Filed: May 06, 2024
Examiner: CAI, PHUONG HAU
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 87 granted / 107 resolved; +19.3% vs TC avg)
Interview Lift: +20.9% (strong), measured on resolved cases with interview
Typical Timeline: 3y 0m avg prosecution; 32 currently pending
Career History: 139 total applications across all art units

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§102: 21.3% (-18.7% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 107 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 6-9, 11-12 and 16-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Xiao Xiong et al. (foreign patent document CN 111861942 A, hereinafter "Xiong") (the mapping is based on the translation starting at page 22 of the included document, treated as page 1 of the translation).
Regarding claim 1, Xiong discloses a method comprising: receiving an input image frame captured by an image sensor (page 3, 2nd par., discloses the image to be denoised, indicating an input image captured by a camera according to page 6, 2nd par.); receiving a value map corresponding to the input image frame (page 2, 3rd par., discloses that before conversion of the noise reduction mask, a first area and a second area of the image are determined, and a first noise reduction intensity value and a second noise reduction intensity value are set to construct a noise reduction mask map; the determined image areas and their corresponding noise reduction intensity values therefore indicate a mapping between areas and values, which together is analogous to the recited value map under BRI); processing the input image frame to determine a processed image frame (page 2, 4th par., discloses that further processing of the image takes place, resulting in a determined processed image frame), wherein the processing comprises determining an updated value map based on the processing of the input image frame (the constructed noise reduction map discussed above, per page 2, 3rd par., is analogous to the recited updated value map, which is based on processing of the input image frame); and applying a noise reduction filter to the processed image frame based on the updated value map (page 1 describes that the noise reduction map is used to denoise the image, indicating a noise reduction filtering process, as further explained at page 5, 3rd-to-last par.), wherein a strength of the noise reduction filter applied to each pixel of the input image frame is based on a corresponding value of the updated value map (page 1 discloses that the noise reduction is based on the denoising intensity values of the pixels in the image, indicating that the strength of the filtering applied to each pixel is based on the intensity values of the updated value map).

Regarding claim 2, Xiong discloses the method of claim 1, wherein the value map comprises a noise map characterizing (as discussed above in claim 1, and as disclosed at page 2, 2nd to 4th pars., the image's determined areas include corresponding intensity values, including the noise intensity value [per page 2, 4th-to-last pars.], used to denoise the image according to the noise intensity of each area, indicating a noise map characterizing), for each value in the noise map, an accumulated noise variance for a corresponding pixel of the image sensor (as disclosed at page 2, 4th-to-last par., for each noise intensity value of each pixel, a first drop is determined according to the corresponding fusion degree/variance of the pixels in the area, over the noise intensity values of all the pixels contained in that area, indicating an accumulated noise variance covering all the pixels in that area, as further explained at page 7, 4th-to-last par.).

Regarding claim 6, Xiong discloses the method of claim 1, wherein the value map is received from the image sensor (since the input image is obtained from the image sensor and the value map is obtained by processing the input image, the value map can be understood as received from the image sensor).
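The claim 1 flow mapped above (receive frame and value map, then filter with per-pixel strength taken from the map) can be sketched in a few lines. This is a minimal illustration of the claimed concept only, not Xiong's actual algorithm; the function name and the box-blur stand-in filter are assumptions for illustration:

```python
import numpy as np

def denoise_with_value_map(frame, value_map):
    """Apply noise reduction whose per-pixel strength follows a value map.

    Illustrative sketch: pixels with a higher accumulated-noise value in
    the map receive stronger smoothing (a 3x3 box blur stands in for the
    noise reduction filter).
    """
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    # 3x3 box blur: average of the nine shifted windows.
    blurred = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    # Normalize the value map to [0, 1] per-pixel filter strengths.
    strength = value_map / (value_map.max() + 1e-12)
    # Per-pixel blend: strength 0 keeps the pixel, strength 1 fully smooths it.
    return (1.0 - strength) * frame + strength * blurred
```

A zero-valued map leaves the frame untouched, while a uniform maximal map applies the full filter everywhere, matching the "strength ... based on a corresponding value of the updated value map" limitation.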
Regarding claim 7, Xiong discloses the method of claim 1, wherein processing the input image frame comprises: adjusting pixel values of the input image frame when determining the processed image frame (page 1 discloses that the denoising includes determining a chromaticity interval for each pixel and converting the noise reduction intensity value of each pixel into a chromaticity value, which is analogous to adjusting pixel values as claimed); and adjusting corresponding values of the value map according to the adjusted pixel values of the input image frame when determining the processed image frame (page 8, last 6 pars., discloses that the determined chromaticity values are used to denoise the image at individual pixels, indicating that each pixel is denoised [adjusted pixel value] according to the chromaticity value [the adjusted pixel values of the input image]).

Regarding claim 8, Xiong discloses the method of claim 7, wherein the input image frame is processed over a plurality of stages (figure 1 shows the image being processed over a plurality of stages), wherein the pixel values are adjusted based on processing operations performed at one or more stages of the plurality of stages (as discussed above in claim 7, the conversion of the noise reduction intensity value into a chromaticity value is a step separate from the step of denoising by adjusting the pixel values according to the determined chromaticity value; the pixel values are therefore adjusted based on steps [processing operations] at one of the stages, such as the stages of figure 1), and wherein the corresponding values of the value map are adjusted at each of the one or more stages based on the adjusted pixel values at that stage (as discussed previously, the denoising by adjusting the pixel value according to the chromaticity value happens at another step; the corresponding values of the value map are thus adjusted at each stage based on the adjusted pixel values at that stage, since the intensity value is adjusted as well and the values of the value map are adjusted analogously).

Regarding claim 9, Xiong discloses the method of claim 8, wherein the processing operations performed at each stage adjust the pixel values in one or more areas of the input image frame (page 8, 6th-to-last par., discloses that the noise reduction processing happens in different areas, indicating that the pixel values are adjusted at each stage in one or more areas of the input image frame), and wherein adjusting the corresponding values of the value map at each stage comprises (as discussed above in claims 7 and 8): calculating an accumulated variance for each pixel of the one or more areas based on the adjusted pixel values (as disclosed at page 2, 4th-to-last par., for each noise intensity value of each pixel, a first drop is determined according to the corresponding fusion degree/variance of the pixels in the area, indicating an accumulated noise variance covering all the pixels in that area, as further explained at page 7, 4th-to-last par.); and adjusting the corresponding value of the value map for each pixel based on the accumulated variance calculated for that pixel (as disclosed at page 2, the denoising is conducted according to the calculated variance of the pixels).
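The staged processing recited in claims 7-9 (each stage adjusts pixel values and then updates the value map to match) can be illustrated with a simple gain-stage model. This is a sketch under the editor's assumption of multiplicative stages, not a mapping from any reference: when a stage scales pixels by g, a noise-variance map must scale by g squared to remain an accumulated per-pixel variance estimate.

```python
import numpy as np

def process_stages(frame, value_map, stage_gains):
    """Run a frame through several multiplicative stages, propagating the
    accumulated per-pixel noise variance tracked in the value map.

    Illustrative sketch only: each stage adjusts pixel values (claim 7),
    and the value map is adjusted at each stage based on that adjustment
    (claim 8), here via the variance scaling rule var(g*x) = g**2 * var(x).
    """
    for g in stage_gains:
        frame = frame * g                 # stage adjusts pixel values
        value_map = value_map * (g ** 2)  # variance scales with gain squared
    return frame, value_map
```

After two stages with gains 2 and 3, the frame is scaled by 6 while the variance map is scaled by 36, showing why the map must be updated per stage rather than only once at the end.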
Regarding claim 11, Xiong discloses an apparatus, comprising: a memory storing processor-readable code; and at least one processor coupled to the memory, the at least one processor configured to execute the processor-readable code to cause the at least one processor to perform operations including (page 1 discloses that the processing is performed using a computer, which can be understood to include a memory storing code to be executed by a processor): receiving an input image frame captured by an image sensor (page 3, 2nd par., discloses the image to be denoised, indicating an input image captured by a camera according to page 6, 2nd par.); receiving a value map corresponding to the input image frame (page 2, 3rd par., discloses that before conversion of the noise reduction mask, a first area and a second area of the image are determined, and a first noise reduction intensity value and a second noise reduction intensity value are set to construct a noise reduction mask map; the determined image areas and their corresponding noise reduction intensity values therefore indicate a mapping between areas and values, which together is analogous to the recited value map under BRI); processing the input image frame to determine a processed image frame (page 2, 4th par., discloses that further processing of the image takes place, resulting in a determined processed image frame), wherein the processing comprises determining an updated value map based on the processing of the input image frame (the constructed noise reduction map discussed above, per page 2, 3rd par., is analogous to the recited updated value map, which is based on processing of the input image frame); and applying a noise reduction filter to the processed image frame based on the updated value map (page 1 describes that the noise reduction map is used to denoise the image, indicating a noise reduction filtering process, as further explained at page 5, 3rd-to-last par.), wherein a strength of the noise reduction filter applied to each pixel of the input image frame is based on a corresponding value of the updated value map (page 1 discloses that the noise reduction is based on the denoising intensity values of the pixels in the image, indicating that the strength of the filtering applied to each pixel is based on the intensity values of the updated value map).

Regarding claim 12, Xiong discloses the apparatus of claim 11, wherein the value map comprises a noise map characterizing (as discussed above in claim 11, and as disclosed at page 2, 2nd to 4th pars., the image's determined areas include corresponding intensity values, including the noise intensity value [per page 2, 4th-to-last pars.], used to denoise the image according to the noise intensity of each area, indicating a noise map characterizing), for each value in the noise map, an accumulated noise variance for a corresponding pixel of the image sensor (as disclosed at page 2, 4th-to-last par., for each noise intensity value of each pixel, a first drop is determined according to the corresponding fusion degree/variance of the pixels in the area, indicating an accumulated noise variance covering all the pixels in that area, as further explained at page 7, 4th-to-last par.).
Regarding claim 16, Xiong discloses an image capture device, comprising: an image sensor; a memory storing processor-readable code; and at least one processor coupled to the memory and to the image sensor, the at least one processor configured to execute the processor-readable code to cause the at least one processor to (page 1 discloses that the processing is performed using a computer, which can be understood to include a memory storing code to be executed by a processor): receive an input image frame captured by an image sensor (page 3, 2nd par., discloses the image to be denoised, indicating an input image captured by a camera according to page 6, 2nd par.); receive a value map corresponding to the input image frame (page 2, 3rd par., discloses that before conversion of the noise reduction mask, a first area and a second area of the image are determined, and a first noise reduction intensity value and a second noise reduction intensity value are set to construct a noise reduction mask map; the determined image areas and their corresponding noise reduction intensity values therefore indicate a mapping between areas and values, which together is analogous to the recited value map under BRI); process the input image frame to determine a processed image frame (page 2, 4th par., discloses that further processing of the image takes place, resulting in a determined processed image frame), wherein the processing comprises determining an updated value map based on the processing of the input image frame (the constructed noise reduction map discussed above, per page 2, 3rd par., is analogous to the recited updated value map, which is based on processing of the input image frame); and apply a noise reduction filter to the processed image frame based on the updated value map (page 1 describes that the noise reduction map is used to denoise the image, indicating a noise reduction filtering process, as further explained at page 5, 3rd-to-last par.), wherein a strength of the noise reduction filter applied to each pixel of the input image frame is based on a corresponding value of the updated value map (page 1 discloses that the noise reduction is based on the denoising intensity values of the pixels in the image, indicating that the strength of the filtering applied to each pixel is based on the intensity values of the updated value map).

Regarding claim 17, Xiong discloses the apparatus of claim 16, wherein the value map comprises a noise map characterizing (as discussed above in claim 16, and as disclosed at page 2, 2nd to 4th pars., the image's determined areas include corresponding intensity values, including the noise intensity value [per page 2, 4th-to-last pars.], used to denoise the image according to the noise intensity of each area, indicating a noise map characterizing), for each value in the noise map, an accumulated noise variance for a corresponding pixel of the image sensor (as disclosed at page 2, 4th-to-last par., for each noise intensity value of each pixel, a first drop is determined according to the corresponding fusion degree/variance of the pixels in the area, indicating an accumulated noise variance covering all the pixels in that area, as further explained at page 7, 4th-to-last par.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3-4, 13-14 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Xiao Xiong et al. (foreign patent document CN 111861942 A, hereinafter "Xiong") in view of Laurent Blanquart et al. (US 9,191,598 B2, hereinafter "Blanquart").

Regarding claim 3, Xiong discloses the method of claim 2 (as discussed above in claim 2). However, Xiong does not explicitly disclose wherein the noise map characterizes pixel noise levels associated with a first operating mode of the image sensor used to capture the input image frame. In the same field of noise reduction in images (title and abstract, Blanquart), Blanquart discloses wherein the noise map characterizes pixel noise levels associated with a first operating mode of the image sensor used to capture the input image frame (FIG. 25, at step 2516, a noise map is stored and used for setting/adjusting the photosensor scanning mode, as disclosed in column 3, 2nd-to-last par., and further in column 5, 4th par.; the noise map is used to reference the photosensor scanning mode on which the noise level is based, as disclosed in column 5, 3rd par., and therefore characterizes pixel noise levels according to the scanning mode as claimed). Thus, it would have been obvious for a person of ordinary skill in the art before the effective filing date to modify Xiong to determine a noise map wherein the noise map characterizes pixel noise levels associated with a first operating mode of the image sensor used to capture the input image frame, as taught by Blanquart, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to perform noise removal and adjustment of the photosensor more efficiently (abstract, Blanquart).
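The idea of a noise map tied to a sensor operating mode, as mapped to Blanquart above, can be sketched with a small calibration table. The dark-frame calibration approach and all names below are assumptions for illustration, not Blanquart's actual disclosure:

```python
import numpy as np

def build_mode_noise_maps(dark_frames_by_mode):
    """For each sensor operating mode (e.g. a gain setting), characterize
    per-pixel noise as the temporal variance of a stack of dark frames
    captured in that mode. Illustrative sketch only."""
    return {
        mode: np.var(np.stack(frames), axis=0)
        for mode, frames in dark_frames_by_mode.items()
    }

def noise_map_for_capture(noise_maps, mode):
    """Select the noise map matching the operating mode used to capture
    the input image frame, as in the claim 3 limitation."""
    return noise_maps[mode]
```

Keyed this way, a downstream denoiser can look up the map for whichever mode (resolution, frame rate, or gain setting) produced the frame.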
Regarding claim 4, Xiong in view of Blanquart discloses the method of claim 3 (as discussed above in claim 3), wherein the first operating mode is one of a plurality of operating modes associated with the image sensor (Blanquart, column 5, 4th par., discloses that the photosensor scanning is reset and adjusted according to the noise map; each resetting or adjusting of the photosensor indicates that the camera can be used in different settings/operating modes associated with different noise levels per the stored noise map), and wherein each of the operating modes varies at least one of a resolution, a frame rate, or a gain setting used to capture the input image frame ("at least one of ... or ..." indicates a selection, so only one option is within the instant scope of the limitation; the examiner selects "a gain setting" for mapping, which is disclosed in Blanquart's column 5, 4th par., wherein a gain is applied to the scanning of the photosensor as the scanning mode is carried out). The same motivation for the combination of references applies to claim 4 as stated above for claim 3.

Regarding claim 13, Xiong discloses the apparatus of claim 12 (as discussed above in claim 12). However, Xiong does not explicitly disclose wherein the noise map characterizes pixel noise levels associated with a first operating mode of the image sensor used to capture the input image frame. In the same field of noise reduction in images (title and abstract, Blanquart), Blanquart discloses wherein the noise map characterizes pixel noise levels associated with a first operating mode of the image sensor used to capture the input image frame (FIG. 25, at step 2516, a noise map is stored and used for setting/adjusting the photosensor scanning mode, as disclosed in column 3, 2nd-to-last par., and further in column 5, 4th par.; the noise map is used to reference the photosensor scanning mode on which the noise level is based, as disclosed in column 5, 3rd par., and therefore characterizes pixel noise levels according to the scanning mode as claimed). Thus, it would have been obvious for a person of ordinary skill in the art before the effective filing date to modify Xiong to determine a noise map wherein the noise map characterizes pixel noise levels associated with a first operating mode of the image sensor used to capture the input image frame, as taught by Blanquart, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to perform noise removal and adjustment of the photosensor more efficiently (abstract, Blanquart).
Regarding claim 14, Xiong in view of Blanquart discloses the apparatus of claim 13 (as discussed above in claim 13), wherein the first operating mode is one of a plurality of operating modes associated with the image sensor (Blanquart, column 5, 4th par., discloses that the photosensor scanning is reset and adjusted according to the noise map; each resetting or adjusting of the photosensor indicates that the camera can be used in different settings/operating modes associated with different noise levels per the stored noise map), and wherein each of the operating modes varies at least one of a resolution, a frame rate, or a gain setting used to capture the input image frame ("at least one of ... or ..." indicates a selection, so only one option is within the instant scope of the limitation; the examiner selects "a gain setting" for mapping, which is disclosed in Blanquart's column 5, 4th par., wherein a gain is applied to the scanning of the photosensor as the scanning mode is carried out). The same motivation for the combination of references applies to claim 14 as stated above for claim 13.

Regarding claim 18, Xiong discloses the image capture device of claim 17 (as discussed above in claim 17). However, Xiong does not explicitly disclose wherein the noise map characterizes pixel noise levels associated with a first operating mode of the image sensor used to capture the input image frame. In the same field of noise reduction in images (title and abstract, Blanquart), Blanquart discloses wherein the noise map characterizes pixel noise levels associated with a first operating mode of the image sensor used to capture the input image frame (FIG. 25, at step 2516, a noise map is stored and used for setting/adjusting the photosensor scanning mode, as disclosed in column 3, 2nd-to-last par., and further in column 5, 4th par.; the noise map is used to reference the photosensor scanning mode on which the noise level is based, as disclosed in column 5, 3rd par., and therefore characterizes pixel noise levels according to the scanning mode as claimed). Thus, it would have been obvious for a person of ordinary skill in the art before the effective filing date to modify Xiong to determine a noise map wherein the noise map characterizes pixel noise levels associated with a first operating mode of the image sensor used to capture the input image frame, as taught by Blanquart, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to perform noise removal and adjustment of the photosensor more efficiently (abstract, Blanquart).
Regarding claim 19, Xiong in view of Blanquart discloses the image capture device of claim 18 (as discussed above in claim 18), wherein the first operating mode is one of a plurality of operating modes associated with the image sensor (Blanquart, column 5, 4th par., discloses that the photosensor scanning is reset and adjusted according to the noise map; each resetting or adjusting of the photosensor indicates that the camera can be used in different settings/operating modes associated with different noise levels per the stored noise map), and wherein each of the operating modes varies at least one of a resolution, a frame rate, or a gain setting used to capture the input image frame ("at least one of ... or ..." indicates a selection, so only one option is within the instant scope of the limitation; the examiner selects "a gain setting" for mapping, which is disclosed in Blanquart's column 5, 4th par., wherein a gain is applied to the scanning of the photosensor as the scanning mode is carried out). The same motivation for the combination of references applies to claim 19 as stated above for claim 18.

Claims 5, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Xiao Xiong et al. (foreign patent document CN 111861942 A, hereinafter "Xiong") in view of Robin Diekmann et al. ("Photon-free (s)CMOS camera characterization for artifact reduction in high- and super-resolution microscopy," June 2022, Nature Communications 13, Article number 3362, hereinafter "Diekmann").

Regarding claim 5, Xiong discloses the method of claim 1 (as discussed above in claim 1). However, Xiong does not explicitly disclose wherein the updated value map comprises a gain map characterizing, for each value in the gain map, an accumulated noise variance as a function of gain for a corresponding pixel of the image sensor.
In the same field of camera image noise determination (abstract, Diekmann), Diekmann discloses wherein the updated value map comprises a gain map characterizing, for each value in the gain map, an accumulated noise variance (page 5, 2nd-to-last par., discloses the calculation of a gain map for all pixels for noise determination and denoising, including accumulated variance for the pixels, as shown in figure 1) as a function of gain for a corresponding pixel of the image sensor (as disclosed at page 2, 1st column, 2nd-to-last par., the gain is calculated as the ratio of the variance and the mean signal [a formula as a function of gain] for the pixels of the image obtained from the image sensor [including each individual pixel being processed]). Thus, it would have been obvious for a person of ordinary skill in the art before the effective filing date to modify Xiong to generate an updated value map comprising a gain map characterizing, for each value in the gain map, an accumulated noise variance as a function of gain for a corresponding pixel of the image sensor, as taught by Diekmann, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to denoise an image effectively using camera characterization of the image signal (abstract, Diekmann).

Regarding claim 15, Xiong discloses the apparatus of claim 11 (as discussed above in claim 11). However, Xiong does not explicitly disclose wherein the updated value map comprises a gain map characterizing, for each value in the gain map, an accumulated noise variance as a function of gain for a corresponding pixel of the image sensor.
In the same field of camera image noise determination (abstract, Diekmann), Diekmann discloses wherein the updated value map comprises a gain map characterizing, for each value in the gain map, an accumulated noise variance (page 5, 2nd-to-last par., discloses the calculation of a gain map for all pixels for noise determination and denoising, including accumulated variance for the pixels, as shown in figure 1) as a function of gain for a corresponding pixel of the image sensor (as disclosed at page 2, 1st column, 2nd-to-last par., the gain is calculated as the ratio of the variance and the mean signal [a formula as a function of gain] for the pixels of the image obtained from the image sensor [including each individual pixel being processed]). Thus, it would have been obvious for a person of ordinary skill in the art before the effective filing date to modify Xiong to generate an updated value map comprising a gain map characterizing, for each value in the gain map, an accumulated noise variance as a function of gain for a corresponding pixel of the image sensor, as taught by Diekmann, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to denoise an image effectively using camera characterization of the image signal (abstract, Diekmann).

Regarding claim 20, Xiong discloses the image capture device of claim 16 (as discussed above in claim 16). However, Xiong does not explicitly disclose wherein the updated value map comprises a gain map characterizing, for each value in the gain map, an accumulated noise variance as a function of gain for a corresponding pixel of the image sensor.
In the same field of camera image noise determination (abstract, Diekmann), Diekmann discloses wherein the updated value map comprises a gain map characterizing, for each value in the gain map, an accumulated noise variance (page 5, 2nd-to-last par., discloses the calculation of a gain map for all pixels for noise determination and denoising, including accumulated variance for the pixels, as shown in figure 1) as a function of gain for a corresponding pixel of the image sensor (as disclosed at page 2, 1st column, 2nd-to-last par., the gain is calculated as the ratio of the variance and the mean signal [a formula as a function of gain] for the pixels of the image obtained from the image sensor [including each individual pixel being processed]). Thus, it would have been obvious for a person of ordinary skill in the art before the effective filing date to modify Xiong to generate an updated value map comprising a gain map characterizing, for each value in the gain map, an accumulated noise variance as a function of gain for a corresponding pixel of the image sensor, as taught by Diekmann, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to denoise an image effectively using camera characterization of the image signal (abstract, Diekmann).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Xiao Xiong et al. (foreign patent document CN 111861942 A, hereinafter "Xiong") in view of Horng-Horng Lin et al. ("Learning a Scene Background Model via Classification," May 2009, IEEE Transactions on Signal Processing, Vol. 57, No. 5, hereinafter "Lin").
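The variance-over-mean relation cited from Diekmann for claims 5, 15 and 20 can be sketched directly. This is an illustrative photon-transfer-style estimate under the editor's assumptions (a stack of repeated frames, shot-noise-limited pixels), not Diekmann's full characterization procedure:

```python
import numpy as np

def per_pixel_gain_map(frame_stack):
    """Estimate a per-pixel gain map as temporal variance / mean signal.

    Sketch of the relation gain = variance / mean for shot-noise-limited
    pixels, tying accumulated noise variance to gain per pixel as in the
    claim 5 gain-map limitation.
    """
    stack = np.stack(frame_stack)
    mean = stack.mean(axis=0)
    var = stack.var(axis=0)
    return var / np.maximum(mean, 1e-12)  # guard against divide-by-zero
```

For two uniform frames of values 2 and 4, each pixel has mean 3 and variance 1, so the estimated gain is 1/3 everywhere.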
Regarding claim 10, Xiong discloses the method of claim 8 (as discussed above in claim 8), and wherein the corresponding values of the value map are further adjusted based on the content determined for each of the one or more areas (as discussed previously, the denoising by adjusting the pixel value according to the chromaticity value happens at another step; hence the corresponding values of the value map are adjusted at those stages based on the adjusted pixel values, and since the intensity value is being adjusted as well, the values of the value map are also adjusted analogously, including determining a background region and its corresponding intensity value to be adjusted through the noise reduction process for the area, such as disclosed in page 3, 2nd par.).

However, Xiong does not explicitly disclose wherein the plurality of stages includes at least one stage in which image recognition operations are performed to determine a content of a scene in one or more areas of the input image frame.

In the same field of image background processing (title and abstract, Lin), Lin discloses wherein the plurality of stages includes at least one stage in which image recognition operations are performed to determine a content of a scene in one or more areas of the input image frame (section I, 2nd par., discloses the background region, considered as the content of the scene of the image, which is determined according to image processing/image recognition operations such as illustrated in figure 3 and algorithm 1).
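The content-based adjustment at issue in claim 10 — scaling value-map entries differently for regions that image recognition has labeled (e.g., background vs. foreground) — might be sketched as below. All names here are hypothetical illustrations of the claim language, not the actual methods of Xiong or Lin.

```python
import numpy as np

def adjust_value_map(value_map, region_labels, strength):
    """Scale noise-reduction values per recognized region.

    value_map:     per-pixel noise-reduction values
    region_labels: integer label map, same shape as value_map
                   (e.g., 0 = background, 1 = foreground)
    strength:      dict mapping a label to a multiplier
    """
    out = np.asarray(value_map, dtype=np.float64).copy()
    labels = np.asarray(region_labels)
    for label, factor in strength.items():
        out[labels == label] *= factor
    return out
```

For example, boosting denoising in background regions while preserving foreground detail would use a multiplier above 1.0 for the background label and below 1.0 for the foreground label.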
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify Xiong so that image recognition operations are performed to determine a content of a scene in one or more areas of the input image frame, and wherein the corresponding values of the value map are further adjusted based on the content determined for each of the one or more areas, as taught by Lin, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to determine image blocks more effectively (abstract, Lin).

Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Suk Hwan Lim, “US 20170061584 A1,” discloses a pipeline for image processing and filtering based on filtering strength for image data (abstract) according to locations in the image where noise can be best estimated for a noise estimation process ([0050]), based on calculation of a gain variance according to the noise standard deviation ([0052]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHUONG HAU CAI, whose telephone number is (571) 272-9424. The examiner can normally be reached M-F, 8:30 am - 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PHUONG HAU CAI/Examiner, Art Unit 2673 /CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

May 06, 2024 · Application Filed
Feb 15, 2026 · Non-Final Rejection, §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602833
IMAGE ANALYSIS DEVICE AND IMAGE ANALYSIS METHOD
2y 5m to grant · Granted Apr 14, 2026
Patent 12602940
SINGLE CELL IDENTIFICATION FOR CELL SORTING
2y 5m to grant · Granted Apr 14, 2026
Patent 12597223
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM
2y 5m to grant · Granted Apr 07, 2026
Patent 12592064
METHOD AND APPARATUS FOR TRAINING TARGET DETECTION MODEL, METHOD AND APPARATUS FOR DETECTING TARGET
2y 5m to grant · Granted Mar 31, 2026
Patent 12591616
METHOD, SYSTEM AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR SEARCHING SIMILAR PRODUCTS USING A MULTI TASK LEARNING MODEL
2y 5m to grant · Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+20.9%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 107 resolved cases by this examiner. Grant probability derived from career allow rate.
