Prosecution Insights
Last updated: April 19, 2026
Application No. 18/652,278

ELECTRONIC DEVICE AND METHOD WITH IMAGE NOISE REMOVAL

Status: Non-Final OA (§112)
Filed: May 01, 2024
Examiner: GE, JIN
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Gwangju Institute of Science and Technology
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80%, above average (416 granted / 520 resolved; +18.0% vs TC avg)
Interview Lift: +18.0% for resolved cases with interview
Avg Prosecution: 2y 9m (38 applications currently pending)
Total Applications: 558 across all art units

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 520 resolved cases.

Office Action

§112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of Korean patent application No. KR10-2023-0059863, filed on 05/09/2023, has been received and made of record.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/01/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Election/Restrictions

Claim 13 is withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected species, there being no allowable generic or linking claim. Applicant's election without traverse of claims 1-12 and 14-20 in the reply filed on 02/04/2026 is acknowledged.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-12 and 14-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Independent claims 1 and 14 recite the limitation "determine a bandwidth based on the common input and one of the third input or the fourth input". This "bandwidth" limitation is unclear because, under its ordinary definition, bandwidth refers to the maximum rate of data transfer across a network path, i.e., the data-carrying capacity, often measured in bits per second (bps). The scope of the claims is therefore indefinite: it is not clear how the claimed bandwidth is defined in relation to the common input and one of the third input or the fourth input, nor how it is calculated.

Allowable Subject Matter

Claims 1-12 and 14-20 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action.

Regarding independent claims 1 and 14, the closest prior art, Panteleev as modified by Li et al., discloses a method of operating an electronic device comprising processing hardware and storage hardware (Panteleev: Fig 11, par 0068, par 0085) and an electronic device comprising: one or more processors; and storage storing instructions configured to cause the one or more processors to (Panteleev: Fig 11, par 0068, par 0085), the method comprising:

generating, by the processing hardware, and storing in the storage hardware (Panteleev: Fig 11, par 0068, par 0085), a common input comprising an initial image and geometry buffer (G-buffer) images rendered according to a view point of a current frame (Panteleev: abstract, "Backward projection can be performed for each stratum in a current frame, a stratum representing a set of adjacent pixels. A pixel from each stratum is selected that has a matching surface in the previous frame, using motion vectors generated during the rendering process.
A comparison of the depth of the normals, or the visibility buffer data, can be used to determine whether a given surface is the same in the current frame and the previous frame, and if so then parameters of the surface from the previous frame G-buffer is used to patch the G-buffer for the current frame", par 0023, "The location, size, and orientation of any of those objects may change, whether static or dynamic, based at least in part upon motion of a virtual camera that is used to determine factors, such as a point of view or zoom level, for the image. In such image or video sequences, these changes in view, location, size, and orientation can be viewed as a set of motions of individual pixels used to represent those objects", par 0033, "G-buffer patching can involve taking certain parameters of the surface from the previous frame G-buffer (or other relevant buffer or cache) and writing them into the gradient pixel of the current frame G-buffer.", par 0035-0036, "pixel data 402 for a current frame to be rendered (as may include G-buffer data for primary surfaces) can be received as input to a reflections and refractions component 404 of a rendering system. As mentioned, the reflections and refractions component 404 can use this data to attempt to determine data for any determined reflections and/or refractions in the pixel data, and can provide this data to a back-projection and G-buffer patching component 406, which can perform back-propagation as discussed herein to locate corresponding points for those reflections and refractions, and use this data to patch the G-buffer 418, which can provide updated input for a subsequent frame to be rendered");

generating, by the processing hardware, and storing in the storage hardware (Panteleev: Fig 11, par 0068, par 0085, Li et al.: Fig 18), a third input by reprojecting, onto the view point, a result obtained by adding an initial image, that is rendered in a prior frame that is prior to the current frame, to a first image, wherein the first image is one of a first input generated in the prior frame and a second input generated in the prior frame (Li et al.: page 2, lines 16-23, "A first aspect provides an image rendering method, the method comprising: acquiring a first image, a second image and a third image, where the first image, the second image and the third image are three consecutive images; An image updates the light map of the second image to obtain the updated light map of the second image; the updated light map of the second image is input into the super-division denoising network to obtain the super-division and denoising image of the second image; according to The second image updates the light map of the third image to obtain the updated light map of the third image; the updated light map of the third image is input into the super-division denoising network to obtain the super-division and denoising image of the third image", page 2, lines 30-34, "With reference to the first aspect, in some implementations of the first aspect, updating the illumination map of the second image according to the first image to obtain the updated illumination map of the second image includes: acquiring the illumination map of the second image, The light map of the two images includes the color values of multiple pixels, and the light map of the second image is a direct light map or an indirect light map; obtain the second pixel corresponding to the first pixel in the first image, the first pixel is any one of a plurality of pixel points; according to the color value of the first pixel point and the color value of the second pixel point, update the color value of the first pixel point to obtain the updated light map");

generating, by the processing hardware, and storing in the storage hardware (Panteleev: Fig 11, par 0068, par 0085, Li et al.: Fig 18), a fourth input by reprojecting, onto the view point, a second image, wherein the second image is whichever of the first input and the second input is not the first image (Li et al.: "A first aspect provides an image rendering method, the method comprising: acquiring a first image, a second image and a third image, where the first image, the second image and the third image are three consecutive images; An image updates the light map of the second image to obtain the updated light map of the second image; the updated light map of the second image is input into the super-division denoising network to obtain the super-division and denoising image of the second image; according to The second image updates the light map of the third image to obtain the updated light map of the third image; the updated light map of the third image is input into the super-division denoising network to obtain the super-division and denoising image of the third image", page 4, lines 24-38, "an acquisition module configured to acquire a first image, a second image and a third image, wherein the first image, the second image and the third image are three consecutive images a frame image; a processing module for updating the light map of the second image according to the first image to obtain the light map after the second image update; the processing module is also used for inputting the light map after the second image update into super-score denoising network to obtain the super-divided and denoised image of the second image; the processing module is also used to update the illumination map of the third image according to the second image to obtain the updated illumination map of the third image; the processing module is also used to The light map after the image update is input to the super-division denoising network to obtain the super-division and de-noising image of the third image; the processing module is also used for the super-division and de-noising image of the second image and the super-division and de-noising image of the third image. The initial frame insertion image at the target moment is obtained, and the target moment is the moment between the second image and the third image; the processing module is further configured to input the initial frame insertion image into the bidirectional frame insertion network to obtain the frame insertion image at the target moment.
With reference to the second aspect, in some implementations of the second aspect, the processing module updates the illumination map of the second image according to the first image to obtain the updated illumination map of the second image, including: acquiring the illumination map of the second image, the light map of the second image includes the color values of multiple pixels, and the light map of the second image is a direct light map or an indirect light map; obtain the second pixel corresponding to the first pixel in the first image, the first The pixel point is any one of the plurality of pixel points; according to the color value of the first pixel point and the color value of the second pixel point, the color value of the first pixel point is updated to obtain the updated light map");

and outputting, by the processing hardware, and storing in the storage hardware (Panteleev: Fig 11, par 0068, par 0085, Li et al.: Fig 18), a target image obtained by removing noise from the initial image of the current frame based on the common input, the third input, the fourth input (Panteleev: par 0033, par 0037, "Using information from the lighting passes and the lighting results from the previous frame, gradients can be computed then filtered and used for history rejection. Such an approach can be used to compute robust temporal gradients between current and previous frames in a temporal denoiser for ray traced renderers. Such a backward projection-based approach can also work through reflections and refractions, and can work with rasterized G-buffers. Previous approaches for backward projection omitted any G-buffer patching and relied on the raw current G-buffer samples instead, which also results in false positive gradients. Patching the surface parameters can eliminate false positives in the vast majority of cases, making the denoised image very stable yet still quickly reacting to lighting changes", Li et al.: page 5, lines 10-15, "updating the color value of the first pixel point according to the color value of the first pixel point and the color value of the second pixel point includes: after the first pixel point is updated The color value of is the sum of multiplying the color value of the first pixel by the first coefficient and multiplying the color value of the second pixel by the second coefficient. With reference to the second aspect, in some implementations of the second aspect, the processing module inputs the updated light map of the second image into the super-division denoising network").

Examiner has not discovered any additional prior art which fully teaches independent claims 1 and 14, either singly or in an obvious combination of references, in particular, "determining, by the processing hardware, a bandwidth, wherein the determining of the bandwidth is based on the common input and one of the third input or the fourth input; and outputting, by the processing hardware, and storing in the storage hardware, a target image obtained by removing noise from the initial image of the current frame based on the common input, the third input, the fourth input, and the bandwidth".

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge whose telephone number is (571)272-5556. The examiner can normally be reached 8:00 to 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN GE/
Primary Examiner, Art Unit 2619
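The claim elements recited in the Office Action above outline a temporal denoising pipeline: build a common input from the current frame's initial image and G-buffer, reproject two history images onto the current view point, derive a "bandwidth", and blend. As a rough illustration only, those steps can be sketched as below; every function name, the nearest-neighbor reprojection, and the specific bandwidth formula are hypothetical stand-ins for the claim language, not the applicant's actual method. (In the denoising literature, "bandwidth" commonly refers to a filter kernel width, not data-transfer capacity.)

```python
import numpy as np

def reproject(image, motion):
    # Nearest-neighbor warp of a prior-frame image onto the current view point
    # using per-pixel motion vectors (simplified stand-in for the claimed step).
    h, w = image.shape[:2]
    ys, xs = np.indices((h, w))
    sy = np.clip(np.rint(ys - motion[..., 1]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs - motion[..., 0]).astype(int), 0, w - 1)
    return image[sy, sx]

def denoise_frame(initial, g_buffer, prior_initial, first_img, second_img, motion):
    # Common input: the current frame's initial image plus its G-buffer images.
    common = np.concatenate([initial, g_buffer], axis=-1)
    # Third input: (prior-frame initial image + first image), reprojected onto
    # the current view point.
    third = reproject(prior_initial + first_img, motion)
    # Fourth input: the other prior image, reprojected onto the current view point.
    fourth = reproject(second_img, motion)
    # Bandwidth: here, a per-pixel blend weight derived from the mismatch between
    # the common input and the third input (one plausible reading of the claim
    # term; the application may define it very differently).
    diff = np.abs(common[..., : third.shape[-1]] - third).mean(axis=-1, keepdims=True)
    bandwidth = 1.0 / (1.0 + diff)
    # Target image: history-weighted blend removing noise from the initial image
    # based on the common input, third input, fourth input, and bandwidth.
    history = 0.5 * (third + fourth)
    return bandwidth * history + (1.0 - bandwidth) * initial
```

When the reprojected history matches the current frame exactly, the bandwidth saturates at 1 and the output is pure history; as the mismatch grows, the blend falls back toward the current noisy image, which is the usual history-rejection behavior described in the cited Panteleev passages.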

Prosecution Timeline

May 01, 2024
Application Filed
Mar 22, 2026
Non-Final Rejection — §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592024
QUANTIFICATION OF SENSOR COVERAGE USING SYNTHETIC MODELING AND USES OF THE QUANTIFICATION
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12586296
METHODS AND PROCESSORS FOR RENDERING A 3D OBJECT USING MULTI-CAMERA IMAGE INPUTS
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12579704
VIDEO GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12573164
DESIGN DEVICE, PRODUCTION METHOD, AND STORAGE MEDIUM STORING DESIGN PROGRAM
Granted Mar 10, 2026 • 2y 5m to grant
Patent 12573151
PERSONALIZED DEFORMABLE MESH BY FINETUNING ON PERSONALIZED TEXTURE
Granted Mar 10, 2026 • 2y 5m to grant
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 98% (+18.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
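The 98% "With Interview" figure is consistent with a simple additive model: the 80% base grant probability plus the +18.0% interview lift, capped at 100%. That is an inference from the numbers shown on this page, not a documented formula of the tool; the helper below is a hypothetical sketch of that reading.

```python
def with_interview(base_pct: float, lift_pct: float) -> float:
    """Interview-adjusted grant probability under an assumed additive model,
    capped at 100%. (The tool's actual formula is not documented here.)"""
    return min(base_pct + lift_pct, 100.0)

# 80% career allow rate + 18.0% interview lift
print(with_interview(80.0, 18.0))  # -> 98.0
```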
