DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of Korean patent application No. KR10-2023-0059863, filed on 05/09/2023, has been received and made of record.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/01/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Election/Restrictions
Claim 13 is withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected species, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 02/04/2026.
Applicant’s election without traverse of claims 1-12 and 14-20 in the reply filed on 02/04/2026 is acknowledged.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-12 and 14-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Independent claims 1 and 14 recite the limitation “determine a bandwidth based on the common input and one of the third input or the fourth input”. The term “bandwidth” is unclear because, by its common definition, bandwidth refers to the maximum rate of data transfer across a network path, i.e., the data-carrying capacity, often measured in bits per second (bps). The scope of the claims is therefore rendered indefinite because it is not clear how a bandwidth is defined in relation to the common input and one of the third input or the fourth input, or how it is to be calculated.
Allowable Subject Matter
Claims 1-12 and 14-20 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action.
Regarding independent claims 1 and 14, the closest prior art, Panteleev as modified by Li et al., discloses a method of operating an electronic device comprising processing hardware and storage hardware (Panteleev: Fig 11, par 0068, par 0085) and an electronic device comprising: one or more processors; and storage storing instructions configured to cause the one or more processors to (Panteleev: Fig 11, par 0068, par 0085), the method comprising: generating, by the processing hardware, and storing in the storage hardware (Panteleev: Fig 11, par 0068, par 0085), a common input comprising an initial image and geometry buffer (G-buffer) images rendered according to a view point of a current frame (Panteleev: abstract, “Backward projection can be performed for each stratum in a current frame, a stratum representing a set of adjacent pixels. A pixel from each stratum is selected that has a matching surface in the previous frame, using motion vectors generated during the rendering process. A comparison of the depth of the normals, or the visibility buffer data, can be used to determine whether a given surface is the same in the current frame and the previous frame, and if so then parameters of the surface from the previous frame G-buffer is used to patch the G-buffer for the current frame”, par 0023, “The location, size, and orientation of any of those objects may change, whether static or dynamic, based at least in part upon motion of a virtual camera that is used to determine factors, such as a point of view or zoom level, for the image.
In such image or video sequences, these changes in view, location, size, and orientation can be viewed as a set of motions of individual pixels used to represent those objects”, par 0033, “G-buffer patching can involve taking certain parameters of the surface from the previous frame G-buffer (or other relevant buffer or cache) and writing them into the gradient pixel of the current frame G-buffer.”, par 0035-0036, “pixel data 402 for a current frame to be rendered (as may include G-buffer data for primary surfaces) can be received as input to a reflections and refractions component 404 of a rendering system. As mentioned, the reflections and refractions component 404 can use this data to attempt to determine data for any determined reflections and/or refractions in the pixel data, and can provide this data to a back-projection and G-buffer patching component 406, which can perform back-propagation as discussed herein to locate corresponding points for those reflections and refractions, and use this data to patch the G-buffer 418, which can provide updated input for a subsequent frame to be rendered”); generating, by the processing hardware, and storing in the storage hardware (Panteleev: Fig 11, par 0068, par 0085, Li et al.: Fig 18), a third input by reprojecting, onto the view point, a result obtained by adding an initial image, that is rendered in a prior frame that is prior to the current frame, to a first image, wherein the first image is one of a first input generated in the prior frame and a second input generated in the prior frame (Li et al.: page 2, lines 16-23, “A first aspect provides an image rendering method, the method comprising: acquiring a first image, a second image and a third image, where the first image, the second image and the third image are three consecutive images; An image updates the light map of the second image to obtain the updated light map of the second image; the updated light map of the second image is input into the 
super-division denoising network to obtain the super-division and denoising image of the second image; according to The second image updates the light map of the third image to obtain the updated light map of the third image; the updated light map of the third image is input into the super-division denoising network to obtain the super-division and denoising image of the third image”, page 2, lines 30-34, “With reference to the first aspect, in some implementations of the first aspect, updating the illumination map of the second image according to the first image to obtain the updated illumination map of the second image includes: acquiring the illumination map of the second image, The light map of the two images includes the color values of multiple pixels, and the light map of the second image is a direct light map or an indirect light map; obtain the second pixel corresponding to the first pixel in the first image, the first pixel is any one of a plurality of pixel points; according to the color value of the first pixel point and the color value of the second pixel point, update the color value of the first pixel point to obtain the updated light map”); generating, by the processing hardware, and storing in the storage hardware (Panteleev: Fig 11, par 0068, par 0085, Li et al.: Fig 18), a fourth input by reprojecting, onto the view point, a second image, wherein the second image is whichever of the first input and the second input is not the first image (Li et al.: “A first aspect provides an image rendering method, the method comprising: acquiring a first image, a second image and a third image, where the first image, the second image and the third image are three consecutive images; An image updates the light map of the second image to obtain the updated light map of the second image; the updated light map of the second image is input into the super-division denoising network to obtain the super-division and denoising image of the second image; according to 
The second image updates the light map of the third image to obtain the updated light map of the third image; the updated light map of the third image is input into the super-division denoising network to obtain the super-division and denoising image of the third image”, page 4, lines 24-38, “an acquisition module configured to acquire a first image, a second image and a third image, wherein the first image, the second image and the third image are three consecutive images a frame image; a processing module for updating the light map of the second image according to the first image to obtain the light map after the second image update; the processing module is also used for inputting the light map after the second image update into super-score denoising network to obtain the super-divided and denoised image of the second image; the processing module is also used to update the illumination map of the third image according to the second image to obtain the updated illumination map of the third image; the processing module is also used to The light map after the image update is input to the super-division denoising network to obtain the super-division and de-noising image of the third image; the processing module is also used for the super-division and de-noising image of the second image and the super-division and de-noising image of the third image. The initial frame insertion image at the target moment is obtained, and the target moment is the moment between the second image and the third image; the processing module is further configured to input the initial frame insertion image into the bidirectional frame insertion network to obtain the frame insertion image at the target moment. 
With reference to the second aspect, in some implementations of the second aspect, the processing module updates the illumination map of the second image according to the first image to obtain the updated illumination map of the second image, including: acquiring the illumination map of the second image , the light map of the second image includes the color values of multiple pixels, and the light map of the second image is a direct light map or an indirect light map; obtain the second pixel corresponding to the first pixel in the first image, the first The pixel point is any one of the plurality of pixel points; according to the color value of the first pixel point and the color value of the second pixel point, the color value of the first pixel point is updated to obtain the updated light map”); and outputting, by the processing hardware, and storing in the storage hardware (Panteleev: Fig 11, par 0068, par 0085, Li et al.: Fig 18), a target image obtained by removing noise from the initial image of the current frame based on the common input, the third input, the fourth input (Panteleev: par 0033, par 0037, “Using information from the lighting passes and the lighting results from the previous frame, gradients can be computed then filtered and used for history rejection. Such an approach can be used to compute robust temporal gradients between current and previous frames in a temporal denoiser for ray traced renderers. Such a backward projection-based approach can also work through reflections and refractions, and can work with rasterized G-buffers. Previous approaches for backward projection omitted any G-buffer patching and relied on the raw current G-buffer samples instead, which also results in false positive gradients. 
Patching the surface parameters can eliminate false positives in the vast majority of cases, making the denoised image very stable yet still quickly reacting to lighting changes”, Li et al.: page 5, lines 10-15, “updating the color value of the first pixel point according to the color value of the first pixel point and the color value of the second pixel point includes: after the first pixel point is updated The color value of is the sum of multiplying the color value of the first pixel by the first coefficient and multiplying the color value of the second pixel by the second coefficient. With reference to the second aspect, in some implementations of the second aspect, the processing module inputs the updated light map of the second image into the super-division denoising network”).
The examiner has not discovered any additional prior art that fully teaches independent claims 1 and 14, either singly or in an obvious combination of references, in particular, “determining, by the processing hardware, a bandwidth, wherein the determining of the bandwidth is based on the common input and one of the third input or the fourth input; and outputting, by the processing hardware, and storing in the storage hardware, a target image obtained by removing noise from the initial image of the current frame based on the common input, the third input, the fourth input, and the bandwidth”.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge, whose telephone number is (571) 272-5556. The examiner can normally be reached from 8:00 to 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JIN GE
Examiner
Art Unit 2619
/JIN GE/Primary Examiner, Art Unit 2619