Prosecution Insights
Last updated: April 19, 2026
Application No. 18/478,511

SYSTEM, DEVICES AND/OR PROCESSES FOR IMAGE FRAME UPSCALING

Status: Non-Final OA (§103), Round 3
Filed: Sep 29, 2023
Examiner: TRUONG, KARL DUC
Art Unit: 2614 (Tech Center 2600 — Communications)
Assignee: Arm Limited

Predictions: 52% grant probability (moderate) • 3-4 OA rounds • 2y 7m to grant • 83% with interview

Examiner Intelligence

Career allow rate: 52% (15 granted / 29 resolved; -10.3% vs TC avg)
Interview lift: +31.0% (strong), measured on resolved cases with interview
Typical timeline: 2y 7m avg prosecution; 45 applications currently pending
Career history: 74 total applications across all art units

Statute-Specific Performance

§101:  3.2%  (-36.8% vs TC avg)
§102:  9.5%  (-30.5% vs TC avg)
§103: 85.3%  (+45.3% vs TC avg)
§112:  2.1%  (-37.9% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 29 resolved cases
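As a quick sanity check, the headline numbers above are internally consistent. A minimal sketch, assuming (this is an assumption about the tool, not something the report states) that the with-interview figure is simply the baseline grant probability plus the examiner's interview lift:

```python
# Counts come from the report above; the additive-lift model is an assumption.
granted, resolved = 15, 29

# Career allowance rate: granted share of resolved cases.
allow_rate = 100 * granted / resolved      # ~51.7%, displayed as 52%

# Grant probability with interview = baseline + interview lift (assumed additive).
baseline, interview_lift = 52.0, 31.0
with_interview = baseline + interview_lift  # 83.0

print(round(allow_rate), with_interview)    # 52 83.0
```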

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 30th December, 2025 has been entered.

Response to Amendment

This action is in response to the amendment filed on 30th December, 2025. Claims 1, 7-8, and 20 have been amended. Claims 6, 9, and 12 have been cancelled. Claims 1-5, 7-8, 10-11, and 13-20 remain rejected in the application.

Response to Arguments

Applicant's arguments with respect to Claims 1 and 20, filed on 30th December, 2025, regarding the rejection under 35 U.S.C. § 103, specifically that the prior art does not teach the limitation(s) "each portion of the different portions is upscaled by a trained neural network selected from among the one or more trained neural networks based, at least in part, on at least one shading rate applied in rendering the portion," have been fully considered but are moot in view of the new grounds of rejection: the limitation is now taught by the combination of Yang and Bourd. As for the arguments directed to Claims 2-5, 7-8, 10-11, and 13-19, these claims depend, directly or indirectly, from independent Claims 1 and 20, respectively, and Applicant argues nothing beyond those independent claims. The limitations of those claims, in conjunction with the combination, were previously established as explained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-5, 8, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US 20210166441 A1, previously cited), hereinafter referenced as Yang, in view of Bourd et al. (US 20200388022 A1, previously cited), hereinafter referenced as Bourd. Regarding Claim 1, Yang discloses a method (Yang, [0027]: teaches method 110 for motion adaptive rendering) comprising: rendering a current render output while varying a shading rate over portions of the current render output such that pixel values of different portions of the current render output are rendered at different associated shading rates (Yang, [0024]: teaches "a given rendered frame is organized into tiles (regions), and each tile <read on pixel values of different portions of current render output> may be rendered according to tile-specific shading rates <read on pixel values of different portions being rendered at different associated shading rates> in one or more dimensions," where "a shading rate for a tile may be calculated based on motion data (e.g., motion vectors) from pixel flow within the tile (for motion adaptive shading <read on varying a shading rate over portions of current render output>), or on content variation such as luminance and/or color frequency and/or contrast within the tile (for content adaptive shading)"); applying pixel values of the different portions of the current 
render output to an input tensor of one or more trained neural networks to upscale the current render output and/or a sequence of image frames (Yang, [0057]: teaches the weight of a current frame in the exponential averaging temporal filter that is used in Temporal Anti-Aliasing (TAA) "is increased to a predefined value whenever the shading rate in any direction is increased in a screen tile," which "ensures that the displayed result is immediately updated to a clear image (full shading rate) <read on upscale current render output and/or sequence of image frames> when any blurry appearance can no longer be masked by motion"; [0058]: teaches an adaptive de-blocking filter being used to smooth visible boundaries between pixel blocks, where "the adaptive de-blocking filter may receive shading rates <read on applying pixel values of different portions of current render output to input tensor> used in each screen tile as inputs, and may apply smoothing only to known shading pixel block or tile boundaries, rather than all discontinuities in the image"; [0111]: teaches SM 440 comprising L processing cores 550, which includes tensor cores that perform deep learning matrix operations <read on trained neural network>, and M SFUs 552 that includes a texture unit configured to perform texture map filtering operations; Note: it should be noted that a "tensor" is a multidimensional array); wherein [[each portion of the different portions is upscaled by a trained neural network selected from among the one or more trained neural networks based, at least in part, on at least one shading rate applied in rendering the portion.]] However, Yang does not expressly disclose each portion of the different portions is upscaled by a trained neural network selected from among the one or more trained neural networks based, at least in part, on at least one shading rate applied in rendering the portion. 
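The limitation in dispute amounts to a routing step: each rendered portion (tile) is sent to whichever trained network matches the shading rate used to render it. A minimal sketch of that idea, with placeholder functions standing in for the trained networks; the 1x1/2x2 rate labels and all names are illustrative assumptions, not taken from Yang or Bourd:

```python
# Placeholder "networks": a full-rate identity pass and a naive 2x
# nearest-neighbour blow-up standing in for trained-network inference.
def upscaler_full_rate(tile):
    return [[p for p in row] for row in tile]

def upscaler_coarse(tile):
    return [[p for p in row for _ in (0, 1)] for row in tile for _ in (0, 1)]

# Selection table keyed by shading rate: rate -> trained network.
NETWORKS = {"1x1": upscaler_full_rate, "2x2": upscaler_coarse, "4x4": upscaler_coarse}

def upscale_portions(portions):
    """portions: list of (shading_rate, tile) pairs from the render output.
    Each tile is upscaled by the network selected for its shading rate."""
    return [NETWORKS[rate](tile) for rate, tile in portions]
```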
Bourd discloses each portion of the different portions is upscaled by a trained neural network selected from among the one or more trained neural networks based, at least in part, on at least one shading rate applied in rendering the portion (Bourd, [0046]: teaches utilizing a machine learning system <read on trained neural network> to determine a VRS/shading rate, where "the neural network can use machine learning to make decisions based on the predicted quality of the output image, as well as the computational power to render the image"; [0047]: teaches rendering certain portions of an image at a lower resolution to then be upscaled at a higher resolution to save battery life; Note: it should be noted that it is being interpreted that the machine learning system upscales the lower resolution portions of the image). Bourd is analogous art with respect to Yang because they are from the same field of endeavor, namely applying variable shading rates on an image frame. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to apply variable rate shading to small areas of a high resolution image as taught by Bourd into the teaching of Yang. The suggestion for doing so would allow the machine learning unit to determine shading rates for different regions of the image frame, thereby improving rendering efficiency of the rendering pipeline whilst achieving a clean, upscaled image. Therefore, it would have been obvious to combine Bourd with Yang. Regarding Claim 20, it recites the limitations that are similar in scope to Claim 1, but in a computing device. As shown in the rejection, the combination of Yang and Bourd discloses the limitations of Claim 1. Additionally, Yang discloses a computing device (Yang, [0078]: teaches a PPU 300 of an integrated circuit device <read on computing device> as shown in FIG. 
3), comprising: a memory (Yang, [0078]: teaches the PPU 300 being connected to a local memory 304); and one or more processors coupled to the memory to (Yang, [0078]: teaches the PPU 300 being connected to a host processor, where the host processor is connected with local memory 304 via a memory bridge):… Thus, Claim 20 is met by Yang according to the mapping presented in the rejection of Claim 1, given the method corresponds to a computing device.

Regarding Claim 2, the combination of Yang and Bourd discloses the method of Claim 1. Additionally, Yang further discloses applying parameters of one or more previous render outputs to the input tensor (Yang, [0061]: teaches motion data being calculated according to optical flow of color image data between two frames rendered immediately prior to the frame <read on parameters of previous render output>, with motion vectors calculated for each pixel in the frame based on movement detected between the prior two frames <read on applying parameters of previous render output to input tensor>; [0060]: teaches computing motion vectors based on previous and current camera view positions of each pixel using matrix calculations; [0109]: teaches tensor cores being configured to perform matrix operations).

Regarding Claim 3, the combination of Yang and Bourd discloses the method of Claim 1.
Additionally, Yang further discloses applying motion vectors or optical flow parameters derived from one or more previous image frames, or a combination thereof, to the input tensor (Yang, [0061]: teaches motion data being calculated according to optical flow of color image data between two frames rendered immediately prior to the frame, with motion vectors calculated for each pixel in the frame based on movement detected between the prior two frames <read on applying a combination of motion vectors and optical flow parameters to input tensor>; [0060]: teaches computing motion vectors based on previous and current camera view positions of each pixel using matrix calculations; [0109]: teaches tensor cores being configured to perform matrix operations). Regarding Claim 4, the combination of Yang and Bourd discloses the method of Claim 1. Additionally, Yang further discloses applying pixel values of at least one portion to be upscaled by the one or more trained neural networks (Yang, [0132]: teaches the PPU 300 comprising a GPU, where it is configured to "process the graphics primitives to generate a frame buffer (e.g., pixel data for each of the pixels of the display) <read on applying pixel values of one portion to be processed by trained neural network>"; [0148]: teaches the PPU 300 being used for a deep neural network (DNN) <read on trained neural network>) and at least one portion to be upscaled independently of the one or more trained neural networks (Yang, [0132]: teaches the PPU 300 comprising a GPU, where it is configured to "process the graphics primitives to generate a frame buffer (e.g., pixel data for each of the pixels of the display) <read on applying pixel values of one portion to be processed independently by trained neural network>"; [0133]: teaches the vertex and pixel shader programs being executed concurrently <read on processed independently>, which processes different data from the same scene in a pipelined fashion until all of the model data 
for the scene has been rendered to the frame buffer; [0148]: teaches the PPU 300 being used for a deep neural network (DNN) <read on trained neural network>). Regarding Claim 5, the combination of Yang and Bourd discloses the method of Claim 1. Additionally, Yang further discloses wherein the pixel values comprise multi-color channel signal intensity values, image depth parameters or surface normal parameters, or a combination thereof, associated with pixel locations in the current render output (Yang, [0046]: teaches calculating a plurality of pixel motion throughout frame 130 for different pixels, where "a shader is configured to compute world-space positions <read on pixel locations of current render output> for each pixel based on a corresponding depth value <read on image depth parameters> and a view-projection matrix"). Regarding Claim 8, the combination of Yang and Bourd discloses the method of Claim 1. Additionally, Yang further discloses wherein the method further comprises: applying at least some of the pixel values of the portion of the current render output and the shading rate applied in rendering the portion of the current render output to the input tensor (Yang, [0057]: teaches the weight of a current frame in the exponential averaging temporal filter that is used in Temporal Anti-Aliasing (TAA) "is increased to a predefined value whenever the shading rate in any direction is increased in a screen tile," which "ensures that the displayed result is immediately updated to a clear image (full shading rate) when any blurry appearance can no longer be masked by motion"; [0058]: teaches an adaptive de-blocking filter being used to smooth visible boundaries between pixel blocks, where "the adaptive de-blocking filter may receive shading rates <read on applying pixel values of portions of current render output to input tensor> used in each screen tile as inputs, and may apply smoothing only to known shading pixel block or tile boundaries, rather than all 
discontinuities in the image"; [0111]: teaches SM 440 comprising L processing cores 550, which includes tensor cores that perform deep learning matrix operations <read on trained neural network>, and M SFUs 552 that includes a texture unit configured to perform texture map filtering operations), wherein the shading rate varies over the portion of the current render output (Yang, [0067]: teaches variable pixel shading rate varying the shading resolution in texture MIP-level, where "since this form of shading rate can be determined on a per texture-tile basis, there is enough flexibility to vary shading rate adaptively at each visible surface location <read on current render output> and respond to a shading rate determination based on screen-space motion"); and [[upscaling a sub portion of the portion of the current render output having a lowest shading rate prior to applying the pixel values of the portion of the current render output to the input tensor.]] However, Yang does not expressly disclose upscaling a sub portion of the portion of the current render output having a lowest shading rate prior to applying the pixel values of the portion of the current render output to the input tensor. Bourd discloses upscaling a sub portion of the portion of the current render output having a lowest shading rate prior to applying the pixel values of the portion of the current render output to the input tensor (Bourd, [0040]: teaches a neural network applying variable rate shading (VRS) to small areas of a high resolution image <read on upscaling sub portion of portion of current render output>, where "pixels may be shaded in a small area of an image and then upscaled into a larger area"; [0045]: teaches VRS rendering at lower rates <read on lowest shading rate>; [0050]: teaches the machine learning unit taking an image as input <read on current render output to input tensor>). 
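Claim 8's final limitation, for which Bourd is cited, can be pictured as a normalization pass: sub-portions rendered at the lowest (coarsest) shading rate are upscaled first, so every sub-portion reaches the network's input tensor at a uniform resolution. A hedged sketch, with nearest-neighbour scaling standing in for the actual upscaling; the rate table and all names are illustrative assumptions:

```python
# Assumed mapping from shading-rate label to pixels per shaded sample.
RATE_COARSENESS = {"1x1": 1, "2x2": 2, "4x4": 4}

def nearest_upscale(tile, factor):
    """Naive nearest-neighbour placeholder for the pre-upscaling step."""
    return [[p for p in row for _ in range(factor)]
            for row in tile for _ in range(factor)]

def build_input_tensor(sub_portions):
    """sub_portions: list of (shading_rate, tile). Coarsely shaded tiles are
    upscaled before all pixel values are applied to the input tensor."""
    tensor = []
    for rate, tile in sub_portions:
        factor = RATE_COARSENESS[rate]
        tensor.append(nearest_upscale(tile, factor) if factor > 1 else tile)
    return tensor
```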
Bourd is analogous art with respect to Yang because they are from the same field of endeavor, namely applying variable shading rates on an image frame. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to apply variable rate shading to small areas of a high resolution image as taught by Bourd into the teaching of Yang. The suggestion for doing so would allow the machine learning unit to determine shading rates for different regions of the image frame, thereby improving rendering efficiency of the rendering pipeline whilst achieving a clean, upscaled image. Therefore, it would have been obvious to combine Bourd with Yang. Regarding Claim 14, the combination of Yang and Bourd discloses the method of Claim 1. Yang does not expressly disclose the limitations of Claim 14; however, Bourd discloses wherein the current render output and/or the sequence of image frames to be enhanced by the one or more trained neural networks at least by upscaling a spatial resolution of the current render output (Bourd, [0047]: teaches rendering certain portions of an image at a low resolution, where these portions are then upscaled to a higher resolution <read on spatial resolution>). Bourd is analogous art with respect to Yang because they are from the same field of endeavor, namely applying variable shading rates on an image frame. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to apply variable rate shading to small areas of a high resolution image as taught by Bourd into the teaching of Yang. The suggestion for doing so would allow the machine learning unit to determine shading rates for different regions of the image frame, thereby improving rendering efficiency of the rendering pipeline whilst achieving a clean, upscaled image. Therefore, it would have been obvious to combine Bourd with Yang. Claim 7 is rejected under 35 U.S.C. 
103 as being unpatentable over Yang et al. (US 20210166441 A1, previously cited), hereinafter referenced as Yang, in view of Bourd et al. (US 20200388022 A1, previously cited), hereinafter referenced as Bourd as recited in Claim 1 above respectively, and further in view of Fuller et al. (US 20190005714 A1, previously cited), hereinafter referenced as Fuller. Regarding Claim 7, the combination of Yang and Bourd discloses the method of Claim 1. Additionally, Yang further discloses obtaining the shading rate applied in rendering the portion of the current render output [[from metadata stored in a shader core or iterator, or a combination thereof]] (Yang, [0037]: teaches determining shading rates for each tile in the current image <read on obtaining shading rate applied in rendering portion of current render output>); and applying the obtained shading rate applied in rendering the portion of the current render output to the input tensor (Yang, [0038]: teaches the processing unit executing shaders "that perform variable rate shading <read on applying obtained shading rate>, with a shading rate specified individually for each tile or each primitive"; [0132]: teaches the PPU 300 being configured to receive commands that specify shader programs for processing graphics data; [0148]: teaches the PPU 300 being used for DNNs, which includes tensor cores <read on input tensor>). However, the combination of Yang and Bourd does not expressly disclose obtaining the shading rate applied in rendering the portion of the current render output from metadata stored in a shader core or iterator, or a combination thereof. 
Fuller discloses obtaining the shading rate applied in rendering the portion of the current render output from metadata stored in a shader core or iterator, or a combination thereof (Fuller, [0018]: teaches determining characteristics of a previous fragment of a previous image "based on at least one of examining pixel values of the previous fragment (e.g., within the fragment or within the larger frame or other screen-space size) to detect the characteristics, metadata generated during the previous variable rate shading pass for the previous frame, etc."; [0040]: teaches "a compute shader 121 <read on shader core> can generate the SRP values based on one or more characteristics <read on metadata> detected for one or more fragments in a previous frame, and can accordingly generate the coarse map 116 that the rasterizer stage 94 uses to determine shading rates"). Fuller is analogous art with respect to Yang, in view of Bourd because they are from the same field of endeavor, namely applying variable shading rates on an image frame. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to determine the characteristics of an image frame based on previous image frames to generate a coarse map as taught by Fuller into the teaching of Yang, in view of Bourd. The suggestion for doing so would allow the rasterizer to use the generated coarse map to determine the shading rates, thereby enabling the system to understand temporal coherence between frames, which would result in a more stable sequence of image frames. Therefore, it would have been obvious to combine Fuller with Yang, in view of Bourd. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US 20210166441 A1, previously cited), hereinafter referenced as Yang, in view of Bourd et al. 
(US 20200388022 A1, previously cited), hereinafter referenced as Bourd as recited in Claim 1 above respectively, and further in view of Grossman et al. (US 20180047203 A1, previously cited), hereinafter referenced as Grossman. Regarding Claim 10, the combination of Yang and Bourd discloses the method of Claim 1. Additionally, Yang further discloses rendering a subsequent render output while varying a shading rate over portions of the subsequent render output (Yang, [0059]: teaches a rendering engine using a rendering pass for a subsequent second frame <read on rendering subsequent render output>, which uses "the motion vectors generated with the first frame to determine shading rates for the second frame"; [0024]: teaches "a shading rate for a tile may be calculated based on motion data (e.g., motion vectors) from pixel flow within the tile (for motion adaptive shading <read on varying a shading rate over portions of subsequent render output>), or on content variation such as luminance and/or color frequency and/or contrast within the tile (for content adaptive shading)"); applying pixel values of at least one portion of the current render output and pixel values of a corresponding at least one portion of the subsequent render output to an input tensor of at least one of the one or more trained neural networks to generate pixel values of a corresponding portion in a temporally upscaled image frame (Yang, [0057]: teaches the weight of a current frame in the exponential averaging temporal filter that is used in Temporal Anti-Aliasing (TAA) "is increased to a predefined value whenever the shading rate in any direction is increased in a screen tile," which "ensures that the displayed result is immediately updated to a clear image (full shading rate) <read on generate pixel values of corresponding portion in temporally upscaled image frame> when any blurry appearance can no longer be masked by motion"; [0058]: teaches an adaptive de-blocking filter being used to smooth 
visible boundaries between pixel blocks, where "the adaptive de-blocking filter may receive shading rates <read on applying pixel values of different portions of subsequent render output to input tensor> used in each screen tile as inputs, and may apply smoothing only to known shading pixel block or tile boundaries, rather than all discontinuities in the image"; [0111]: teaches SM 440 comprising L processing cores 550, which includes tensor cores that perform deep learning matrix operations <read on trained neural network>, and M SFUs 552 that includes a texture unit configured to perform texture map filtering operations); and [[affecting processing of the pixel values of the at least one portion of the current render output and pixel values of the at least one portion of the subsequent render output by the at least one of the one or more trained neural networks based, at least in part, on a highest shading rate applied in rendering the at least one portion of the current render output.]] However, the combination of Yang and Bourd does not expressly disclose affecting processing of the pixel values of the at least one portion of the current render output and pixel values of the at least one portion of the subsequent render output by the at least one of the one or more trained neural networks based, at least in part, on a highest shading rate applied in rendering the at least one portion of the current render output. 
Grossman discloses affecting processing of the pixel values of the at least one portion of the current render output and pixel values of the at least one portion of the subsequent render output by the at least one of the one or more trained neural networks based, at least in part, on a highest shading rate applied in rendering the at least one portion of the current render output (Grossman, [0028]: teaches "a computer device 10 includes a graphics processing unit (GPU) 12 configured to implement the described aspects of variable rate shading," where "GPU 12 is configured to determine and use different fragment shading rates for shading (i.e. calculating a color for) different fragments <read on pixel values of different portions of current render output> covered by a primitive of an image based on respective shading rate parameters for respective regions <read on shading rate applied in rendering portion of current render output> of the image" such that "GPU 12 can dynamically vary the rate <read on affecting processing of pixel values for portion of current and subsequent render outputs> at which fragment shading is performed on-the-fly during rendering of an image <read on rendering current render output>" based on a variability in level of detail (LOD) within the image; [0028]: further teaches the "GPU 12 can be configured to vary a number of samples (e.g., nSamples, such as color samples) for each pixel of the image based on the respective shading rate parameters <read on highest shading rate> for respective regions of the image"). Grossman is analogous art with respect to Yang, in view of Bourd because they are from the same field of endeavor, namely applying variable shading rates. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement fragment shading rates for different fragments of the image frame as taught by Grossman into the teaching of Yang, in view of Bourd.
The suggestion for doing so would allow for on-the-fly fragment shading of an image frame based on variability in level-of-detail within the image frame, thereby selecting important regions of the image for variable shading rates that results in higher detail while keeping rendering costs low. Therefore, it would have been obvious to combine Grossman with Yang, in view of Bourd. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US 20210166441 A1, previously cited), hereinafter referenced as Yang, in view of Bourd et al. (US 20200388022 A1, previously cited), hereinafter referenced as Bourd, and further in view of Grossman et al. (US 20180047203 A1, previously cited), hereinafter referenced as Grossman as recited in Claim 10 above respectively, and further in view of Gedik et al. (US 20100245372 A1, previously cited), hereinafter referenced as Gedik. Regarding Claim 11, the combination of Yang, Bourd, and Grossman discloses the method of Claim 10. The combination of Yang, Bourd, and Grossman does not expressly disclose the limitations of Claim 11; however, Gedik discloses wherein the current render output and the subsequent render output are rendered according to a first image frame rate (Gedik, [0143]: teaches estimating 3D motion for frame interpolation between frames t and t + 1 <read on current and subsequent render outputs respectively>; [0146]: teaches a system receiving a first video signal at a first frame rate), and the temporally upscaled image frame is in a temporal sequence of image frames according to a second image frame rate that is higher than the first image frame rate (Gedik, [0146]: teaches a system "receiving a first video signal at a first frame rate, calculating an interpolated frames, and then preferably producing as an output a second video signal at a higher frame rate <read on second image frame rate> than the received input video signal"). 
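The frame-rate upscaling Gedik is cited for (a first video signal in, interpolated frames computed, a second signal out at a higher frame rate) can be sketched as follows. Midpoint blending stands in for Gedik's motion-compensated interpolation, and all identifiers are illustrative assumptions:

```python
def interpolate(frame_a, frame_b):
    """Placeholder midpoint interpolation between two frames (flat pixel lists)."""
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

def upscale_frame_rate(frames):
    """Insert one interpolated frame between each adjacent pair, producing an
    output sequence at roughly twice the input frame rate."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.extend([a, interpolate(a, b)])
    out.append(frames[-1])
    return out

seq = upscale_frame_rate([[0.0], [1.0]])  # -> [[0.0], [0.5], [1.0]]
```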
Gedik is analogous art with respect to the combination of Yang, Bourd, and Grossman because they are from the same field of endeavor, namely applying variable shading rates on an image frame. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to calculate and insert interpolated frames between two generated frames as taught by Gedik into the combined teaching of Yang, Bourd, and Grossman. The suggestion for doing so would allow the system to output a video signal at a higher frame rate whilst improving visual quality via bi-directional interpolation of foreground moving objects, thereby yielding desirable results. Therefore, it would have been obvious to combine Gedik with the combination of Yang, Bourd, and Grossman. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US 20210166441 A1, previously cited), hereinafter referenced as Yang, in view of Bourd et al. (US 20200388022 A1, previously cited), hereinafter referenced as Bourd as recited in Claim 1 above respectively, and further in view of Guo (US 20210407183 A1, previously cited). Regarding Claim 13, the combination of Yang and Bourd discloses the method of Claim 1. 
The combination of Yang and Bourd does not expressly disclose the limitations of Claim 13; however, Guo discloses maintaining a plurality of buffers to provide pixel values to input tensors of an associated plurality of trained neural networks (Guo, [0113]: teaches coherency being maintained for data and instructions stored in the various caches 462A-462D <read on buffers>, 456 and system memory 441 via inter-core communication over a coherence bus 464; [0078]: teaches "the graphics multiprocessor 325 also includes multiple sets of graphics or compute execution units (e.g., GPGPU core 336A-336B, tensor core 337A-337B <read on input tensors of associated trained neural networks>, ray-tracing core 338A-338B) and multiple sets of load/store units 340A-340B," where "the execution resource units have a common instruction cache 330, texture <read on provide pixel values> and/or data cache memory 342, and shared memory 346"); and selecting from among the plurality of buffers to load pixel values of respective ones of the one or more of different portions of the current render output based, at least in part, on shading rates applied in rendering the respective ones of the different portions (Guo, [0307]: teaches return buffer state commands 2216 being used to configure a set of return buffers for the respective pipelines to write data, where "the return buffer state 2216 includes selecting the size and number of return buffers to use for a set of pipeline operations <read on select buffer to load pixel values>"; [0341]: teaches implementing a plurality of varied shading rates for different regions of a frame), wherein the associated plurality of trained neural networks to provide pixel values in a spatially upscaled image frame (Guo, [0341]: teaches performing a plurality of varied pixel shading rates in certain regions <read on provided pixel values> for a given frame, where the center region is determined by an encoder <read on associated trained neural network> to be an important region and is therefore shaded at full rate and the corners of the frame are determined to be less important and are therefore shaded at lower rates, thereby resulting in the center having more detail than the corners <read on spatially upscaled image frame> as shown in FIG. 27A). Guo is analogous art with respect to Yang, in view of Bourd because they are from the same field of endeavor, namely applying variable shading rates on an image frame. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to utilize an encoder to determine shading rates for certain regions of a given frame image as taught by Guo into the teaching of Yang, in view of Bourd. The suggestion for doing so would allow the neural network to determine which region should be shaded at full rate and others at lower rates, thereby resulting in an adaptive rendering system that provides a smooth gameplay/viewing experience. Therefore, it would have been obvious to combine Guo with Yang, in view of Bourd.

Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US 20210166441 A1, previously cited), hereinafter referenced as Yang, in view of Bourd et al. (US 20200388022 A1, previously cited), hereinafter referenced as Bourd as recited in Claim 1 above respectively, and further in view of Munkberg et al. (US 20200126192 A1, previously cited), hereinafter referenced as Munkberg.

Regarding Claim 15, the combination of Yang and Bourd discloses the method of Claim 1.
The combination of Yang and Bourd does not expressly disclose the limitations of Claim 15; however, Munkberg discloses wherein the current render output and/or the sequence of image frames to be upscaled by the one or more trained neural networks at least by upscaling a temporal resolution of the sequence of image frames (Munkberg, [0078]: teaches an external state, produced by the temporal adaptive sampling and denoising system 200, "including a reconstructed first rendered image frame that approximates the first rendered image frame <read on current render output and/or sequence of image frames> without artifacts is received by the sample map estimator neural network model 210," where the external state is warped, which "enables improved tracking over time by integrating information associated with changing features over multiple frames in a sequence, producing more temporally stable <read on upscaling temporal resolution> and higher quality <read on enhanced> reconstructed images"). Munkberg is analogous art with respect to Yang, in view of Bourd because they are from the same field of endeavor, namely applying variable shading rates on an image frame. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement temporal adaptive sampling and a denoising system as taught by Munkberg into the teaching of Yang, in view of Bourd. The suggestion for doing so would allow the system to track changing features over multiple frames in a sequence of frame images, thereby yielding more temporally stable image frames at a higher quality with reduced/eliminated unwanted noise artifacts. Therefore, it would have been obvious to combine Munkberg with Yang, in view of Bourd.

Regarding Claim 16, the combination of Yang and Bourd discloses the method of Claim 1.
The combination of Yang and Bourd does not expressly disclose the limitations of Claim 16; however, Munkberg discloses wherein the current render output and/or the sequence of image frames to be enhanced by the one or more trained neural networks at least by denoising a portion of the current render output (Munkberg, [0078]: teaches an external state, produced by the temporal adaptive sampling and denoising system 200, "including a reconstructed first rendered image frame that approximates the first rendered image frame <read on current render output and/or sequence of image frames> without artifacts is received by the sample map estimator neural network model 210," where the external state is warped, which "enables improved tracking over time by integrating information associated with changing features over multiple frames in a sequence, producing more temporally stable and higher quality <read on enhanced> reconstructed images"; [0077]: teaches denoising a frame in a sequence of frames <read on denoising portion of current render output>). Munkberg is analogous art with respect to Yang, in view of Bourd because they are from the same field of endeavor, namely applying variable shading rates on an image frame. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement temporal adaptive sampling and a denoising system as taught by Munkberg into the teaching of Yang, in view of Bourd. The suggestion for doing so would allow the system to track changing features over multiple frames in a sequence of frame images, thereby yielding more temporally stable image frames at a higher quality with reduced/eliminated unwanted noise artifacts. Therefore, it would have been obvious to combine Munkberg with Yang, in view of Bourd.

Claims 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al.
(US 20210166441 A1, previously cited), hereinafter referenced as Yang, in view of Bourd et al. (US 20200388022 A1, previously cited), hereinafter referenced as Bourd as recited in Claim 1 above respectively, and further in view of Liu (CN 115443487 A, previously cited).

Regarding Claim 17, the combination of Yang and Bourd discloses the method of Claim 1. Additionally, Yang further discloses rendering one or more other rendered outputs while varying a shading rate over portions of at least one of the one or more other rendered outputs (Yang, [0024]: teaches "a given rendered frame is organized into tiles (regions), and each tile <read on portions of other render outputs> may be rendered according to tile-specific shading rates in one or more dimensions," where "a shading rate for a tile may be calculated based on motion data (e.g., motion vectors) from pixel flow within the tile (for motion adaptive shading <read on varying a shading rate over portions of other render outputs>), or on content variation such as luminance and/or color frequency and/or contrast within the tile (for content adaptive shading)"), the current render output and the one or more other rendered outputs corresponding with image frames in the sequence of image frames (Yang, [0061]: teaches calculating motion data according to optical flow of color image data <read on image frames> between two frames rendered immediately prior to the frame <read on current and other rendered outputs respectively>); and [[affecting processing of pixel values for at least one of the different portions of the current render output to upscale the current render output and/or the sequence of image frames further based, at least in part, on a shading rate applied in rendering a corresponding portion of the at least one of the one or more other rendered outputs.]] However, the combination of Yang and Bourd does not expressly disclose affecting processing of pixel values for at least one of the different portions of the
current render output to upscale the current render output and/or the sequence of image frames further based, at least in part, on a shading rate applied in rendering a corresponding portion of the at least one of the one or more other rendered outputs. Liu discloses affecting processing of pixel values for at least one of the different portions of the current render output to upscale the current render output and/or the sequence of image frames further based, at least in part, on a shading rate applied in rendering a corresponding portion of the at least one of the one or more other rendered outputs (Liu, [0027]: teaches storing upscaled representations <read on upscale sequence of image frames> includes converting one of the samples of the rendered pixel and/or one of the shading values of the sample to a display pixel at the target resolution; [0029]: teaches the image processing system dividing the image into a plurality of tiles with each tile comprising a subset of rendered pixels of the image, where it performs a pixel processing pass <read on affecting processing of pixel values for different portions of current render output>; [0029]: further teaches determining pixel samples and their respective positions, where shading the rendered pixel at a first resolution provides a shading result for the rendered pixel <read on respective shading rates associated with different portions>, which then stores "an upscaled representation of the rendered pixel in the on-chip tile memory by determining, based on the shading values of the set of samples, a display pixel shading value for each of a plurality of display pixels that overlap the rendered pixel at a target resolution greater than the first resolution <read on upscale current render output>"). Liu is analogous art with respect to Yang because they are from the same field of endeavor, namely applying variable shading rates. 
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to perform anti-aliasing on determined pixels as taught by Liu into the teaching of Yang. The suggestion for doing so would allow the system to render the pixels at a lower resolution before being upscaled to a target higher resolution, thereby improving overall rendering performance. Therefore, it would have been obvious to combine Liu with Yang.

Regarding Claim 18, the combination of Yang and Bourd discloses the method of Claim 1. Additionally, Yang further discloses rendering one or more other rendered outputs while varying a shading rate over portions of at least one of the one or more other rendered outputs (Yang, [0024]: teaches "a given rendered frame is organized into tiles (regions), and each tile <read on portions of other rendered outputs> may be rendered according to tile-specific shading rates in one or more dimensions," where "a shading rate for a tile may be calculated based on motion data (e.g., motion vectors) from pixel flow within the tile (for motion adaptive shading <read on varying a shading rate over portions of other rendered outputs>), or on content variation such as luminance and/or color frequency and/or contrast within the tile (for content adaptive shading)"), the current render output and the one or more other rendered outputs corresponding with image frames in the sequence of image frames (Yang, [0061]: teaches calculating motion data according to optical flow of color image data <read on image frames> between two frames rendered immediately prior to the frame <read on current and other rendered outputs respectively>); computing a quality metric based, at least in part, on shading rates applied in rendering corresponding portions of current rendered output and the at least one of the one or more other rendered outputs (Yang, [0066]: teaches adjusting the number of primary and secondary rays per pixels based on
shading model and quality requirements <read on computing quality metric>, which changes the shading rate per pixel <read on shading rates applied in rendering corresponding portions of current and other rendered outputs>); and [[affecting processing of pixel values for at least one of the different portions of the current render output to upscale the current render output and/or the sequence of image frames further based, at least in part, on the computed quality metric.]] However, the combination of Yang and Bourd does not expressly disclose affecting processing of pixel values for at least one of the different portions of the current render output to upscale the current render output and/or the sequence of image frames further based, at least in part, on the computed quality metric. Liu discloses affecting processing of pixel values for at least one of the different portions of the current render output to upscale the current render output and/or the sequence of image frames further based, at least in part, on the computed quality metric (Liu, [0027]: teaches storing upscaled representations <read on upscale sequence of image frames> includes converting one of the samples of the rendered pixel and/or one of the shading values of the sample to a display pixel at the target resolution; [0029]: teaches the image processing system dividing the image into a plurality of tiles with each tile comprising a subset of rendered pixels of the image, where it performs a pixel processing pass <read on affecting processing of pixel values for different portions of current render output>; [0029]: further teaches determining pixel samples and their respective positions, where shading the rendered pixel at a first resolution provides a shading result for the rendered pixel <read on respective shading rates associated with different portions>, which then stores "an upscaled representation of the rendered pixel in the on-chip tile memory by determining, based on the shading values 
of the set of samples, a display pixel shading value for each of a plurality of display pixels that overlap the rendered pixel at a target resolution greater than the first resolution <read on upscale current render output>"). Liu is analogous art with respect to Yang because they are from the same field of endeavor, namely applying variable shading rates. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to perform anti-aliasing on determined pixels as taught by Liu into the teaching of Yang. The suggestion for doing so would allow the system to render the pixels at a lower resolution before being upscaled to a target higher resolution, thereby improving overall rendering performance. Therefore, it would have been obvious to combine Liu with Yang.

Regarding Claim 19, the combination of Yang and Bourd discloses the method of Claim 1. Additionally, Yang further discloses rendering one or more other rendered outputs while varying a shading rate over portions of at least one of the one or more other rendered outputs (Yang, [0024]: teaches "a given rendered frame is organized into tiles (regions), and each tile <read on portions of other rendered outputs> may be rendered according to tile-specific shading rates in one or more dimensions," where "a shading rate for a tile may be calculated based on motion data (e.g., motion vectors) from pixel flow within the tile (for motion adaptive shading <read on varying a shading rate over portions of other render outputs>), or on content variation such as luminance and/or color frequency and/or contrast within the tile (for content adaptive shading)"), the current render output and the other rendered outputs corresponding with image frames in the sequence of image frames (Yang, [0061]: teaches calculating motion data according to optical flow of color image data <read on image frames> between two frames rendered immediately prior to the frame <read on
current and other rendered outputs respectively>); and applying pixel values of at least one portion of the current render output and pixel values of a corresponding at least one portion of the at least one of the one or more other rendered outputs to an input tensor of at least one of the one or more trained neural networks to generate pixel values of a corresponding portion in a temporally upscaled image frame (Yang, [0057]: teaches the weight of a current frame in the exponential averaging temporal filter that is used in Temporal Anti-Aliasing (TAA) "is increased to a predefined value whenever the shading rate in any direction is increased in a screen tile," which "ensures that the displayed result is immediately updated to a clear image (full shading rate) <read on generate pixel values of corresponding portion in temporally upscaled image frame> when any blurry appearance can no longer be masked by motion"; [0058]: teaches an adaptive de-blocking filter being used to smooth visible boundaries between pixel blocks, where "the adaptive de-blocking filter may receive shading rates <read on applying pixel values of portion of current render output to input tensor> used in each screen tile as inputs, and may apply smoothing only to known shading pixel block or tile boundaries, rather than all discontinuities in the image"; [0111]: teaches SM 440 comprising L processing cores 550, which includes tensor cores that perform deep learning matrix operations <read on trained neural network>, and M SFUs 552 that includes a texture unit configured to perform texture map filtering operations); and [[affecting generation of the pixel values of the corresponding portion in the temporally upscaled image frame based, at least in part, on a shading rate applied in rendering the at least one portion of the current render output and a shading rate applied in rendering the corresponding at least one portion of the at least one of the other rendered outputs.]] However, the 
combination of Yang and Bourd does not expressly disclose affecting generation of the pixel values of the corresponding portion in the temporally upscaled image frame based, at least in part, on a shading rate applied in rendering the at least one portion of the current render output and a shading rate applied in rendering the corresponding at least one portion of the at least one of the other rendered outputs. Liu discloses affecting generation of the pixel values of the corresponding portion in the temporally upscaled image frame based, at least in part, on a shading rate applied in rendering the at least one portion of the current render output and a shading rate applied in rendering the corresponding at least one portion of the at least one of the other rendered outputs (Liu, [0027]: teaches storing upscaled representations <read on upscale sequence of image frames> includes converting one of the samples of the rendered pixel and/or one of the shading values of the sample to a display pixel at the target resolution; [0029]: teaches the image processing system dividing the image into a plurality of tiles with each tile comprising a subset of rendered pixels of the image, where it performs a pixel processing pass <read on affecting processing of pixel values for different portions of current render output>; [0029]: further teaches determining pixel samples and their respective positions, where shading the rendered pixel at a first resolution provides a shading result for the rendered pixel <read on respective shading rates associated with different portions>, which then stores "an upscaled representation of the rendered pixel in the on-chip tile memory by determining, based on the shading values of the set of samples, a display pixel shading value for each of a plurality of display pixels that overlap the rendered pixel at a target resolution greater than the first resolution <read on upscale current render output>"). 
Liu is analogous art with respect to Yang because they are from the same field of endeavor, namely applying variable shading rates. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to perform anti-aliasing on determined pixels as taught by Liu into the teaching of Yang. The suggestion for doing so would allow the system to render the pixels at a lower resolution before being upscaled to a target higher resolution, thereby improving overall rendering performance. Therefore, it would have been obvious to combine Liu with Yang.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Croxford et al. (US 20200410740 A1) discloses a graphics processing system that generates "space-warped" frames; and Nevraev et al. (US 20190172257 A1) discloses a GPU that includes a flexible, dynamic, application-directed mechanism for varying fragment shading rates.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARL TRUONG whose telephone number is (703)756-5915. The examiner can normally be reached 10:30 AM - 7:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.D.T./Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614
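The limitation driving this round, upscaling each portion of a render output with a trained network selected by the shading rate applied in rendering that portion (with per-rate buffers, per Claim 13), can be illustrated with a toy sketch. This is not the applicant's or any cited reference's implementation; the rate labels, the 2x/4x scale factors, and the nearest-neighbor stand-ins for trained networks are all assumptions for illustration:

```python
# Toy sketch of the claimed dispatch: one buffer and one "network" per
# shading rate; each tile is routed by the rate applied when it was rendered.
# The nearest-neighbor upscalers below stand in for trained neural networks.

def make_upscaler(scale):
    """Return a toy upscaler that repeats each pixel scale x scale times."""
    def upscale(tile):
        out = []
        for row in tile:
            expanded = [p for p in row for _ in range(scale)]
            out.extend(list(expanded) for _ in range(scale))
        return out
    return upscale

# Assumed mapping: coarser shading rates need stronger upscaling to reach
# the same target resolution.
UPSCALERS = {
    "1x1": make_upscaler(2),  # full-rate tiles
    "2x2": make_upscaler(4),  # quarter-rate tiles
}

def upscale_frame(tiles):
    """tiles: iterable of (shading_rate, tile) pairs.

    Buffers pixel values per shading rate (cf. Claim 13's plurality of
    buffers), then feeds each buffer to its associated upscaler.
    """
    buffers = {rate: [] for rate in UPSCALERS}
    for rate, tile in tiles:
        buffers[rate].append(tile)  # select buffer by shading rate
    return {rate: [UPSCALERS[rate](t) for t in buf]
            for rate, buf in buffers.items()}

result = upscale_frame([("1x1", [[1, 2], [3, 4]]), ("2x2", [[5]])])
print(result["1x1"][0])  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

A real implementation would replace the stand-in upscalers with trained models operating on GPU tensors; the point here is only the rate-keyed buffering and per-rate dispatch that distinguishes the claim from a single uniform upscaler.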

Prosecution Timeline

Sep 29, 2023
Application Filed
Jul 14, 2025
Non-Final Rejection — §103
Sep 11, 2025
Response Filed
Oct 01, 2025
Final Rejection — §103
Nov 26, 2025
Response after Non-Final Action
Dec 30, 2025
Request for Continued Examination
Jan 17, 2026
Response after Non-Final Action
Mar 09, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573149
DATA PROCESSING METHOD AND APPARATUS, DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant · Granted Mar 10, 2026
Patent 12561875
ANIMATION FRAME DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant · Granted Feb 24, 2026
Patent 12494013
AUTODECODING LATENT 3D DIFFUSION MODELS
2y 5m to grant · Granted Dec 09, 2025
Patent 12456258
SYSTEMS AND METHODS FOR GENERATING A SHADOW MESH
2y 5m to grant · Granted Oct 28, 2025
Patent 12444020
FLEXIBLE IMAGE ASPECT RATIO USING MACHINE LEARNING
2y 5m to grant · Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
52%
Grant Probability
83%
With Interview (+31.0%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
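The projection figures above are mutually consistent under a simple additive model: the 52% base grant probability plus the 31-point interview lift gives the 83% with-interview figure. A minimal sketch of that arithmetic, assuming the lift is additive in percentage points (the page's numbers suggest this but do not state it):

```python
def with_interview(base_pct, lift_pct):
    """Combine a base grant probability with an interview lift, capped at 100%."""
    return min(base_pct + lift_pct, 100.0)

print(with_interview(52.0, 31.0))  # 83.0
```

If the lift were instead multiplicative or conditioned on case mix, the combined figure would differ; the dashboard does not disclose its exact model.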
