Prosecution Insights
Last updated: April 19, 2026
Application No. 18/882,497

SHADING METHOD, SHADING APPARATUS, AND ELECTRONIC DEVICE

Status: Non-Final OA (§103)
Filed: Sep 11, 2024
Examiner: PARK, HYORIM NMN
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 100%, above average (1 granted / 1 resolved; +38.0% vs TC avg)
Interview Lift: +100.0%, strong (resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 9 currently pending
Career History: 10 total applications across all art units

Statute-Specific Performance

§101: 4.0% (-36.0% vs TC avg)
§103: 60.0% (+20.0% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 1 resolved case.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/05/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: 400 in FIG. 4, 500 in FIG. 5, 600 in FIG. 6, 700 in FIG. 7, 800 in FIG. 8, 900 in FIG. 9, 1200 in FIG. 12, 1400 in FIG. 14A and FIG. 14B, 1500 in FIG. 15. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-9, and 12-17 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. ("Visually lossless content and motion adaptive shading in games." Proceedings of the ACM on Computer Graphics and Interactive Techniques 2.1 (2019): 1-19.) (hereinafter referred to as Yang) in view of Dagani et al. (US 11763521 B2) (hereinafter referred to as Dagani).

Regarding claim 1, Yang discloses A shading method, comprising: (Abstract, "We present a technique that adaptively adjusts the shading rate") obtaining rendering information of an image, wherein the rendering information comprises a main scene rendering texture, camera information, model view projection (MVP) matrix information of a rendering object, (5 IMPLEMENTATION, “It computes a shading-rate texture that is later used by the main scene rasterization”; 5.2 Motion Adaptation, “we compute its location in previous frame by first reconstructing its clip-space coordinate using screen coordinate and depth, and then back-projecting it to the previous frame using the camera view-projection matrices from the previous and the current frame.
Note that this computation only considers camera motion, which typically applies to the majority of the screen.”) the image comprises N image regions, each of the N image regions comprises M pixels, N is a positive integer greater than or equal to 1, and M is a positive integer greater than or equal to 1; (3.1 Image Error with Half-Rate Shading, “across the entire image tile”; Fig. 3; 5.1 Content Adaptation, “For each 16 × 16 pixel tile”) [Image: media_image1.png] determining a first guide image based on the main scene rendering texture, the camera information, and the MVP matrix information of the rendering object; (4.1 Diminished Error Under Motion Blur, “we then determine the shading rate”; 5 IMPLEMENTATION, “It computes a shading-rate texture that is later used by the main scene rasterization”; 5.2 Motion Adaptation, “we compute its location in previous frame by first reconstructing its clip-space coordinate using screen coordinate and depth, and then back-projecting it to the previous frame using the camera view-projection matrices from the previous and the current frame. Note that this computation only considers camera motion, which typically applies to the majority of the screen.”) determining a third guide image based on the first guide image and the second guide image, wherein the third guide image comprises a lower-rate shading region of the third guide image or a higher-rate shading region of the third guide image; and (Abstract; “We determine per-screen-tile shading rate by testing an error estimate against a perceptually-corrected just-noticeable difference threshold. Our design features an effective and efficient error estimate using spatial and frequency analysis of half and quarter rate shading. We also study the effect of motion in reducing perceived error, a consequence of display-persistence and/or motion blur effects.
Our implementation uses the computed per-tile shading rate with variable rate shading (a recent GPU feature) to lower shading cost.”) shading the third guide image. (Fig. 1) [Image: media_image2.png]

However, Yang does not explicitly disclose and a user interface (UI) rendering texture, and determining a second guide image based on the UI rendering texture;

Dagani more explicitly teaches, in the context of a shading method, and a user interface (UI) rendering texture, (Claim 1, “detecting, by the GPU, user interface (UI) content in a draw call of an application”) determining a second guide image based on the UI rendering texture; (Claim 1, “generating, by the GPU, a variable-rate shader lookup map based on at least one location of detected UI content in the draw call”)

As both Yang and Dagani are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include and a user interface (UI) rendering texture, and determining a second guide image based on the UI rendering texture, in the shading method of Yang according to the teaching of Dagani, in order to improve performance without sacrificing visual quality (Abstract of Yang).

Regarding claim 4, Yang in view of Dagani further teaches The method according to claim 1, wherein the determining a first guide image based on the main scene rendering texture, the camera information, and the MVP matrix information of the rendering object comprises: (see supra rejection of claim 1) determining a first reference value based on a main scene rendering texture of a first image region, wherein the N image regions comprise the first image region; (3.2 Frequency Domain Analysis of Yang, “Note that the box filter B2 is exactly the one used in image formation with half-rate shading.
Transforming Eq.6 to the frequency domain and substituting into Eq.4, we have: … Eq.7 is important because it establishes a connection between the error estimator in spatial domain and the frequency response of the box filter implied by half-rate shading.”; [Image: media_image3.png] 5.1 Content Adaptation of Yang, “we bind the final image from the previous frame as a texture”; 3.1 Image Error with Half-Rate Shading of Yang, “We define an error term between I and IH across the entire image tile”) determining a second reference value based on camera information of the first image region and MVP matrix information of a rendering object in the first image region; (4.1 Diminished Error Under Motion Blur of Yang, “we define a motion-compensated error term in the frequency domain as: where F(Bv) is the Fourier transform of the motion blur filter. F(Bv) depends solely on the velocity v.”; 5.2 Motion Adaptation of Yang, “we compute its location in previous frame by first reconstructing its clip-space coordinate using screen coordinate and depth, and then back-projecting it to the previous frame using the camera view-projection matrices from the previous and the current frame. Note that this computation only considers camera motion, which typically applies to the majority of the screen.”) determining a third reference value based on the first reference value and the second reference value; and (4.1 Diminished Error Under Motion Blur of Yang, “Replacing the error terms in Eq.16 by the motion influenced ones, we then determine the shading rate using”; Eq.22 of Yang) determining the first guide image based on the third reference value and a first threshold, wherein the first threshold is determined based on luminance values of M pixels in the first image region.
(Eq.22 of Yang; [Image: media_image4.png] 4.1 Diminished Error Under Motion Blur of Yang, “we then determine the shading rate”; 3.4 Shading Rate Adaptation with a Perceptually-Corrected Threshold of Yang, “we define a Just-Noticeable Difference (JND) threshold as: ... where Iavg is the average (background) luma in the image tile… l is the environment luminance that affects the sensitivity especially on dark ranges.”)

Regarding claim 5, Yang in view of Dagani further teaches The method according to claim 4, wherein the determining the first guide image based on the third reference value and a first threshold comprises: (see supra rejection of claim 4) when the third reference value is greater than or equal to the first threshold, determining that the first image region is a higher-rate shading region of the first guide image; or when the third reference value is less than the first threshold, determining that the first image region is a lower-rate shading region of the first guide image. (4.1 Diminished Error Under Motion Blur of Yang, “Replacing the error terms in Eq.16 by the motion influenced ones, we then determine the shading rate using”; Eq.22 of Yang) [Image: media_image5.png]

Regarding claim 6, Yang in view of Dagani further teaches The method according to claim 4, wherein the determining a first reference value based on a main scene rendering texture of a first image region comprises: (see supra rejection of claim 4) determining frequency domain information in a horizontal direction and frequency domain information in a vertical direction based on the main scene rendering texture of the first image region; and (3.2 Frequency Domain Analysis of Yang, “We refer to D as the differencing filter.
It is a high pass filter that extracts the high frequency contents of the image.”; 3 CONTENT ADAPTIVE SHADING of Yang, “The following analysis assumes a 1D image slice, although it can be trivially extended to 2D by computing the horizontal and vertical error estimates and shading rates separately.”; 3.4 Shading Rate Adaptation with a Perceptually-Corrected Threshold of Yang, “For 2D image tiles, we compute the detection filter in both horizontal and vertical directions, and determine the X and Y shading rate independently.”) determining the first reference value based on the frequency domain information in the horizontal direction and the frequency domain information in the vertical direction. (3.2 Frequency Domain Analysis of Yang, “Eq.7 is important because it establishes a connection between the error estimator in spatial domain and the frequency response of the box filter implied by half-rate shading.”; Eq.7; [Image: media_image6.png] 3 CONTENT ADAPTIVE SHADING of Yang, “The following analysis assumes a 1D image slice, although it can be trivially extended to 2D by computing the horizontal and vertical error estimates and shading rates separately.”; 3.4 Shading Rate Adaptation with a Perceptually-Corrected Threshold of Yang, “For 2D image tiles, we compute the detection filter in both horizontal and vertical directions, and determine the X and Y shading rate independently.”)

Regarding claim 7, Yang in view of Dagani further teaches The method according to claim 4, wherein the determining a second reference value based on camera information of the first image region and MVP matrix information of a rendering object in the first image region comprises: (see supra rejection of claim 4) determining motion rate information based on the camera information of the first image region and the MVP matrix information of the rendering object in the first image region; and determining the second reference value based on the motion rate information.
(4.1 Diminished Error Under Motion Blur of Yang, “we define a motion-compensated error term in the frequency domain as: where F(Bv) is the Fourier transform of the motion blur filter. F(Bv) depends solely on the velocity v.”; 5.2 Motion Adaptation of Yang, “we compute its location in previous frame by first reconstructing its clip-space coordinate using screen coordinate and depth, and then back-projecting it to the previous frame using the camera view-projection matrices from the previous and the current frame. Note that this computation only considers camera motion, which typically applies to the majority of the screen.”)

Regarding claim 8, Yang in view of Dagani further teaches The method according to claim 1, wherein the determining a second guide image based on the UI rendering texture comprises: (see supra rejection of claim 1) determining a fourth reference value based on a UI rendering texture of a first image region; and (claim 1 of Dagani, “detecting, by the GPU, user interface (UI) content in a draw call of an application”) when the fourth reference value is greater than a second threshold, marking the first image region as a UI shield region of the second guide image.
(claim 4 of Dagani, “wherein the at least one location in the 3D content comprises a location having a luminance value that is greater than the predetermined luminance-value threshold or a luminance spatial-frequency value that is greater than the predetermined luminance spatial-frequency threshold.”)

As both Yang and Dagani are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include determining a fourth reference value based on a UI rendering texture of a first image region, and, when the fourth reference value is greater than a second threshold, marking the first image region as a UI shield region of the second guide image, in the shading method of Yang according to the teaching of Dagani, in order to improve performance without sacrificing visual quality (Abstract of Yang).

Regarding claims 9 and 17, similar reasoning as discussed in claim 1 is applied. Regarding claim 12, similar reasoning as discussed in claim 4 is applied. Regarding claim 13, similar reasoning as discussed in claim 5 is applied. Regarding claim 14, similar reasoning as discussed in claim 6 is applied. Regarding claim 15, similar reasoning as discussed in claim 7 is applied. Regarding claim 16, similar reasoning as discussed in claim 8 is applied.

Claims 2-3, 10-11, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. ("Visually lossless content and motion adaptive shading in games." Proceedings of the ACM on Computer Graphics and Interactive Techniques 2.1 (2019): 1-19.) (hereinafter referred to as Yang) in view of Dagani et al. (US 11763521 B2) (hereinafter referred to as Dagani), and further in view of Yang et al. (US 10930022 B2) (hereinafter referred to as Yang2).
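The claim mappings above turn on Yang's per-tile test of an error estimate against a luminance-corrected just-noticeable-difference (JND) threshold, combined with Dagani's masking of detected UI regions. A minimal sketch of that combination follows; the function name, the crude half-rate error metric, and the `jnd_scale` constant are assumptions of this sketch, not taken from Yang, Dagani, or the claims:

```python
import numpy as np

def select_tile_shading_rates(frame, ui_mask, jnd_scale=0.05, tile=16):
    """Per-tile variable-rate shading decision in the style the Office
    action attributes to Yang (error estimate vs. a luma-scaled JND
    threshold), with Dagani-style UI masking. Illustrative only."""
    h, w = frame.shape
    rates = np.ones((h // tile, w // tile), dtype=int)  # 1 = full rate
    for ty in range(h // tile):
        for tx in range(w // tile):
            t = frame[ty*tile:(ty+1)*tile, tx*tile:(tx+1)*tile]
            # Stand-in for Yang's half-rate error estimate: compare the
            # tile with a 2x2 box-filtered (half-rate) reconstruction.
            half = t.reshape(tile//2, 2, tile//2, 2).mean(axis=(1, 3))
            recon = np.repeat(np.repeat(half, 2, axis=0), 2, axis=1)
            err = np.abs(t - recon).mean()
            # JND threshold grows with average tile luma (perceptual correction).
            jnd = jnd_scale * (t.mean() + 1e-3)
            ui_tile = ui_mask[ty*tile:(ty+1)*tile, tx*tile:(tx+1)*tile]
            if ui_tile.any():
                rates[ty, tx] = 1      # detected UI regions stay full rate
            elif err < jnd:
                rates[ty, tx] = 2      # error unnoticeable: shade at half rate
    return rates
```

On a flat image the half-rate error is zero, so non-UI tiles drop to half rate while any tile overlapping the UI mask stays at full rate.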
Regarding claim 2, Yang in view of Dagani further teaches the method according to claim 1, wherein the shading the third guide image comprises: (see supra rejection of claim 1) shading the higher-rate shading region of the third guide image; and supplementing the lower-rate shading region of the third guide image (5.3 Applying Adjusted Shading Rate of Yang, “The computed horizontal and vertical shading rate of the tile is converted into one of the defined shading rate patterns (Sec. 2.1), and the result is saved into the shading rate texture. After that, and before launching the shading passes, we bind the shading rate texture to the pipeline and enable VRS.”; 2.1 Variable-Rate Shading of Yang, “We implemented our adaptive shading algorithm on NVIDIA’s Turing GPUs. In Fig.2 we illustrate how VRS works on Turing. For each 16x16 tile of screen-space samples, the shading rate can be selected from 1×1, 1×2, 2×1, 2×2, 2×4, 4×2 and 4×4 samples per shade. Rate is specified using a shading rate texture that stores a byte per 16 × 16 sample tile to specify the shading rate.”)

However, Yang in view of Dagani does not explicitly teach according to an interpolation algorithm. Yang2 more explicitly teaches, in the context of a shading method, according to an interpolation algorithm. (Col 26, Lines 56-60 of Yang2, “The fragment shading stage 670 may generate pixel data (e.g., color values) for the fragment such as by performing lighting operations or sampling texture maps using interpolated texture coordinates for the fragment.”)

As Yang, Dagani and Yang2 are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to supplement according to an interpolation algorithm, in the shading method of Yang in view of Dagani according to the teaching of Yang2, in order to improve performance without sacrificing visual quality (Abstract of Yang).
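Claim 2's "interpolation algorithm" is not tied to a particular interpolant. A generic neighbor-averaging sketch of supplementing unshaded pixels in a lower-rate region follows; the function and its mask convention are assumptions of this sketch, not Yang2's fragment-shader interpolation:

```python
import numpy as np

def supplement_by_interpolation(shaded, shaded_mask):
    """Fill pixels skipped in a lower-rate region by averaging their
    shaded 4-neighbors. A generic stand-in for the claimed
    'interpolation algorithm'; illustrative only."""
    out = shaded.copy()
    h, w = shaded.shape
    for y in range(h):
        for x in range(w):
            if shaded_mask[y, x]:
                continue  # already shaded, keep as-is
            neigh = [shaded[ny, nx]
                     for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                     if 0 <= ny < h and 0 <= nx < w and shaded_mask[ny, nx]]
            if neigh:
                out[y, x] = sum(neigh) / len(neigh)
    return out
```

With a checkerboard shading mask, every skipped pixel has at least one shaded 4-neighbor, so the whole region is filled in one pass.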
Regarding claim 3, Yang in view of Dagani, and further in view of Yang2, teaches The method according to claim 2, wherein the supplementing the lower-rate shading region of the third guide image according to an interpolation algorithm comprises: (see supra rejection of claim 2) Yang in view of Dagani and Yang2 teaches generating a depth checkerboard in the lower-rate shading region of the third guide image, wherein the depth checkerboard comprises a first region or a second region, and a depth of the first region is different from a depth of the second region; (5 IMPLEMENTATION of Yang, “The main algorithm can be implemented in one or a few compute passes that run at the start of the frame (right after a depth pre-pass, if it exists)”; Col 9 Lines 60-62 of Yang2, “along with a rendered depth buffer of the current frame, which is commonly generated at the beginning of the frame by a depth-only pass.”) shading the first region; and supplementing the second region according to the interpolation algorithm (5.3 Applying Adjusted Shading Rate of Yang, “The computed horizontal and vertical shading rate of the tile is converted into one of the defined shading rate patterns (Sec. 2.1), and the result is saved into the shading rate texture. After that, and before launching the shading passes, we bind the shading rate texture to the pipeline and enable VRS.”; 2.1 Variable-Rate Shading of Yang, “We implemented our adaptive shading algorithm on NVIDIA’s Turing GPUs. In Fig.2 we illustrate how VRS works on Turing. For each 16x16 tile of screen-space samples, the shading rate can be selected from 1×1, 1×2, 2×1, 2×2, 2×4, 4×2 and 4×4 samples per shade. Rate is specified using a shading rate texture that stores a byte per 16 × 16 sample tile to specify the shading rate.”; Col 26, Lines 56-60 of Yang2, “The fragment shading stage 670 may generate pixel data (e.g., color values) for the fragment such as by performing lighting operations or sampling texture maps using interpolated texture coordinates for the fragment.”)

Regarding claims 10 and 18, similar reasoning as discussed in claim 2 is applied. Regarding claims 11 and 19, similar reasoning as discussed in claim 3 is applied.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Patent 11,804,008 B2 (addressing low-rate shading).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hyorim Park whose telephone number is (571) 272-3859. The examiner can normally be reached Monday - Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HP/
Examiner, Art Unit 2619

/JASON CHAN/
Supervisory Patent Examiner, Art Unit 2619
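The camera-motion reprojection from Yang Sec. 5.2, cited throughout the rejections, reconstructs a sample's clip-space position from screen coordinates and depth, then back-projects it with the previous frame's view-projection matrix. A sketch under column-vector and [-1, 1] NDC conventions (both assumptions of this sketch; engines differ):

```python
import numpy as np

def backproject_to_previous_frame(x, y, depth, inv_vp_curr, vp_prev, width, height):
    """Map a screen-space sample to its previous-frame screen position
    using only the camera view-projection matrices (camera motion only,
    as in Yang Sec. 5.2). Conventions are assumptions of this sketch."""
    # Screen -> normalized device coordinates (depth assumed already in NDC).
    ndc = np.array([2.0 * x / width - 1.0, 2.0 * y / height - 1.0, depth, 1.0])
    # NDC -> world space via the inverse of the current view-projection matrix.
    world = inv_vp_curr @ ndc
    world /= world[3]
    # World -> previous frame's clip space, then back to screen coordinates.
    prev_clip = vp_prev @ world
    prev_ndc = prev_clip[:3] / prev_clip[3]
    return ((prev_ndc[0] + 1.0) * 0.5 * width,
            (prev_ndc[1] + 1.0) * 0.5 * height)
```

With identity matrices (a static camera) a pixel maps back to itself, which is a quick sanity check on the conventions.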

Prosecution Timeline

Sep 11, 2024: Application Filed
Mar 02, 2026: Non-Final Rejection, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
