Prosecution Insights
Last updated: April 19, 2026
Application No. 18/583,884

METHOD FOR PERFORMING AUTOMATIC ACTIVATION CONTROL REGARDING VARIABLE RATE SHADING, AND ASSOCIATED APPARATUS

Status: Non-Final OA (§103)
Filed: Feb 22, 2024
Examiner: YANG, ANDREW GUS
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: MediaTek Inc.
OA Round: 1 (Non-Final)

Grant Probability: 69% (Favorable)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 10m
Grant Probability With Interview: 77%

Examiner Intelligence

Career Allow Rate: 69% (above average; 384 granted / 558 resolved; +6.8% vs TC avg)
Interview Lift: +8.3% on resolved cases with interview (moderate lift)
Avg Prosecution (typical timeline): 2y 10m
Currently Pending: 25
Total Applications: 583 (across all art units)

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Based on career data from 558 resolved cases; deltas are measured against the Tech Center average estimate.
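The headline examiner figures above follow directly from the raw counts; as a quick sanity check:

```python
# Reproduce the headline allow-rate figures from the raw counts shown above.
granted, resolved = 384, 558

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")   # 68.8%, displayed rounded as 69%

# The "+6.8% vs TC avg" delta implies a Tech Center baseline of roughly:
tc_average = allow_rate - 6.8
print(f"Implied TC average: {tc_average:.1f}%")  # 62.0%
```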

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 5-8, and 17-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over du Bois et al. (U.S. PGPUB 20240282040) in view of Suzuki (U.S. Patent No. 6,891,970).

With respect to claim 1, du Bois et al. disclose a method for performing automatic activation control regarding variable rate shading (VRS), the method being applied to a processing circuit within an electronic device (paragraph 202, The method 1700 may generally be implemented in a UMD such as, for example, the user mode graphics driver 1026 (FIG. 10), a host processor such as, for example, the processor 1030 (FIG. 10), the logic 1172 (FIGS. 11B and 11C) and/or the logic 1174 (FIGS. 11B and 11C), already discussed), the method comprising: utilizing a rendering classifier within the processing circuit to intercept at least one set of original graphic commands on a command path (paragraph 203, Illustrated processing block 1702 provides for intercepting one or more commands to set a first shading rate for offscreen regions of a scene, wherein the scene is associated with a plurality of graphics processors and a plurality of displays); utilizing the rendering classifier to classify the rendering corresponding to the at least one set of original graphic commands into at least one predetermined rendering type among multiple predetermined rendering types according to the at least one rendering property (paragraph 204, block 1704 identifies, on the per-graphics processor basis and the per-display basis, guard bands between the offscreen regions and onscreen regions, wherein the onscreen regions and the guard bands are rendered at the first (e.g., relatively high) shading rate. Additionally, block 1704 may identify the guard bands based on shading rate tile data, block 1704 further identifies the offscreen regions based on dispatch thread ID data, offscreen or onscreen type, paragraph 209, dispatch thread ID commands), in order to determine at least one shading rate corresponding to the at least one predetermined rendering type for the rendering corresponding to the at least one set of original graphic commands (paragraph 204, block 1704 identifies, on the per-graphics processor basis and the per-display basis, guard bands between the offscreen regions and onscreen regions, wherein the onscreen regions and the guard bands are rendered at the first (e.g., relatively high) shading rate); and utilizing a shading rate controller within the processing circuit to control the processing circuit to selectively activate a VRS function of the processing circuit, for rendering at the at least one shading rate corresponding to the at least one predetermined rendering type (paragraph 204, Block 1706 executes one or more compute shaders based on the second (e.g., relatively low) shading rate. The method 1700 therefore enhances performance at least to the extent that shading/overriding the offscreen regions at a lower shading rate reduces latency and/or power consumption). However, du Bois et al. do not expressly disclose to obtain at least one rendering property from the at least one set of original graphic commands, for classifying rendering corresponding to the at least one set of original graphic commands. Suzuki, who also deals with rendering commands, discloses a method to obtain at least one rendering property from the at least one set of original graphic commands, for classifying rendering corresponding to the at least one set of original graphic commands (column 4, lines 53-62, One rendering command is received in step S200, and is interpreted in step S210 to discriminate the type of object indicated by the rendering command. If the rendering command of interest is not an image rendering command, i.e., it is a text rendering command, graphics rendering command, or the like, it is determined in step S220 that the object is not a photo image, and the flow jumps to step S260. On the other hand, if the rendering command is an image rendering command, it is determined that the object is a photo image, and the flow advances to step S240). Du Bois et al. and Suzuki are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply the method to obtain at least one rendering property from the at least one set of original graphic commands, for classifying rendering corresponding to the at least one set of original graphic commands, as taught by Suzuki, to the du Bois et al. system, because an object is rasterized on the page memories on the basis of the rendering command (column 6, lines 13-18 of Suzuki), thus rendering an object based on the type associated with the rendering command.

With respect to claim 5, du Bois et al. as modified by Suzuki disclose the method of claim 1, wherein the at least one set of original graphic commands comprise a first set of original graphic commands; the rendering classifier is arranged to intercept the first set of original graphic commands on the command path (du Bois et al.: paragraph 203, Illustrated processing block 1702 provides for intercepting one or more commands to set a first shading rate for offscreen regions of a scene, wherein the scene is associated with a plurality of graphics processors and a plurality of displays) to obtain a first rendering property from the first set of original graphic commands, for classifying rendering corresponding to the first set of original graphic commands (Suzuki: column 4, lines 53-62, One rendering command is received in step S200, and is interpreted in step S210 to discriminate the type of object indicated by the rendering command. If the rendering command of interest is not an image rendering command, i.e., it is a text rendering command, graphics rendering command, or the like, it is determined in step S220 that the object is not a photo image, and the flow jumps to step S260. On the other hand, if the rendering command is an image rendering command, it is determined that the object is a photo image, and the flow advances to step S240); and the rendering classifier is arranged to classify the rendering corresponding to the first set of original graphic commands into a first predetermined rendering type among the multiple predetermined rendering types according to the first rendering property (du Bois et al.: paragraph 204, block 1704 further identifies the offscreen regions based on dispatch thread ID data, offscreen or onscreen type, paragraph 209, dispatch thread ID commands), in order to determine a first shading rate corresponding to the first predetermined rendering type for the rendering corresponding to the first set of original graphic commands (du Bois et al.: paragraph 204, block 1704 identifies, on the per-graphics processor basis and the per-display basis, guard bands between the offscreen regions and onscreen regions, wherein the onscreen regions and the guard bands are rendered at the first (e.g., relatively high) shading rate).

With respect to claim 6, du Bois et al. as modified by Suzuki disclose the method of claim 5, wherein the shading rate controller is arranged to control the processing circuit to activate the VRS function of the processing circuit, for rendering at the first shading rate corresponding to the first predetermined rendering type (du Bois et al.: paragraph 204, Block 1706 executes one or more compute shaders based on the second (e.g., relatively low) shading rate).

With respect to claim 7, du Bois et al. as modified by Suzuki disclose the method of claim 5, wherein the at least one set of original graphic commands further comprise a second set of original graphic commands; the rendering classifier is arranged to intercept the second set of original graphic commands on the command path (du Bois et al.: paragraph 203, Illustrated processing block 1702 provides for intercepting one or more commands to set a first shading rate for offscreen regions of a scene, wherein the scene is associated with a plurality of graphics processors and a plurality of displays, as applied to a second command) to obtain a second rendering property from the second set of original graphic commands, for classifying rendering corresponding to the second set of original graphic commands (Suzuki: column 4, lines 53-62, One rendering command is received in step S200, and is interpreted in step S210 to discriminate the type of object indicated by the rendering command. If the rendering command of interest is not an image rendering command, i.e., it is a text rendering command, graphics rendering command, or the like, it is determined in step S220 that the object is not a photo image, and the flow jumps to step S260. On the other hand, if the rendering command is an image rendering command, it is determined that the object is a photo image, and the flow advances to step S240); and the rendering classifier is arranged to classify the rendering corresponding to the second set of original graphic commands into a second predetermined rendering type among the multiple predetermined rendering types according to the second rendering property (du Bois et al.: paragraph 204, block 1704 further identifies the offscreen regions based on dispatch thread ID data, offscreen or onscreen type as a second rendering property, paragraph 209, dispatch thread ID commands), in order to determine a second shading rate corresponding to the second predetermined rendering type for the rendering corresponding to the second set of original graphic commands (du Bois et al.: paragraph 204, block 1704 identifies, on the per-graphics processor basis and the per-display basis, guard bands between the offscreen regions and onscreen regions, wherein the onscreen regions and the guard bands are rendered at the first (e.g., relatively high) shading rate).

With respect to claim 8, du Bois et al. as modified by Suzuki disclose the method of claim 7, wherein the shading rate controller is arranged to control the processing circuit to activate the VRS function of the processing circuit, for rendering at the second shading rate corresponding to the second predetermined rendering type (du Bois et al.: paragraph 204, Block 1706 executes one or more compute shaders based on the second (e.g., relatively low) shading rate, as applied to the second type).

With respect to claim 17, du Bois et al. as modified by Suzuki disclose the method of claim 1, wherein the rendering corresponding to the at least one set of original graphic commands comprises any rendering operation among multiple rendering operations respectively corresponding to multiple rendering control levels (du Bois et al.: paragraph 156, FIG. 9A is a block diagram illustrating a graphics processor command format 900 that may be used to program graphics processing pipelines according to some embodiments, du Bois et al.: paragraph 158, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations).

With respect to claim 18, du Bois et al. as modified by Suzuki disclose the method of claim 17, wherein the multiple rendering control levels comprise a drawcall level (du Bois et al.: paragraph 156, FIG. 9A is a block diagram illustrating a graphics processor command format 900 that may be used to program graphics processing pipelines according to some embodiments), a renderpass level (du Bois et al.: paragraph 158, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations) and a thread-process level (du Bois et al.: paragraph 162, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations).

With respect to claim 19, du Bois et al. as modified by Suzuki disclose an apparatus that operates according to the method of claim 1, wherein the apparatus comprises at least the processing circuit within the electronic device (du Bois et al.: paragraph 170, FIG. 10 illustrates an exemplary graphics software architecture for a data processing system 1000 according to some embodiments); see rationale for rejection of claim 1.

Claim(s) 2-4, 9-10, and 14-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over du Bois et al. (U.S. PGPUB 20240282040) in view of Suzuki (U.S. Patent No.
6,891,970) and further in view of Li et al. (WO2022204920, referenced by corresponding U.S. PGPUB 20240320905).

With respect to claim 2, du Bois et al. as modified by Suzuki disclose the method of claim 1. However, du Bois et al. do not expressly disclose utilizing the shading rate controller within the processing circuit to control the processing circuit to selectively activate the VRS function of the processing circuit for rendering at the at least one shading rate corresponding to the at least one predetermined rendering type further comprises: utilizing the shading rate controller to perform at least one operation among a command insertion operation and a command modifying operation on the at least one set of original graphic commands, in order to convert the at least one set of original graphic commands into at least one set of new graphic commands, for controlling the processing circuit to perform rendering corresponding to the at least one set of new graphic commands at the at least one shading rate, rather than performing the rendering corresponding to the at least one set of original graphic commands. Li et al., who also deal with variable rate shading, disclose a method wherein utilizing the shading rate controller within the processing circuit to control the processing circuit to selectively activate the VRS function of the processing circuit for rendering at the at least one shading rate corresponding to the at least one predetermined rendering type further comprises: utilizing the shading rate controller to perform at least one operation among a command insertion operation and a command modifying operation on the at least one set of original graphic commands, in order to convert the at least one set of original graphic commands into at least one set of new graphic commands, for controlling the processing circuit to perform rendering corresponding to the at least one set of new graphic commands at the at least one shading rate, rather than performing the rendering corresponding to the at least one set of original graphic commands (paragraph 21, Shading rate extensions may allow applications to indicate different fragment shading rates at a draw call level. An extension may be additional instructions for an application that increase a capability of the application and/or increase an availability of data to the application). Du Bois et al., Suzuki, and Li et al. are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply the method wherein utilizing the shading rate controller within the processing circuit to control the processing circuit to selectively activate the VRS function of the processing circuit for rendering at the at least one shading rate corresponding to the at least one predetermined rendering type further comprises: utilizing the shading rate controller to perform at least one operation among a command insertion operation and a command modifying operation on the at least one set of original graphic commands, in order to convert the at least one set of original graphic commands into at least one set of new graphic commands, for controlling the processing circuit to perform rendering corresponding to the at least one set of new graphic commands at the at least one shading rate, rather than performing the rendering corresponding to the at least one set of original graphic commands, as taught by Li et al., to the du Bois et al. system, because computing device efficiencies may be improved based on determining when and to what degree (e.g., half shading rate, quarter shading rate, etc.) to reduce shading rates (paragraph 21 of Li et al.).

With respect to claim 3, du Bois et al. as modified by Suzuki and Li et al. disclose the method of claim 2, wherein the at least one set of original graphic commands comprise a first set of original graphic commands and a second set of original graphic commands (du Bois et al.: paragraph 203, the command(s) are issued by a graphics application such as, for example, the 3D graphics application 1010 (FIG. 10), already discussed); the rendering classifier is arranged to intercept the first set of original graphic commands on the command path to obtain a first rendering property from the first set of original graphic commands, for classifying rendering corresponding to the first set of original graphic commands (Suzuki: column 4, lines 53-62, One rendering command is received in step S200, and is interpreted in step S210 to discriminate the type of object indicated by the rendering command. If the rendering command of interest is not an image rendering command, i.e., it is a text rendering command), and to intercept the second set of original graphic commands on the command path to obtain a second rendering property from the second set of original graphic commands, for classifying rendering corresponding to the second set of original graphic commands (Suzuki: column 4, lines 53-62, One rendering command is received in step S200, and is interpreted in step S210 to discriminate the type of object indicated by the rendering command. On the other hand, if the rendering command is an image rendering command, it is determined that the object is a photo image, and the flow advances to step S240); and the rendering classifier is arranged to classify the rendering corresponding to the first set of original graphic commands into a first predetermined rendering type among the multiple predetermined rendering types according to the first rendering property, in order to determine a first shading rate corresponding to the first predetermined rendering type for the rendering corresponding to the first set of original graphic commands (du Bois et al.: paragraph 204, block 1704 identifies, on the per-graphics processor basis and the per-display basis, guard bands between the offscreen regions and onscreen regions, wherein the onscreen regions and the guard bands are rendered at the first (e.g., relatively high) shading rate. Additionally, block 1704 may identify the guard bands based on shading rate tile data); and the rendering classifier is arranged to classify the rendering corresponding to the second set of original graphic commands into a second predetermined rendering type among the multiple predetermined rendering types according to the second rendering property, in order to determine a second shading rate corresponding to the second predetermined rendering type for the rendering corresponding to the second set of original graphic commands (du Bois et al.: paragraph 204, block 1704 further identifies the offscreen regions based on dispatch thread ID data. Block 1706 executes one or more compute shaders based on the second (e.g., relatively low) shading rate). It would have been obvious to apply the teachings of Suzuki to intercept the first set and second set of rendering commands because an object is rasterized on the page memories on the basis of the rendering command (column 6, lines 13-18 of Suzuki), thus rendering an object based on the type associated with the rendering command.

With respect to claim 4, du Bois et al. as modified by Suzuki and Li et al. disclose the method of claim 3, wherein the shading rate controller is arranged to perform at least one first operation among the command insertion operation and the command modifying operation on the first set of original graphic commands, in order to convert the first set of original graphic commands into a first set of new graphic commands among the at least one set of new graphic commands, for controlling the processing circuit to perform rendering corresponding to the first set of new graphic commands at the first shading rate, rather than performing the rendering corresponding to the first set of original graphic commands (Li et al.: paragraph 21, Shading rate extensions may allow applications to indicate different fragment shading rates at a draw call level. An extension may be additional instructions for an application that increase a capability of the application and/or increase an availability of data to the application); and the shading rate controller is arranged to perform at least one second operation among the command insertion operation and the command modifying operation on the second set of original graphic commands, in order to convert the second set of original graphic commands into a second set of new graphic commands among the at least one set of new graphic commands, for controlling the processing circuit to perform rendering corresponding to the second set of new graphic commands at the second shading rate, rather than performing the rendering corresponding to the second set of original graphic commands (Li et al.: paragraph 21, Shading rate extensions may allow applications to indicate different fragment shading rates at a draw call level. An extension may be additional instructions for an application that increase a capability of the application and/or increase an availability of data to the application, as applied to the second set of commands).

With respect to claim 9, du Bois et al. as modified by Suzuki and Li et al. disclose the method of claim 5, wherein the at least one set of original graphic commands further comprise another set of original graphic commands; the rendering classifier is arranged to intercept the other set of original graphic commands on the command path (du Bois et al.: paragraph 203, Illustrated processing block 1702 provides for intercepting one or more commands to set a first shading rate for offscreen regions of a scene, wherein the scene is associated with a plurality of graphics processors and a plurality of displays, as applied to another command) to obtain another rendering property from the other set of original graphic commands, for classifying rendering corresponding to the other set of original graphic commands (Suzuki: column 4, lines 53-62, One rendering command is received in step S200, and is interpreted in step S210 to discriminate the type of object indicated by the rendering command. If the rendering command of interest is not an image rendering command, i.e., it is a text rendering command, graphics rendering command, or the like, it is determined in step S220 that the object is not a photo image, and the flow jumps to step S260. On the other hand, if the rendering command is an image rendering command, it is determined that the object is a photo image, and the flow advances to step S240); and the rendering classifier is arranged to classify the rendering corresponding to the other set of original graphic commands into another predetermined rendering type among the multiple predetermined rendering types according to the other rendering property, in order to determine another shading rate corresponding to the other predetermined rendering type for the rendering corresponding to the other set of original graphic commands, wherein the other shading rate is equal to a default shading rate of the processing circuit (Li et al.: paragraph 47, The shading rate may be set/determined, at 328. For example, a half-shading rate or a quarter-shading rate may be applied to draws that satisfy the preconditions and the rules. In further aspects, the shading rate may be set/determined, at 328, to remain as a default shading rate (e.g., full shading rate) for draws that do not satisfy the preconditions and the rules). It would have been obvious to apply the teachings of Li et al. to set the other shading rate to a default shading rate because render flows of some mobile/computing games and applications may be more efficient than render flows of other mobile/computing games and applications, the computing device may be configured to apply a preconfigured game-specific/application-specific rule mask that balances performance and visual quality (paragraph 47 of Li et al.).

With respect to claim 10, du Bois et al. as modified by Suzuki and Li et al. disclose the method of claim 9, wherein the shading rate controller is arranged to control the processing circuit to deactivate the VRS function of the processing circuit, for rendering at the other shading rate corresponding to the other predetermined rendering type (du Bois et al.: paragraph 204, Block 1706 executes one or more compute shaders based on the second (e.g., relatively low) shading rate, as applied to the other type).

With respect to claim 14, du Bois et al. as modified by Suzuki and Li et al. disclose the method of claim 1, wherein any rendering property among the at least one rendering property comprises one or a combination of at least one pipeline state, at least one buffer setting, at least one texture setting (Li et al.: paragraph 38, the CPU 202 may determine, at 220, whether the rule(s) are satisfied for reducing the shading rate for a draw of the one or more subsequent frames. The rule(s) may be associated with vertex counts, texture sampling), at least one shader code and at least one enabled extension. It would have been obvious to apply the method wherein the rendering property comprises a texture setting, because by improving fragment shading techniques, a number of heavy draw instances may be reduced (paragraph 41 of Li et al.).

With respect to claim 15, du Bois et al. as modified by Suzuki and Li et al. disclose the method of claim 1, wherein the multiple predetermined rendering types comprise a first predetermined rendering type regarding rendering at least one normal three-dimensional (3D) object (du Bois et al.: paragraph 171, The application also includes graphics objects 1016 defined by vertex data), a second predetermined rendering type regarding rendering at least one vegetation object (Li et al.: paragraph 47, the computing device may set a reduced shading rate (e.g., at 328) for heavy draws, such as draws associated with foliage, terrain, buildings, characters, etc.), and at least one other predetermined rendering type regarding at least one predetermined image processing operation (Li et al.: paragraph 37, The precondition(s) may be used to identify draws that can have a reduced shading rate without significantly impacting a user experience, as described in further detail below. The precondition(s) may be based on a determination that draws are not depth-only draws, that the draws are not user interface (UI) surface draws, etc.). It would have been obvious to include additional rendering types because this would implement details for rendering for a mobile game.

With respect to claim 16, du Bois et al. as modified by Suzuki and Li et al. disclose the method of claim 15, wherein the at least one predetermined image processing operation comprises any operation among operations of post-processing, rendering visual effects (VFX), rendering at least one user interface (UI) (Li et al.: paragraph 37, The precondition(s) may be used to identify draws that can have a reduced shading rate without significantly impacting a user experience, as described in further detail below. The precondition(s) may be based on a determination that draws are not depth-only draws, that the draws are not user interface (UI) surface draws, etc.), rendering based on a depth image and rendering at least one protected object.

Claim(s) 11 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over du Bois et al. (U.S. PGPUB 20240282040) in view of Suzuki (U.S. Patent No. 6,891,970) and further in view of Fitzgibbon (U.S. Patent No. 11,402,812).

With respect to claim 11, du Bois et al. as modified by Suzuki disclose the method of claim 1. However, du Bois et al. as modified by Suzuki do not expressly disclose the command path starts from an upper layer among multiple layers of a graphic control architecture within the processing circuit and reaches a lower layer among the multiple layers. Fitzgibbon, who also deals with instructions for computer graphics, discloses a method wherein the command path starts from an upper layer among multiple layers of a graphic control architecture within the processing circuit and reaches a lower layer among the multiple layers (column 7, lines 2-16, A high-level view of the layers of the software stack of the user device 100 are shown in FIG. 4B. The user device 100 includes hardware 127 at the bottom of the stack. The hardware includes the processor 110, memory 115, communication circuitry 120, and the user interface 125. The operating system 130 may be the lowest layer of software in the stack and, among other functions, interfaces with the hardware of the user device 100. The operating system 130 may be run or instantiated on the processor 110. The operating system 130 may include various hardware drivers and/or firmware used to interface with the hardware 127 of the user device 100. These drivers and/or firmware provide the instructions and/or code for communicating with and/or controlling the various hardware 127 components of the user device 100). Du Bois et al., Suzuki, and Fitzgibbon are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply the method wherein the command path starts from an upper layer among multiple layers of a graphic control architecture within the processing circuit and reaches a lower layer among the multiple layers, as taught by Fitzgibbon, to the du Bois et al. as modified by Suzuki system, because the application programming interface 165 may include various plugins or instructions for interfacing or communicating information and operational tasks with the operating system 130 and the processor 110. The application programming interface 165 makes data and functions available to the higher-level software including applications of the user device 100 (column 7, lines 35-42 of Fitzgibbon), thus facilitating communication of tasks.

With respect to claim 13, du Bois et al. as modified by Suzuki and Fitzgibbon disclose the method of claim 11, wherein the multiple layers of the graphic control architecture comprise at least one software layer, at least one firmware layer and at least one hardware layer (Fitzgibbon: Fig. 4B); and the rendering classifier and the shading rate controller are implemented in either a same layer or different layers among the multiple layers (du Bois et al.: paragraph 158, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations).
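As a purely illustrative aside (not part of the record), the command insertion/modification discussed for claims 2-4 above can be pictured with hypothetical command objects; none of the names below come from the claims or the cited references:

```python
# Illustrative sketch only: hypothetical command objects showing the kind of
# "command insertion" a shading rate controller might perform on an
# intercepted command stream. All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class GraphicCommand:
    opcode: str
    args: dict = field(default_factory=dict)

def convert_commands(original, shading_rate):
    """Insert a rate-setting command ahead of the original commands, so the
    converted stream renders at `shading_rate` instead of the default."""
    new_cmds = [GraphicCommand("SET_SHADING_RATE", {"rate": shading_rate})]  # insertion
    new_cmds.extend(original)  # the original draw work is preserved unchanged
    return new_cmds

cmds = [GraphicCommand("DRAW", {"vertices": 36})]
converted = convert_commands(cmds, shading_rate="2x2")
print([c.opcode for c in converted])  # ['SET_SHADING_RATE', 'DRAW']
```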
Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over du Bois et al. (U.S. PGPUB 20240282040) in view of Suzuki (U.S. Patent No. 6,891,970), Fitzgibbon (U.S. Patent No. 11,402,812), and further in view of Li et al. (WO2022204920, referenced by corresponding U.S. PGPUB 20240320905).

With respect to claim 12, du Bois et al. as modified by Suzuki, Fitzgibbon, and Li et al. disclose the method of claim 11, wherein the at least one set of original graphic commands are sent from the upper layer among the multiple layers toward the lower layer among the multiple layers (Fitzgibbon: column 7, lines 9-13, the operating system 130 may be the lowest layer of software in the stack and, among other functions, interfaces with the hardware of the user device 100. The operating system 130 may be run or instantiated on the processor 110, Fig. 4B) and are intercepted by the rendering classifier (du Bois et al.: paragraph 203, Illustrated processing block 1702 provides for intercepting one or more commands to set a first shading rate for offscreen regions of a scene, wherein the scene is associated with a plurality of graphics processors and a plurality of displays), for being converted into at least one set of new graphic commands by the shading rate controller (Li et al.: paragraph 21, Shading rate extensions may allow applications to indicate different fragment shading rates at a draw call level. An extension may be additional instructions for an application that increase a capability of the application and/or increase an availability of data to the application).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

U.S. PGPUB 20240394937 to Shen et al. for a method of intercepting rendering commands.
U.S. PGPUB 20110096366 to Oka for a method of interpreting rendering commands to determine the type of rendering command.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW GUS YANG whose telephone number is (571)272-5514. The examiner can normally be reached M-F 9 AM - 5:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW G YANG/
Primary Examiner, Art Unit 2614
1/2/26
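For readers less familiar with the technology at issue, the intercept-and-convert flow recited in claims 11-13 (original graphic commands traveling from an upper software layer toward a lower layer, intercepted by a rendering classifier, and rewritten by a shading rate controller) can be sketched roughly as follows. This is a purely illustrative model of the claimed concept: the class names, the precondition set, and the 2x2 rate are hypothetical and are not taken from the application or any cited reference.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DrawCommand:
    """A simplified graphic command on its way down the stack."""
    kind: str                      # e.g. "ui", "depth_only", "vfx", "scene"
    shading_rate: tuple            # (x, y) fragment shading rate

class RenderingClassifier:
    """Intercepts commands on the path from the upper layer to the lower layer.

    In the spirit of Li et al. paragraph 37, UI-surface and depth-only draws
    are left alone; only ordinary scene/VFX draws qualify for rate reduction.
    """
    REDUCIBLE_KINDS = {"scene", "vfx"}

    def qualifies(self, cmd: DrawCommand) -> bool:
        return cmd.kind in self.REDUCIBLE_KINDS

class ShadingRateController:
    """Converts an original command into a new command with a coarser rate."""
    def convert(self, cmd: DrawCommand) -> DrawCommand:
        # Coarsen from full-rate shading to a 2x2 VRS tile (illustrative).
        return replace(cmd, shading_rate=(2, 2))

def dispatch(commands, classifier, controller):
    """Upper layer -> classifier intercepts -> lower layer receives new commands."""
    return [
        controller.convert(cmd) if classifier.qualifies(cmd) else cmd
        for cmd in commands
    ]
```

In this sketch the classifier and controller sit between the layers as claim 13 contemplates; whether they live in software, firmware, or hardware is left open, matching the claim language "a same layer or different layers."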

Prosecution Timeline

Feb 22, 2024
Application Filed
Jan 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602856
DICING ORACLE FOR TEXTURE SPACE SHADING
2y 5m to grant Granted Apr 14, 2026
Patent 12602872
DRIVABLE IMPLICIT THREE-DIMENSIONAL HUMAN BODY REPRESENTATION METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12592023
INTERSECTION TESTING FOR RAY TRACING
2y 5m to grant Granted Mar 31, 2026
Patent 12579728
MEMORY ALLOCATION FOR RECURSIVE PROCESSING IN A RAY TRACING SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12567207
THREE-DIMENSIONAL MODELING AND RECONSTRUCTION OF CLOTHING
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
69%
Grant Probability
77%
With Interview (+8.3%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 558 resolved cases by this examiner. Grant probability derived from career allow rate.
