Prosecution Insights
Last updated: April 18, 2026
Application No. 18/621,768

TILE-BASED IMMEDIATE MODE RENDERER GRAPHICS PIPELINE WITH PER-TILE DEPTH PRE-PASSES

Non-Final OA (§103)
Filed: Mar 29, 2024
Examiner: CHIN, MICHELLE
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Advanced Micro Devices, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
OA Rounds: 1-2
To Grant: 2y 4m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 85% (above average; 540 granted / 634 resolved; +23.2% vs TC avg)
Interview Lift: +11.5% among resolved cases with interview (moderate, ~+12% lift)
Typical Timeline: 2y 4m avg prosecution; 29 currently pending
Career History: 663 total applications across all art units
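The headline allow rate follows directly from the career counts above; a quick illustrative check (assuming the "vs TC avg" delta is expressed in absolute percentage points):

```python
# Sanity-check the examiner card figures against the raw counts above.
granted = 540
resolved = 634

career_allow_rate = granted / resolved      # career allow rate
print(f"{career_allow_rate:.1%}")           # prints 85.2%, shown on the card as 85%

# The "+23.2% vs TC avg" delta implies a Tech Center average of roughly:
tc_avg = career_allow_rate - 0.232
print(f"{tc_avg:.1%}")                      # prints 62.0%
```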

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 70.6% (+30.6% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 1.6% (-38.4% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 634 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

2. The information disclosure statements (IDS) were submitted on 07/03/2024, 07/17/2025, and 10/21/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

3. Claim 1 is objected to because of the following informalities: claim 1 recites "A acceleration unit (AU) …". The word "A" is a typo and should read "An acceleration unit (AU) …". Appropriate correction is required.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

7. Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bolz et al. (US 2014/0267376 A1) in view of Brigg et al. (US 2021/0110510 A1).

8. With reference to claim 1, Bolz teaches A acceleration unit (AU), comprising: a plurality of per-tile queues each allocated to a tile of a plurality of tiles of a frame to be rendered; (“the L2 cache 265 may be configured to transmit the multi-sample pixel data in tile-sized increments (1SPP format or not) to the Load/Store unit 290 via the crossbar 260. Accordingly, the Load/Store unit 290 may be configured to store multi-sample pixel data in tile-sized increments. In other embodiments, the L2 cache 265 is configured to transmit a subset of the samples for a multi-sample pixel based on a request received from the Load/Store unit 290. The Load/Store unit 290 is configured to provide multi-sample pixel data to the processing unit 250 when a load request is received from the processing unit 250. The Load/Store unit 290 is also configured to receive processed multi-sample pixel data from the processing unit 250 and store the multi-sample pixel data. The Load/Store unit 290 may include a buffer for storing the processed multi-sample data temporarily before outputting the processed multi-sample data to the frame buffer memory 270 via the crossbar 260 and L2 cache 265.
In one embodiment, the Load/Store unit 290 functions, at least in part, as a cache that is configured to buffer multi-sample pixel data received from the L2 cache 265 and processed multi-sample pixel data received from the processing unit 250 in a single buffer.” [0031-0032]) Bolz teaches store multi-sample pixel data in tile-sized increments and a buffer for storing the processed multi-sample data temporarily before outputting the processed multi-sample data to the frame buffer memory. Bolz also teaches one or more processor cores configured to: for each tile of the plurality of tiles: write geometry data of one or more primitives of the frame to be rendered at least partially visible in the tile to a per-tile queue of the plurality of per-tile queues allocated to the tile; (“The multi-sample data for each sample may include z (depth), color, texture coordinates, or other attributes associated with graphics primitives. “ [0028] “The Load/Store unit 290 is also configured to receive processed multi-sample pixel data from the processing unit 250 and store the multi-sample pixel data. The Load/Store unit 290 may include a buffer for storing the processed multi-sample data temporarily before outputting the processed multi-sample data to the frame buffer memory 270 via the crossbar 260 and L2 cache 265.” [0032] “the PPU 700 is configured to execute a plurality of threads concurrently in two or more streaming multi-processors (SMs) 750. In one embodiment, the processing unit 250 and 550 are implemented as SMs 750. A thread (i.e., a thread of execution) is an instantiation of a set of instructions executing within a particular SM 750. Each SM 750, described below in more detail in conjunction with FIG. 
8, may include, but is not limited to, one or more processing cores, a level-one (L1) cache, shared memory, and the like.” [0107] “a primitive includes data that specifies a number of vertices for the primitive (e.g., in a model-space coordinate system) as well as attributes associated with each vertex of the primitive. The PPU 700 can be configured to process the graphics primitives to generate a frame buffer (i.e., pixel data for each of the pixels of the display). … An application writes model data for a scene (i.e., a collection of vertices and attributes) to memory. The model data defines each of the objects that may be visible on a display. The application then makes an API call to the driver kernel that requests the model data to be rendered and displayed. The driver kernel reads the model data and writes commands to the buffer to perform one or more operations to process the model data. The commands may encode different shader programs including one or more of a vertex shader, hull shader, geometry shader, pixel shader, etc. … a first subset of SMs 750 may be configured to execute a vertex shader program while a second subset of SMs 750 may be configured to execute a pixel shader program. The first subset of SMs 750 processes vertex data to produce processed vertex data and writes the processed vertex data to the L2 cache 265 and/or the memory 704 via the LoadStore units 290 and the crossbar 260. After the processed vertex data is rasterized (i.e., transformed from three-dimensional data into two-dimensional data in screen space) to produce fragment data, the second subset of SMs 750 executes a pixel shader to produce processed fragment data, which is then blended with other processed fragment data and written to the frame buffer in memory 704. 
The vertex shader program and pixel shader program may execute concurrently, processing different data from the same scene in a pipelined fashion until all of the model data for the scene has been rendered to the frame buffer. Then, the contents of the frame buffer are transmitted to a display controller for display on a display device.” [0116-0118]) [figure from Bolz omitted] Bolz does not explicitly teach based on the geometry data, perform a first depth sub-pass operation using a first threshold and a second depth sub-pass operation. This is what Brigg teaches (“The tiling engine 210 stores the transformed geometry data in a local transformed geometry buffer 211 and generates a list, for each tile, of the transformed primitives in the local transformed geometry buffer 211 that fall, at least partially within that tile. The list may be referred to as a partial display list. In some cases, the partial display list may comprise pointers or links to the transformed geometry data (e.g. vertex data) in the local transformed geometry buffer 211 related to the primitives that, at least partially, fall within the tile. The local transformed geometry buffer is not intended to be necessarily large enough to store all of the transformed geometry data to render a frame, so periodically (e.g. from time to time, e.g., at regular intervals, or when the local transformed geometry buffer 211 becomes full or when the fullness of the local transformed geometry buffer 211 is above a threshold) the tiling engine 210 sends one or more partial display lists to the rasterization logic 206 to thereby free up space in the local transformed geometry buffer 211.” [0065] “The HSR logic 314 may comprise two sub-stages—a first sub-stage in which depth testing is performed on primitive fragments related to a tile, and a second sub-stage in which the primitive fragments that survive the depth testing are stored in a tag buffer.
For example, the HSR logic 314 may comprise depth testing logic and a tag buffer. The depth testing logic receives primitive fragments and compares the depth values (e.g. Z value or Z co-ordinate) of the primitive fragments to the corresponding depth value in a depth buffer for the tile. Specifically, the depth buffer stores the ‘best’ depth value (e.g. the one that is closest to the viewer) for each sample of the tile. If the received primitive fragment has a ‘worse’ depth value (e.g. a depth value that indicates it is further from the viewer) than the corresponding depth value in the depth buffer, then the primitive fragment will be hidden by another primitive and so the primitive fragment ‘fails’ the depth test and is not output to the tag buffer. If, however, the received primitive fragment has a ‘better’ depth value (e.g. a depth value that indicates it is closer to the viewer) than the corresponding depth value in the depth buffer, the primitive fragment ‘passes’ the depth test. The primitive fragment is then output to the tag buffer and the corresponding depth value in the depth buffer is updated to indicate there is a new ‘best’ depth value. The tag buffer receives primitive fragments that have passed the depth test stage and for each received primitive fragment updates the tag buffer to identify that received primitive fragment as the primitive fragment that is visible at its sample position. For example, if the tag buffer receives a primitive fragment x at sample location a then the tag buffer stores information indicating that the primitive fragment x is visible at sample location a. If the tag buffer subsequently receives a primitive fragment y at sample location a, then the tag buffer updates the information for sample location a to indicate that in fact it is primitive fragment y that is visible. Accordingly, in a simple case where all of the primitives are opaque, after a set of primitive fragments associated with a tile (e.g.
the primitive fragments associated with a partial display list) have been processed by the depth testing logic, the tag buffer comprises the identity of the primitive fragments (to date) that are visible at each sample location. At this point the tag buffer may be flushed to the texturing/shading logic 316 where texturing and shading are performed on the primitive fragments that are visible.” [0151-0152]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Brigg into Bolz, in order to keep bandwidth requirements for the memory low. 9. With reference to claim 2, Bolz teaches the one or more processor cores are configured to: for each tile, render, to a buffer, pixel attribute data of the one or more primitives at least partially visible in the tile based on the geometry data. (“The multi-sample data for each sample may include z (depth), color, texture coordinates, or other attributes associated with graphics primitives. “ [0028] “the coalesce unit 256 is configured to snoop the writes to the cache 281 that are received from the processing unit 250 and update the tile coverage mask maintained by the coverage tracking unit 276.” [0051] “the PPU 700 is configured to execute a plurality of threads concurrently in two or more streaming multi-processors (SMs) 750. In one embodiment, the processing unit 250 and 550 are implemented as SMs 750. A thread (i.e., a thread of execution) is an instantiation of a set of instructions executing within a particular SM 750. Each SM 750, described below in more detail in conjunction with FIG. 8, may include, but is not limited to, one or more processing cores, a level-one (L1) cache, shared memory, and the like.” [0107] “a primitive includes data that specifies a number of vertices for the primitive (e.g., in a model-space coordinate system) as well as attributes associated with each vertex of the primitive. 
The PPU 700 can be configured to process the graphics primitives to generate a frame buffer (i.e., pixel data for each of the pixels of the display). … An application writes model data for a scene (i.e., a collection of vertices and attributes) to memory. The model data defines each of the objects that may be visible on a display. The application then makes an API call to the driver kernel that requests the model data to be rendered and displayed. The driver kernel reads the model data and writes commands to the buffer to perform one or more operations to process the model data. The commands may encode different shader programs including one or more of a vertex shader, hull shader, geometry shader, pixel shader, etc. … a first subset of SMs 750 may be configured to execute a vertex shader program while a second subset of SMs 750 may be configured to execute a pixel shader program. The first subset of SMs 750 processes vertex data to produce processed vertex data and writes the processed vertex data to the L2 cache 265 and/or the memory 704 via the LoadStore units 290 and the crossbar 260. After the processed vertex data is rasterized (i.e., transformed from three-dimensional data into two-dimensional data in screen space) to produce fragment data, the second subset of SMs 750 executes a pixel shader to produce processed fragment data, which is then blended with other processed fragment data and written to the frame buffer in memory 704. The vertex shader program and pixel shader program may execute concurrently, processing different data from the same scene in a pipelined fashion until all of the model data for the scene has been rendered to the frame buffer. Then, the contents of the frame buffer are transmitted to a display controller for display on a display device.” [0116-0118]) 10. 
With reference to claim 3, Bolz teaches the one or more processor cores are configured to: for each tile of the plurality of tiles, based on the pixel attribute data of the one or more primitives at least partially visible in the tile, determine data of the one or more primitives at least partially visible in the tile. (“The multi-sample data for each sample may include z (depth), color, texture coordinates, or other attributes associated with graphics primitives. “ [0028] “the coalesce unit 256 is configured to snoop the writes to the cache 281 that are received from the processing unit 250 and update the tile coverage mask maintained by the coverage tracking unit 276.” [0051] “the PPU 700 is configured to execute a plurality of threads concurrently in two or more streaming multi-processors (SMs) 750. In one embodiment, the processing unit 250 and 550 are implemented as SMs 750. A thread (i.e., a thread of execution) is an instantiation of a set of instructions executing within a particular SM 750. Each SM 750, described below in more detail in conjunction with FIG. 8, may include, but is not limited to, one or more processing cores, a level-one (L1) cache, shared memory, and the like.” [0107] “a primitive includes data that specifies a number of vertices for the primitive (e.g., in a model-space coordinate system) as well as attributes associated with each vertex of the primitive. The PPU 700 can be configured to process the graphics primitives to generate a frame buffer (i.e., pixel data for each of the pixels of the display). … An application writes model data for a scene (i.e., a collection of vertices and attributes) to memory. The model data defines each of the objects that may be visible on a display. The application then makes an API call to the driver kernel that requests the model data to be rendered and displayed. The driver kernel reads the model data and writes commands to the buffer to perform one or more operations to process the model data. 
The commands may encode different shader programs including one or more of a vertex shader, hull shader, geometry shader, pixel shader, etc. … a first subset of SMs 750 may be configured to execute a vertex shader program while a second subset of SMs 750 may be configured to execute a pixel shader program. The first subset of SMs 750 processes vertex data to produce processed vertex data and writes the processed vertex data to the L2 cache 265 and/or the memory 704 via the LoadStore units 290 and the crossbar 260. After the processed vertex data is rasterized (i.e., transformed from three-dimensional data into two-dimensional data in screen space) to produce fragment data, the second subset of SMs 750 executes a pixel shader to produce processed fragment data, which is then blended with other processed fragment data and written to the frame buffer in memory 704. The vertex shader program and pixel shader program may execute concurrently, processing different data from the same scene in a pipelined fashion until all of the model data for the scene has been rendered to the frame buffer. Then, the contents of the frame buffer are transmitted to a display controller for display on a display device.” [0116-0118]) Bolz does not explicitly teach lighting data of the one or more primitives. This is what Brigg teaches (“The geometry processing logic 204 comprises transformation logic 208 and a tiling engine 210. The transformation logic 208 operates in the same manner as the transformation logic 108 of FIG. 1. Specifically, the transformation logic 208 receives geometry data (e.g. vertices, primitives and/or patches) from an application (e.g. a game application) and transforms the geometry data into the rendering space (e.g. screen space). The transformation logic 208 may also perform functions such as clipping and culling to remove geometry data (e.g. 
primitives or patches) that falls outside of a viewing frustum, and/or apply lighting/attribute processing as is known to those of skill in the art. The transformed geometry data (e.g. vertices, primitives and/or patches) is provided to the tiling engine 210.” [0064]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Brigg into Bolz, in order to keep bandwidth requirements for the memory low. 11. With reference to claim 4, Bolz teaches the one or more processor cores are configured to: release, from the buffer, pixel attribute data of the one or more primitives at least partially visible in a first tile of the plurality of tiles; and concurrently with releasing the pixel attribute data, (“The multi-sample data for each sample may include z (depth), color, texture coordinates, or other attributes associated with graphics primitives. “ [0028] “the coalesce unit 256 is configured to snoop the writes to the cache 281 that are received from the processing unit 250 and update the tile coverage mask maintained by the coverage tracking unit 276.” [0051] “the PPU 700 is configured to execute a plurality of threads concurrently in two or more streaming multi-processors (SMs) 750. In one embodiment, the processing unit 250 and 550 are implemented as SMs 750. A thread (i.e., a thread of execution) is an instantiation of a set of instructions executing within a particular SM 750. Each SM 750, described below in more detail in conjunction with FIG. 8, may include, but is not limited to, one or more processing cores, a level-one (L1) cache, shared memory, and the like.” [0107] “a primitive includes data that specifies a number of vertices for the primitive (e.g., in a model-space coordinate system) as well as attributes associated with each vertex of the primitive. 
The PPU 700 can be configured to process the graphics primitives to generate a frame buffer (i.e., pixel data for each of the pixels of the display). … An application writes model data for a scene (i.e., a collection of vertices and attributes) to memory. The model data defines each of the objects that may be visible on a display. The application then makes an API call to the driver kernel that requests the model data to be rendered and displayed. The driver kernel reads the model data and writes commands to the buffer to perform one or more operations to process the model data. The commands may encode different shader programs including one or more of a vertex shader, hull shader, geometry shader, pixel shader, etc. … a first subset of SMs 750 may be configured to execute a vertex shader program while a second subset of SMs 750 may be configured to execute a pixel shader program. The first subset of SMs 750 processes vertex data to produce processed vertex data and writes the processed vertex data to the L2 cache 265 and/or the memory 704 via the LoadStore units 290 and the crossbar 260. After the processed vertex data is rasterized (i.e., transformed from three-dimensional data into two-dimensional data in screen space) to produce fragment data, the second subset of SMs 750 executes a pixel shader to produce processed fragment data, which is then blended with other processed fragment data and written to the frame buffer in memory 704. The vertex shader program and pixel shader program may execute concurrently, processing different data from the same scene in a pipelined fashion until all of the model data for the scene has been rendered to the frame buffer. Then, the contents of the frame buffer are transmitted to a display controller for display on a display device.” [0116-0118]) Bolz does not explicitly teach perform the first depth sub- pass operation based on pixel depth data of primitives at least partially visible in a second tile of the plurality of tiles. 
This is what Brigg teaches (“The HSR logic 314 may comprise two sub-stages—a first sub-stage in which depth testing is performed on primitive fragments related to a tile, and a second sub-stage in which the primitive fragments that survive the depth testing are stored in a tag buffer. For example, the HSR logic 314 may comprise depth testing logic and a tag buffer. The depth testing logic receives primitive fragments and compares the depth values (e.g. Z value or Z co-ordinate) of the primitive fragments to the corresponding depth value in a depth buffer for the tile. Specifically, the depth buffer stores the ‘best’ depth value (e.g. the one that is closest to the viewer) for each sample of the tile. If the received primitive fragment has a ‘worse’ depth value (e.g. a depth value that indicates it is further from the viewer) than the corresponding depth value in the depth buffer, then the primitive fragment will be hidden by another primitive and so the primitive fragment ‘fails’ the depth test and is not output to the tag buffer. If, however, the received primitive fragment has a ‘better’ depth value (e.g. a depth value that indicates it is closer to the viewer) than the corresponding depth value in the depth buffer, the primitive fragment ‘passes’ the depth test. The primitive fragment is then output to the tag buffer and the corresponding depth value in the depth buffer is updated to indicate there is a new ‘best’ depth value. The tag buffer receives primitive fragments that have passed the depth test stage and for each received primitive fragment updates the tag buffer to identify that received primitive fragment as the primitive fragment that is visible at its sample position. For example, if the tag buffer receives a primitive fragment x at sample location a then the tag buffer stores information indicating that the primitive fragment x is visible at sample location a.
If the tag buffer subsequently receives a primitive fragment y at sample location a, then the tag buffer updates the information for sample location a to indicate that in fact it is primitive fragment y that is visible. Accordingly, in a simple case where all of the primitives are opaque, after a set of primitive fragments associated with a tile (e.g. the primitive fragments associated with a partial display list) have been processed by the depth testing logic, the tag buffer comprises the identity of the primitive fragments (to date) that are visible at each sample location. At this point the tag buffer may be flushed to the texturing/shading logic 316 where texturing and shading are performed on the primitive fragments that are visible.” [0151-0152]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Brigg into Bolz, in order to keep bandwidth requirements for the memory low. 12. With reference to claim 5, Bolz does not explicitly teach the first depth sub-pass operation is different from the second depth sub-pass operation. This is what Brigg teaches (“The HSR logic 314 may comprise two sub-stages—a first sub-stage in which depth testing is performed on primitive fragments related to a tile, and a second sub-stage in which the primitive fragments that survive the depth testing are stored in a tag buffer. For example, the HSR logic 314 may comprise depth testing logic and a tag buffer. The depth testing logic receives primitive fragments and compares the depth values (e.g. Z value or Z co-ordinate) of the primitive fragments to the corresponding depth value in a depth buffer for the tile. Specifically, the depth buffer stores the ‘best’ depth value (e.g. the one that is closest to the viewer) for each sample of the tile. If the received primitive fragment has a ‘worse’ depth value (e.g.
a depth value that indicates it is further from the viewer) than the corresponding depth value in the depth buffer, then the primitive fragment will be hidden by another primitive and so the primitive fragment ‘fails’ the depth test and is not output to the tag buffer. If, however, the received primitive fragment has a ‘better’ depth value (e.g. a depth value that indicates it is closer to the viewer) than the corresponding depth value in the depth buffer, the primitive fragment ‘passes’ the depth test. The primitive fragment is then output to the tag buffer and the corresponding depth value in the depth buffer is updated to indicate there is a new ‘best’ depth value. The tag buffer receives primitive fragments that have passed the depth test stage and for each received primitive fragment updates the tag buffer to identify that received primitive fragment as the primitive fragment that is visible at its sample position. For example, if the tag buffer receives a primitive fragment x at sample location a then the tag buffer stores information indicating that the primitive fragment x is visible at sample location a. If the tag buffer subsequently receives a primitive fragment y at sample location a, then the tag buffer updates the information for sample location a to indicate that in fact it is primitive fragment y that is visible. Accordingly, in a simple case where all of the primitives are opaque, after a set of primitive fragments associated with a tile (e.g. the primitive fragments associated with a partial display list) have been processed by the depth testing logic, the tag buffer comprises the identity of the primitive fragments (to date) that are visible at each sample location.
At this point the tag buffer may be flushed to the texturing/shading logic 316 where texturing and shading are performed on the primitive fragments that are visible.” [0151-0152]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Brigg into Bolz, in order to keep bandwidth requirements for the memory low. 13. With reference to claim 6, Bolz does not explicitly teach the first depth sub-pass operation is based on a first set of pixel states and the second depth sub-pass operation is based on a second set of pixel states that is different from the first set of pixel states. This is what Brigg teaches (“The rendering space is a two-dimensional, often, but not necessarily, rectangular (where rectangle includes square) grid of pixels. A region of the rendering space is the portion of the rendering space corresponding to a set of pixels and may be defined by the number of pixels covered by the region. For example, an n×m region of the rendering space is a portion of the rendering space corresponding to an n×m set of pixels where n and m are integers. As described in more detail below, a region of the rendering space may be a portion of the rendering space corresponding to a contiguous block of pixels or a non-contiguous set of pixels.” [0075] “The HSR logic 314 may comprise two sub-stages—a first sub-stage in which depth testing is performed on primitive fragments related to a tile, and a second sub-stage in which the primitive fragments that survive the depth testing are stored in a tag buffer. For example, the HSR logic 314 may comprise depth testing logic and a tag buffer. The depth testing logic receives primitive fragments and compares the depth values (e.g. Z value or Z co-ordinate) of the primitive fragments to the corresponding depth value in a depth buffer for the tile. Specifically, the depth buffer stores the ‘best’ depth value (e.g. 
the one that is closest to the viewer) for each sample of the tile. If the received primitive fragment has a ‘worse’ depth value (e.g. a depth value that indicates it is further from the viewer) than the corresponding depth value in the depth buffer, then the primitive fragment will be hidden by another primitive and so the primitive fragment ‘fails’ the depth test and is not output to the tag buffer. If, however, the received primitive fragment has a ‘better’ depth value (e.g. a depth value that indicates it is closer to the viewer) than the corresponding depth value in the depth buffer, the primitive fragment ‘passes’ the depth test. The primitive fragment is then output to the tag buffer and the corresponding depth value in the depth buffer is updated to indicate there is a new ‘best’ depth value. The tag buffer receives primitive fragments that have passed the depth test stage and for each received primitive fragment updates the tag buffer to identify that received primitive fragment as the primitive fragment that is visible at its sample position. For example, if the tag buffer receives a primitive fragment x at sample location a then the tag buffer stores information indicating that the primitive fragment x is visible at sample location a. If the tag buffer subsequently receives a primitive fragment y at sample location a, then the tag buffer updates the information for sample location a to indicate that in fact it is primitive fragment y that is visible. Accordingly, in a simple case where all of the primitives are opaque, after a set of primitive fragments associated with a tile (e.g. the primitive fragments associated with a partial display list) have been processed by the depth testing logic, the tag buffer comprises the identity of the primitive fragments (to date) that are visible at each sample location.
At this point the tag buffer may be flushed to the texturing/shading logic 316 where texturing and shading are performed on the primitive fragments that are visible.” [0151-0152]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Brigg into Bolz, in order to keep bandwidth requirements for the memory low. 14. With reference to claim 7, Bolz does not explicitly teach the one or more processor cores are configured to: for each tile of the plurality of tiles, perform a scissor operation on pixels of the one or more primitives at least partially visible in the tile. This is what Brigg teaches (“The geometry processing logic 204, like the geometry processing logic 104 of FIG. 1, implements the geometry processing phase. The geometry processing logic 204 comprises transformation logic 208 and a tiling engine 210. The transformation logic 208 operates in the same manner as the transformation logic 108 of FIG. 1. Specifically, the transformation logic 208 receives geometry data (e.g. vertices, primitives and/or patches) from an application (e.g. a game application) and transforms the geometry data into the rendering space (e.g. screen space). The transformation logic 208 may also perform functions such as clipping and culling to remove geometry data (e.g. primitives or patches) that falls outside of a viewing frustum, and/or apply lighting/attribute processing as is known to those of skill in the art. The transformed geometry data (e.g. vertices, primitives and/or patches) is provided to the tiling engine 210.” [0064] “The rendering space is a two-dimensional, often, but not necessarily, rectangular (where rectangle includes square) grid of pixels. A region of the rendering space is the portion of the rendering space corresponding to a set of pixels and may be defined by the number of pixels covered by the region. 
For example, an n×m region of the rendering space is a portion of the rendering space corresponding to an n×m set of pixels where n and m are integers. As described in more detail below, a region of the rendering space may be a portion of the rendering space corresponding to a contiguous block of pixels or a non-contiguous set of pixels.” [0075] “The HSR logic 314 may comprise two sub-stages—a first sub-stage in which depth testing is performed on primitive fragments related to a tile, and a second sub-stage in which the primitive fragments that survive the depth testing are stored in a tag buffer. For example, the HSR logic 314 may comprise depth testing logic and a tag buffer. The depth testing logic receives primitive fragments and compares the depth values (e.g. Z value or Z co-ordinate) of the primitive fragments to the corresponding depth value in a depth buffer for the tile. Specifically, the depth buffer stores the ‘best’ depth value (e.g. the one that is closest to the viewer) for each sample of the tile. If the received primitive fragment has a ‘worse’ depth value (e.g. a depth value that indicates it is further from the viewer) than the corresponding depth value in the depth buffer, then the primitive fragment will be hidden by another primitive and so the primitive fragment ‘fails’ the depth test and is not output to the tag buffer. If, however, the received primitive fragment has a ‘better’ depth value (e.g. a depth value that indicates it is closer to the viewer) than the corresponding depth value in the depth buffer, the primitive fragment ‘passes’ the depth test. The primitive fragment is then output to the tag buffer and the corresponding depth value in the depth buffer is updated to indicate there is a new ‘best’ depth value. 
The tag buffer receives primitive fragments that have passed the depth test stage and for each received primitive fragment updates the tag buffer to identify that received primitive fragment as the primitive fragment that is visible at its sample position. For example, if the tag buffer receives a primitive fragment x at sample location a then the tag buffer stores information indicating that the primitive fragment x is visible at sample location a. If the tag buffer subsequently receives a primitive fragment y at sample location a, then the tag buffer updates the information for sample location a to indicate that in fact it is primitive fragment y that is visible. Accordingly, in a simple case where all of the primitives are opaque, after a set of primitive fragments associated with a tile (e.g. the primitive fragments associated with a partial display list) have been processed by the depth testing logic, the tag buffer comprises the identity of the primitive fragments (to date) that are visible at each sample location. At this point the tag buffer may be flushed to the texturing/shading logic 316 where texturing and shading are performed on the primitive fragments that are visible.” [0151-0152]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Brigg into Bolz, in order to keep bandwidth requirements for the memory low. 15. Claims 8-14 are similar in scope to claims 1-7, and they are rejected under similar rationale. 16. Claim 15 is similar in scope to the combination of claims 1 and 2, and thus is rejected under similar rationale. Bolz additionally teaches one or more caches; (“the L2 cache 265 may be configured to transmit the multi-sample pixel data in tile-sized increments (1SPP format or not) to the Load/Store unit 290 via the crossbar 260. 
Accordingly, the Load/Store unit 290 may be configured to store multi-sample pixel data in tile-sized increments. In other embodiments, the L2 cache 265 is configured to transmit a subset of the samples for a multi-sample pixel based on a request received from the Load/Store unit 290.” [0031]) 17. Claim 16 is similar in scope to claim 3, and thus is rejected under similar rationale. 18. Claim 17 is similar in scope to claim 5, and thus is rejected under similar rationale. 19. Claim 18 is similar in scope to claim 6, and thus is rejected under similar rationale. 20. Claim 19 is similar in scope to claim 4, and thus is rejected under similar rationale. 21. Claim 20 is similar in scope to claim 7, and thus is rejected under similar rationale. Conclusion 22. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michelle Chin whose telephone number is (571)270-3697. The examiner can normally be reached on Monday-Friday 8:00 AM-4:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached on (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHELLE CHIN/ Primary Examiner, Art Unit 2614
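The two sub-stage hidden-surface-removal behavior the examiner quotes from Brigg [0151]-[0152] (a per-tile depth test that keeps the 'best' depth per sample, feeding a tag buffer that records the currently visible fragment at each sample) can be sketched in a few lines. This is an illustrative sketch only, not code from the application or from the Brigg or Bolz references; the `TileHSR` class and its method names are hypothetical.

```python
class TileHSR:
    """Sketch of Brigg's two sub-stages: per-tile depth testing + a tag buffer."""

    def __init__(self, width, height, far=float("inf")):
        # Depth buffer starts at the 'worst' depth; smaller z = closer to viewer.
        self.depth = [[far] * width for _ in range(height)]
        # Tag buffer: which primitive fragment is visible at each sample position.
        self.tags = [[None] * width for _ in range(height)]

    def submit(self, prim_id, x, y, z):
        """Depth-test one fragment; on a pass, update depth and tag buffers."""
        if z < self.depth[y][x]:       # 'better' depth => passes the depth test
            self.depth[y][x] = z       # record the new 'best' depth
            self.tags[y][x] = prim_id  # a later fragment overwrites an earlier one
            return True
        return False                   # 'worse' depth => hidden, not output

    def flush(self):
        """Flush visible fragments (to texturing/shading) and clear the tags."""
        visible = [(tag, x, y)
                   for y, row in enumerate(self.tags)
                   for x, tag in enumerate(row) if tag is not None]
        self.tags = [[None] * len(row) for row in self.tags]
        return visible


# Example matching the quoted passage: fragment y at the same sample location
# as x, but closer, replaces x in the tag buffer; a farther fragment z fails.
hsr = TileHSR(2, 2)
hsr.submit("x", 0, 0, 0.8)
hsr.submit("y", 0, 0, 0.3)   # closer than x, so y becomes the visible fragment
hsr.submit("z", 0, 0, 0.9)   # farther than y, fails the depth test
print(hsr.flush())           # only y survives at sample (0, 0)
```

In the simple all-opaque case, after every fragment of a tile has been submitted, `flush()` yields exactly the set of fragments that need texturing and shading, which is the bandwidth-saving rationale the examiner cites for the combination.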

Prosecution Timeline

Mar 29, 2024
Application Filed
Dec 12, 2025
Non-Final Rejection — §103
Apr 01, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602870
COMPUTER-AIDED TECHNIQUES FOR DESIGNING 3D SURFACES BASED ON GRADIENT SPECIFICATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12597205
HYBRID GPU-CPU APPROACH FOR MESH GENERATION AND ADAPTIVE MESH REFINEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12592041
MIXED SHEET EXTENSION
2y 5m to grant Granted Mar 31, 2026
Patent 12586287
Method of Operating Shared GPU Resource and a Shared GPU Device
2y 5m to grant Granted Mar 24, 2026
Patent 12579700
METHODS OF IMPERSONATION IN STREAMING MEDIA
2y 5m to grant Granted Mar 17, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
85%
Grant Probability
97%
With Interview (+11.5%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 634 resolved cases by this examiner. Grant probability derived from career allow rate.
