Prosecution Insights
Last updated: April 19, 2026
Application No. 18/390,876

HYBRID HASH FUNCTION FOR ACCESS LOCALITY

Status: Non-Final OA (§103)
Filed: Dec 20, 2023
Examiner: DU, HAIXIA
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: ATI Technologies ULC
OA Round: 3 (Non-Final)
Grant Probability: 86% (Favorable); 99% with interview
Expected OA Rounds: 3-4
Time to Grant: 2y 6m

Examiner Intelligence

Career Allow Rate: 86% (above average); 477 granted of 553 resolved (+24.3% vs TC avg)
Interview Lift: +18.0% (strong), measured on resolved cases with vs. without interview
Avg Prosecution: 2y 6m; 22 applications currently pending
Total Applications: 575 across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 20.2% (-19.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 553 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

This is in response to Applicant's Request for Continued Examination (RCE) filed on 2/6/2026, with Amendments and Remarks filed on 1/29/2026. Claims 1, 10, and 19 have been amended. Claims 21-25 have been added. Claims 2, 4, 11, 13, and 20 have been previously canceled. Claims 1, 3, 5-10, 12, 14-19, and 21-25 are present for examination.

Response to Arguments

Applicant's arguments with respect to claims 1, 10, and 19 regarding the limitation "the randomization portion includes an object identifier" have been fully considered, but they are not persuasive. Applicant submits: "claims 1, 10, and 19 recite that 'the randomization portion includes a mip level and an object identifier.' The cited references do not teach or suggest these features." (See Remarks filed on 1/29/2026, p. 8, 3rd para.)

The examiner respectfully disagrees. Although neither Baker nor Yadav expressly discloses "the randomization portion includes a mip level," Baker discloses "the randomization portion includes an object identifier." See Baker, p. 5, col. 1, 4th para., disclosing that the virtualized shadel space consists of two memory buffers, implemented as two 2D textures: a 2D remap buffer and a shadel storage buffer. Each remap buffer entry contains three values: a shadel block start offset, which marks the beginning index location in the shadel storage buffer; the object instance ID, which represents which object the shadels belong to; and the occupancy field, which uses 1 bit per shadel chunk to indicate whether that chunk is occupied. Therefore, the shadel block start offset and the object instance ID can be considered the randomization portion, which includes the object instance ID as an object identifier.
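Baker's remap-buffer entry, as characterized above, can be pictured as a small record. This is only an illustrative sketch: the field names are hypothetical, but the three values mirror the cited passage (start offset, object instance ID, 64-bit occupancy field).

```python
from dataclasses import dataclass

@dataclass
class RemapEntry:
    """One entry of Baker's 2D remap buffer (hypothetical field names)."""
    block_start_offset: int   # beginning index location in the shadel storage buffer
    object_instance_id: int   # which object these shadels belong to
    occupancy: int            # 64-bit field, 1 bit per shadel chunk

    def chunk_is_occupied(self, sub_index: int) -> bool:
        # Test the sub-index's bit in the occupancy field.
        return bool((self.occupancy >> sub_index) & 1)
```

A usage example: `RemapEntry(128, 7, 0b1010).chunk_is_occupied(1)` reports chunk 1 as occupied while chunk 0 is not.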
Applicant's arguments with respect to claims 1, 10, and 19 regarding the limitation "the randomization portion includes a mip level" have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5-8, 10, 12, 14-17, 19, and 21-25 are rejected under 35 U.S.C. 103 as being unpatentable over Baker (Baker et al., Generalized Decoupled and Object Space Shading System) in view of US Patent Publication No. 2019/0206111 A1 to Yadav et al. and US Patent No. 6,362,824 B1 to Thayer.
Regarding claim 1, Baker discloses a method (Baker, Abstract) comprising:

mapping a randomization portion of an item of identifying information of a tile of a shade space texture for a decoupled shading technique to a block of an address space, wherein the randomization portion includes an object identifier (Baker, Abstract, disclosing a generalized solution to decoupled shading; p. 3, col. 1, Sec. 3, 1st para., disclosing the generalized decoupled shading engine rendering a provided scene, including the allocation, generation, processing, and management of shade elements, called shadels; p. 5, col. 1, 4th para., disclosing that the virtualized shadel space consists of two memory buffers, implemented as two 2D textures: a 2D remap buffer and a shadel storage buffer, where each remap buffer entry contains three values: a shadel block start offset marking the beginning index location in the shadel storage buffer, the object instance ID representing which object the shadels belong to, and the occupancy field using 1 bit per shadel chunk to indicate whether that chunk is occupied; 5th para., disclosing that, to address a particular shadel, the 2D shadel location is indexed into the corresponding remap buffer entry and the sub-index of the shadel chunk is calculated, corresponding to 1 of the 64 bits in the remap buffer; Figure 6, showing an entry (middle block) of the 2D Remap Buffer (left block) mapped to a block of entries in the Shadel Storage Buffer (right block). An entry of the remap buffer can thus correspond to an item of identifying information of a shadel block as a tile of a shade space texture for a decoupled shading technique, and the shadel block start offset and the object instance ID can correspond to a randomization portion of the item mapped to a block of the shadel storage buffer as the address space, with the start offset marking the start of the block and the object instance ID corresponding to an object identifier);

mapping a linear portion of the item of identifying information to an element within the block (Baker, p. 5, col. 1, 4th-5th paras. and Figure 6, cited above: the occupancy field of a remap buffer entry can correspond to the linear portion of the item of identifying information, mapped to an element within the block, starting from the start offset in the shadel storage buffer, that corresponds to a shadel); and

performing decoupled shading operations with the element (Baker, Abstract; p. 3, col. 1, Sec. 3, 1st para.; p. 5, col. 1, 5th para., disclosing that the address of the shadel chunk is then calculated, giving the location of the shadel chunk, where each shadel can easily be sub-indexed; p. 6, col. 2, Sec. 3.5, 1st para., disclosing that shadels are allocated in the shadel storage buffer; p. 8, col. 1, Sec. 3.9, 1st para., disclosing that the shadels are read from the shadel storage buffer using the shadel remap buffer).

[Image: media_image1.png]

However, Baker does not expressly disclose that the block of the address space is a random block of the address space, wherein the randomization portion includes a mip level, and wherein the linear portion includes at least a portion of texture coordinates of texels in the shade space texture and the randomization portion includes identification information of the tile other than the at least portion of texture coordinates.

On the other hand, Yadav discloses mapping a randomization portion of an item of identifying information of a tile of an image to a random block of an address space (Yadav, para. [0059], disclosing storing graphics data in arbitrary portions of an application buffer; para. [0062], disclosing a GPU pixel block table that indicates which lines of image content are associated with which portions of the application buffer; para. [0072], disclosing storing image content in arbitrary locations of the display buffer, where the display pixel block table may indicate the order in which the image content stored in the display buffer is to be arranged; FIG. 3B and paras. [0096]-[0097], disclosing the GPU pixel block table 38 in FIG. 3B having LINE RANGE and OFFSET, with LINE RANGE showing the locations of the image content within the block stored in the buffer and OFFSET showing the start of a virtual address in the buffer. The buffer can correspond to an address space; the arbitrary locations storing the portion of the image content can correspond to a random block of the address space; and the OFFSET can correspond to a randomization portion of an item of identifying information of a tile of an image (the entry for the portion of the image content in the GPU pixel block table being the item of identifying information), the offset value corresponding to the starting position of the block randomly/arbitrarily located in the buffer);

mapping a linear portion of the item of identifying information to an element within the block (Yadav, FIG. 3B and paras. [0096]-[0097], cited above: the LINE RANGE can correspond to a linear portion of an item of identifying information of a tile of an image, mapped to a line of the image as an element within the portion of the image content stored in the buffer block);

wherein the linear portion includes at least a portion of pixel coordinates of pixels in the image content and the randomization portion includes identification information of the tile other than the at least portion of pixel coordinates (Yadav, FIG. 3B and paras. [0096]-[0097], cited above: a pixel is located in an image by a line value and a column value, so the line value is a portion of the pixel coordinates, and the LINE RANGE can correspond to a linear portion mapped to lines of the image including at least a portion of the pixel coordinates; the OFFSET, an offset value into the buffer as the address space, can correspond to the randomization portion, which includes identification information of the tile (image block) other than the line value as the at least portion of pixel coordinates).

Because Baker discloses a shade space texture having shadels as tiles and texels within the shadels, and pixels in an image can correspond to texels in a texture, the combination of Baker and Yadav would have the linear portion include at least a portion of texture coordinates of texels in the shade space texture and the randomization portion include identification information of the tile other than the at least portion of texture coordinates.

Before the effective filing date of the claimed invention, it would have been obvious for a person of ordinary skill in the art to combine Baker and Yadav. The suggestion/motivation would have been to allow the GPU to render graphics data at a relatively high throughput rate, as suggested by Yadav (see Yadav, para. [0059]).

However, neither Baker nor Yadav expressly discloses wherein the randomization portion includes a mip level. On the other hand, Thayer discloses mapping a randomization portion of an item of identifying information of a texture to a random block of an address space (Thayer, col. 3, lines 10-14, disclosing storing the pages of a mipmap at random locations within the system memory, frame buffer memory, or texture memory; lines 41-45, disclosing that, because the mipmap page base address is generated in response only to the mipmap page number, the mipmap pages may be stored at random locations within the computer system. The mipmap page base address can correspond to a randomization portion of the mipmap as an item of identifying information of a texture mapped to a random block of a memory as an address space), wherein the randomization portion includes a mip level (Thayer, FIG. 7, showing step 704 generating the mipmap page base address with the mipmap page number as input; col. 2, lines 42-52, disclosing that pages of a mipmap are used to represent one texture at varying levels of detail; col. 3, lines 41-45, cited above. The mipmap page base address can correspond to the randomization portion, which can include a mipmap page number indicating a mip level).

Before the effective filing date of the claimed invention, it would have been obvious for a person of ordinary skill in the art to combine Baker in view of Yadav with Thayer. The suggestion/motivation would have been to achieve improved mipmapped texture mapping performance in computer graphics systems, as suggested by Thayer (see Thayer, Abstract).

[Image: media_image2.png]

Regarding claim 3, the combination of Baker, Yadav, and Thayer discloses the method of claim 1, wherein the linear portion includes bits of texture coordinates of the tile (Baker, p. 5, col. 1, 4th-5th paras. and Figure 6, cited above: the occupancy field of a remap buffer entry can correspond to the linear portion of the item of identifying information and includes bits for each of the chunks of shadels, where each chunk has texture coordinates of the chunk as the tile).

Regarding claim 5, the combination of Baker, Yadav, and Thayer discloses the method of claim 1, wherein the decoupled shading operations include a shade space shading operation that shades a texel of the tile (Baker, Figure 1, showing a rendered castle model and associated virtualized shadel sheet; Figure 8, showing the pipeline of the pixel shader; p. 3, col. 1, Sec. 3, 1st para.; p. 6, col. 2, last para., disclosing allocating shadels from the shadel texel storage buffer. The shadels in the shadel texel storage buffer can correspond to texels of the tile and are shaded in shade space as part of the decoupled shading operations).
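The claim 1 scheme, as mapped above, splits a tile's identifying information into a randomization portion (object identifier, mip level, tile coordinates) that selects a pseudo-random block, and a linear portion (low texture-coordinate bits) that selects the element within it, keeping neighboring texels together. A minimal sketch, with assumed block and table sizes and Python's built-in `hash` standing in for whatever hash function a real implementation would use:

```python
BLOCK_SIZE = 64    # elements per block (assumed, power of two)
NUM_BLOCKS = 1024  # blocks in the address space (assumed)

def hybrid_address(object_id: int, mip_level: int, u: int, v: int) -> int:
    """Map a texel's identifying information to a flat address.

    The linear portion (low 3 bits of each coordinate) picks the element
    within a block; the randomization portion (object id, mip level, and
    the remaining coordinate bits) is hashed to pick the block itself.
    """
    linear = ((v & 0x7) << 3) | (u & 0x7)   # element index within the block
    tile_u, tile_v = u >> 3, v >> 3         # randomization bits of the coords
    key = (object_id, mip_level, tile_u, tile_v)
    block = hash(key) % NUM_BLOCKS          # pseudo-random block selection
    return block * BLOCK_SIZE + linear
```

With this layout, two horizontally adjacent texels of the same tile land in the same block at consecutive element slots, which is the access-locality property the claim is after.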
Regarding claim 6, the combination of Baker, Yadav, and Thayer discloses the method of claim 1, wherein the decoupled shading operations include a reconstruction operation that generates a final image based on a texel of the tile (Baker, Figure 3, showing the raster and shade frames: the raster frame collects the scene and prepares it for the GPU, then dispatches it to the shadel mark prepass; the shadel mark prepass marks which shadels need to be rasterized and sends this data to the shade frame loop; once the shadels have been processed in the shade frame loop, they are sent back to the render frame for rasterization; and the shade frame includes compute layers for the objects. Figure 8, showing the pipeline of the pixel shader; Figure 9, showing a final rendered image where the skin shade uses multiple layers; p. 3, col. 1, Sec. 3, 1st para.; p. 6, col. 2, last para. The shading with multiple layers can correspond to a reconstruction operation that generates a final image based on the shadel in the shadel texel storage buffer corresponding to a texel of the tile from each layer's shading).

Regarding claim 7, the combination of Baker, Yadav, and Thayer discloses the method of claim 1, wherein mapping the linear portion to the element is performed in a linear manner (Baker, p. 5, col. 1, 5th para., disclosing that the address of the shadel chunk is calculated by Formula 1, ShadelChunkStartIndex + countbits(~(ShadelSubIndex − 1) & OccupancyField), which gives the location of the shadel chunk. The OccupancyField corresponds to the linear portion, and the mapping using Formula 1 is performed in a linear manner with addition (+). Also, Yadav, FIG. 3B, showing the LINE RANGE corresponding to the line portions of an image, which is a linear mapping).

Regarding claim 8, the combination of Baker, Yadav, and Thayer discloses the method of claim 7, wherein mapping the linear portion in the linear manner includes adding a value derived from the linear portion to an address of the block (Baker, p. 5, col. 1, 5th para., Formula 1, cited above: the mapping adds a value derived from the linear portion, countbits(~(ShadelSubIndex − 1) & OccupancyField), to an address of the block, ShadelChunkStartIndex).

Regarding claim 10, it recites limitations similar to those of claim 1 but in system form. The rationale of the claim 1 rejection is applied to reject claim 10. In addition, Baker discloses a CPU, GPU, and GPU memory (see Baker, Abstract, p. 8, col. 2, Sec. 4.1, 4th para.).

Regarding claims 12, 14, 15, and 16, they recite limitations similar to those of claims 3, 5, 6, and 7, respectively, but in system form. The rationales of the claim 3, 5, 6, and 7 rejections are applied to reject claims 12, 14, 15, and 16, respectively. In addition, Baker discloses a CPU, GPU, and GPU memory (see Baker, Abstract, p. 8, col. 2, Sec. 4.1, 4th para.).

Regarding claim 17, it recites limitations similar to those of claim 8 but in system form. The rationale of the claim 8 rejection is applied to reject claim 17. In addition, Baker discloses a CPU, system memory, GPU, and GPU memory (see Baker, Abstract, p. 6, col. 2, 1st para., p. 8, col. 2, Sec. 4.1, 4th para.).

Regarding claims 19, 21, 22, 23, and 24, they recite limitations similar to those of claims 1, 3, 5, 6, and 7, respectively, but in non-transitory computer-readable medium form. The rationales of the corresponding rejections are applied. In addition, Baker discloses a CPU, GPU, and GPU memory (see Baker, Abstract, p. 8, col. 2, Sec. 4.1, 4th para.).
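Formula 1, as quoted in the claim 7 and 8 rejections above, computes a shadel chunk's address by adding a popcount over the occupancy field to the chunk block's start index. A direct transcription follows; it assumes ShadelSubIndex is the chunk's one-hot bit mask within the 64-bit occupancy field, which is how the countbits expression reads:

```python
def popcount(x: int) -> int:
    """countbits() in the quoted formula: number of set bits."""
    return bin(x).count("1")

def shadel_chunk_address(start_index: int, sub_index_mask: int, occupancy: int) -> int:
    """Formula 1: ShadelChunkStartIndex + countbits(~(ShadelSubIndex - 1) & OccupancyField).

    sub_index_mask is assumed to be the one-hot bit for the chunk. The AND
    with the (non-negative) occupancy field keeps the result non-negative
    despite Python's unbounded ~ operator.
    """
    return start_index + popcount(~(sub_index_mask - 1) & occupancy)
```

For example, with occupancy 0b1011 and a block starting at index 100, the chunk whose one-hot mask is `1 << 1` lands at 100 plus the two occupied bits at or above bit 1, i.e. address 102.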
Regarding claim 25, it recites limitations similar to those of claim 8 but in non-transitory computer-readable medium form. The rationale of the claim 8 rejection is applied to reject claim 25. In addition, Baker discloses a CPU, GPU, and GPU memory (see Baker, Abstract, p. 8, col. 2, Sec. 4.1, 4th para.).

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Baker, Yadav, and Thayer as applied to claims 1 and 10 above, and further in view of US Patent Publication No. 2020/0136971 A1 to Cohen.

Regarding claim 9, the combination of Baker, Yadav, and Thayer discloses the method of claim 1. However, none of Baker, Yadav, and Thayer expressly discloses performing an additional lookup in response to a collision. On the other hand, Cohen discloses performing an additional lookup in response to a collision (Cohen, para. [0018], disclosing that in the case of a collision the linked list is traversed to search for values; para. [0024], disclosing that in the event of a collision the relevant tables are accessed, indicating that an additional lookup can be performed in response to a collision). Before the effective filing date of the claimed invention, it would have been obvious for a person of ordinary skill in the art to combine Cohen with the combination of Baker, Yadav, and Thayer. The suggestion/motivation would have been to reduce memory access latency, as suggested by Cohen (see Cohen, paras. [0023]-[0024]).

Regarding claim 18, it recites limitations similar to those of claim 9 but in system form. The rationale of the claim 9 rejection is applied to reject claim 18. In addition, Baker discloses a CPU, GPU, and GPU memory (see Baker, Abstract, p. 8, col. 2, Sec. 4.1, 4th para.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAIXIA DU, whose telephone number is (571) 270-5646. The examiner can normally be reached Monday - Friday, 8:00 am - 4:00 pm.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAIXIA DU/
Primary Examiner, Art Unit 2611
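The additional-lookup-on-collision behavior attributed to Cohen in the claim 9 rejection above is classic chained hashing. A generic sketch of the idea (not Cohen's actual implementation): the first probe selects a bucket, and colliding entries force additional lookups along the bucket's chain.

```python
class ChainedHashTable:
    """Minimal chained hash table: colliding keys share a bucket, and a
    lookup walks the bucket's list to resolve the collision."""

    def __init__(self, num_buckets: int = 8):
        self.buckets = [[] for _ in range(num_buckets)]

    def insert(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:            # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))  # collision or empty slot: chain it

    def lookup(self, key):
        # First probe picks the bucket; each further comparison is the
        # "additional lookup" performed in response to a collision.
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        return None
```

Constructing the table with a single bucket forces every insertion to collide, which makes the chain traversal easy to observe.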

Prosecution Timeline

Dec 20, 2023
Application Filed
Jul 09, 2025
Non-Final Rejection — §103
Sep 29, 2025
Response Filed
Nov 17, 2025
Final Rejection — §103
Jan 29, 2026
Response after Non-Final Action
Feb 06, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Mar 11, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602857
GENERATING IMAGE DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12597204
MODEL GENERATING DEVICE, MODEL GENERATING SYSTEM, MODEL GENERATING METHOD, AND PROGRAM
2y 5m to grant Granted Apr 07, 2026
Patent 12573137
System and Method for Unsupervised and Autonomous 4D Dynamic Scene and Objects Interpretation, Segmentation, 3D Reconstruction, and Streaming
2y 5m to grant Granted Mar 10, 2026
Patent 12561882
IMAGE RENDERING METHOD AND APPARATUS
2y 5m to grant Granted Feb 24, 2026
Patent 12555304
RAY TRACING USING INDICATIONS OF RE-ENTRY POINTS IN A HIERARCHICAL ACCELERATION STRUCTURE
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86%; 99% with interview (+18.0%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 553 resolved cases by this examiner. Grant probability derived from career allow rate.
