Prosecution Insights
Last updated: April 19, 2026
Application No. 17/561,652

DISTRIBUTED COMPRESSION/DECOMPRESSION SYSTEM

Non-Final OA: §102, §103
Filed: Dec 23, 2021
Examiner: RUIZ, ARACELIS
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 3 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 87%, above average (709 granted / 814 resolved; +32.1% vs TC avg)
Interview Lift: +12.5% among resolved cases with interview (moderate)
Avg Prosecution: 2y 7m typical timeline
Career History: 836 total applications across all art units; 22 currently pending
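As a sanity check, the headline figures above can be reproduced from the raw counts. The sketch below is illustrative only; the rounding and the additive, capped interview-lift model are assumptions, not the report's actual methodology.

```python
# Reproduce the headline examiner statistics from the raw counts above.
# Assumption: the "+12.5% interview lift" is additive in percentage
# points and capped at 100%; the report's exact model is unspecified.
granted, resolved = 709, 814

allow_rate = 100.0 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")        # ~87%

# "+32.1% vs TC avg" implies the Tech Center baseline allow rate:
tc_baseline = allow_rate - 32.1
print(f"Implied TC average allow rate: {tc_baseline:.1f}%")

with_interview = min(allow_rate + 12.5, 100.0)
print(f"Estimated allow rate with interview: {with_interview:.1f}%")  # ~99%
```

The report's 99% "with interview" figure is consistent with this additive model (87.1 + 12.5 ≈ 99.6, reported as 99%).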

Statute-Specific Performance

§101: 5.9% (-34.1% vs TC avg)
§103: 55.1% (+15.1% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 814 resolved cases.
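A quick consistency check on the chart data: if each "vs TC avg" figure is a percentage-point delta, all four statutes imply the same Tech Center baseline, presumably the single average line drawn in the original chart. The interpretation of the deltas as percentage-point differences is an assumption.

```python
# Cross-check the statute-specific chart: examiner rejection-rate share
# per statute paired with the reported delta vs the Tech Center average.
# Assumption: deltas are percentage-point differences.
rates = {
    "101": (5.9, -34.1),
    "103": (55.1, +15.1),
    "102": (16.5, -23.5),
    "112": (9.6, -30.4),
}
baselines = {s: rate - delta for s, (rate, delta) in rates.items()}
for statute, baseline in baselines.items():
    print(f"§{statute}: implied TC baseline ~ {baseline:.1f}%")
```

Every pair implies a baseline near 40%, which suggests the chart compared each statute against one common Tech Center average estimate.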

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Claims 1-3, 6-7, 9 and 12-19 are present for examination. Claims 1-2 and 16 have been amended. Claims 4-5, 8, 10-11 and 20 have been cancelled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Amendment

Applicant's request for reconsideration of the finality of the rejection of the last Office action is persuasive and, therefore, the finality of that action is withdrawn.

Allowable Subject Matter

The indicated allowability of claims 9 and 12-15 is withdrawn in view of the newly discovered reference to Appu et al. (US 10,803,650). Rejections based on the newly cited reference follow. Applicant's submission of an information disclosure statement under 37 CFR 1.97(c) with the timing fee set forth in 37 CFR 1.17(p) on 10/24/2025 prompted the new ground(s) of rejection presented in this Office action.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/24/2025 was filed after the mailing date of the Final Office Action on 09/05/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3, 9 and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Appu et al. (US 10,803,650).

With respect to claim 1, Appu et al. teaches a shared L2 (level two) cache device shared by multiple devices (see Fig. 6 and column 24, lines 15-19; render cache 624 is shared by clusters 601A-610B), the shared L2 cache device to store both compressed and uncompressed data (see column 24, lines 42-44; data can be stored in render cache 624 in a compressed format or can be decompressed before being written to the render cache); an L1 (level one) cache device (see Fig. 2C and column 9, lines 59-63 and column 23, lines 45-47; L1 cache 248 in graphics multiprocessors); and a compression module between the shared L2 cache device and the L1 cache device, the compression module to selectively perform compression of write data when the write data is moved from the L1 cache device to the shared L2 cache device (see column 24, lines 42-44), and to selectively perform decompression of read data when the read data is moved from the shared L2 cache device to the L1 cache device (see column 24, lines 49-53; whether data is stored in a compressed or uncompressed format at a given location in memory may be determined based on whether graphics processor components that will consume the data from a given memory unit support reading data in a compressed format (i.e., if the L1 cache at the graphics multiprocessor only stores uncompressed data, data will be uncompressed before being written)), the compression module includes a compression bypass path to optionally move uncompressed data between the shared L2 cache device and the L1 cache device (see Fig. 6 and column 24, lines 44-56; memory bus 629 moves uncompressed data from cache 424 to caches in graphics processors).

With respect to claim 3, Appu et al. teaches wherein the cache device comprises a shared L2 (level two) cache shared by multiple computation units (see Fig. 6 and column 24, lines 15-19).

With respect to claim 9, Appu et al. teaches a central processing unit to execute general operations (see Fig. 2A and column 12, lines 1-4; one or more desktop or server central processing units (CPUs) including multi-core CPUs, one or more parallel processing units, such as the parallel processing unit 202); a graphics processor (see Fig. 6 and column 23, lines 45-47; graphics multiprocessor clusters 610A-610B include graphics processing and computational logic, such as the logic illustrated in Fig. 2C) including multiple graphics components having associated L1 (level one) caches (see Fig. 2C and column 9, lines 59-63 and column 23, lines 45-47; graphics multiprocessor 234 includes an internal cache memory to perform load and store operations… the graphics multiprocessor 234 uses a cache memory (e.g., L1 cache 248)); and a shared L2 (level two) cache coupled to the L1 caches (see Fig. 2C and column 9, lines 59-63 and column 23, lines 45-47; also Fig. 6 and column 24, lines 15-19; render cache 624 is shared by clusters 601A-610B), the shared L2 cache device to store both compressed and uncompressed data (see column 24, lines 42-44); a compression module between a first L1 cache of the associated L1 caches and the shared L2 cache, the compression module to selectively perform compression of write data when the write data is moved from the first L1 cache device to the shared L2 cache device (see column 24, lines 42-44), and to selectively perform decompression of read data when the read data is moved from the shared L2 cache device to the first L1 cache device (see column 24, lines 49-53), the compression module includes a compression bypass path to optionally move uncompressed data between the shared L2 cache device and the first L1 cache device (see Fig. 6 and column 24, lines 44-56; memory bus 629 moves uncompressed data from render cache 424 to L1 caches in graphics processors).

With respect to claim 16, Appu et al. teaches receiving data at a compression module between an L2 (level two) cache device shared by multiple devices (see Fig. 6 and column 24, lines 15-19), the shared L2 cache device to store both compressed and uncompressed data (see column 24, lines 42-44); selectively performing compression with the compression module when the data received is write data to move from the L1 cache device to the shared L2 unit (see column 24, lines 42-44), including optionally moving uncompressed data between the shared L2 cache device and the L1 device (see Fig. 6 and column 24, lines 44-56; memory bus 629 moves uncompressed data between render cache 424 and L1 caches in graphics processors); and selectively performing decompression with the compression module when the data received is read data to move from the L1 cache unit to the shared L2 cache device (see column 24, lines 49-53), including optionally moving uncompressed data between the L1 cache device and the shared L2 device (see Fig. 6 and column 24, lines 44-56).

With respect to claim 17, Appu et al. teaches wherein the shared L2 cache device comprises a cache shared by multiple computation units (see Fig. 6 and column 24, lines 15-19) and the L1 cache device comprises a local memory device of a graphics processor (see Fig. 2C and column 9, lines 59-63 and column 23, lines 45-47; L1 cache 248 in graphics multiprocessors).

With respect to claim 18, Appu et al. teaches wherein performing compression comprises: determining whether the write data is to be stored as compressed write data or uncompressed write data (see column 24, lines 42-53; data can be stored in render cache 624 in a compressed format or can be decompressed before being written to the render cache); and bypassing the compression module when the write data is to be stored as uncompressed write data to avoid performing compression; else, performing compression with the compression module when the write data is to be stored as compressed write data (see column 24, lines 42-53; whether data is stored in a compressed or uncompressed format at a given location in memory may be determined based on whether graphics processor components that will consume the data from a given memory unit support reading data in a compressed format (i.e., data may be compressed/decompressed when stored in cache)).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 7 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Appu et al. (US 10,803,650) as applied to claims 1, 3 and 9 above, and further in view of Wegener (US 2013/0262538).

With respect to claim 2, Appu et al. does not explicitly teach wherein the shared L2 cache device is to store compressed data and an associated compression control surface (CCS) to indicate compression for the compressed data, wherein the compression module comprises a dedicated CCS cache to store CCS information for decompression on a read from the L1 cache device, and to store CCS information for compression on a write to the L1 cache device. However, Wegener teaches wherein, during data reads from lower level cache 1406, the configurable decompressor 720 decodes control information from each compressed packet header and decompresses integer or floating-point values using decompression operations in accordance with the control parameters, and the decompressed output samples are output to higher level cache 1404… Core 1402 provides control parameters for the configurable compressor to compress integer or floating-point data, to compress in a lossless or lossy mode, to specify desired compressed block size, and other compression-specific parameters during data writes to lower level cache 1406. The configurable compressor 620 may include the control information in the headers of compressed packets stored in lower level cache 1406 (see paragraph 55). It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor taught by Appu et al. to include the above mentioned to better meet the requirements of higher speed data transfer and reduced memory utilization (see Wegener, paragraphs 11 and 57).

With respect to claim 7, Appu et al. teaches an L1 (level one) cache (see Fig. 2C and column 9, lines 59-63 and column 23, lines 45-47; L1 cache 248 in graphics multiprocessors). Appu et al. does not explicitly teach wherein the compression module comprises a first compression module, and further comprising a second compression module between the L1 cache and the shared L2 cache, the second compression module to perform compression of write data when the write data is moved from the L1 cache to the shared L2 cache, and to perform decompression of read data when the read data is moved from the shared L2 cache to the L1 cache. However, Wegener teaches that cores 1402a-1402d request compressed accesses (reads or writes) to the lower level cache 1406 via cache controllers 1408, which are comprised of a configurable compressor 620 and a configurable decompressor 720… During data reads from lower level cache 1406, the configurable decompressor 720 decodes control information from each compressed packet header and decompresses integer or floating-point values using decompression operations in accordance with the control parameters; the decompressed output samples are output to higher level cache 1404. Likewise, for transfers to/from off-chip memory or storage, the DMA, I/O or memory controller may include a configurable compressor 620 and a configurable decompressor 720 (see paragraph 55). Also, Wegener teaches wherein memory writes from a faster memory to a slower memory will usually be compressed, while memory reads from a slower memory to a faster memory will usually be decompressed (see paragraphs 57 and 93). It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor taught by Appu et al. to include the above mentioned to better meet the requirements of higher speed data transfer and reduced memory utilization (see Wegener, paragraphs 11 and 57).

With respect to claim 13, Appu et al. teaches wherein the compression module comprises a first compression module (see Fig. 6 and 44-47; compression/decompression unit 628), and further comprising a memory device (see Fig. 2C and column 9, lines 59-63 and column 23, lines 45-47; L1 cache 248 in graphics multiprocessors). Appu et al. does not explicitly teach a second compression module between the L1 cache and the shared L2 cache, the second compression module to perform compression of write data when the write data is moved from the L1 cache to the shared L2 cache, and to perform decompression of read data when the read data is moved from the shared L2 cache to the L1 cache. However, Wegener teaches the cache controllers 1408 with configurable compressor 620 and configurable decompressor 720 as discussed for claim 7 above (see paragraphs 55, 57 and 93). It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the system taught by Appu et al. to include the above mentioned to better meet the requirements of higher speed data transfer and reduced memory utilization (see Wegener, paragraphs 11 and 57).

With respect to claim 14, Appu et al. teaches wherein the second compression module includes a compression bypass path to optionally move uncompressed data between the L1 cache device and the shared L2 cache (see Fig. 6 and column 24, lines 44-56; memory bus 629 moves uncompressed data from cache 424 to caches in graphics processors).

With respect to claim 15, Appu et al. does not explicitly teach wherein the memory device is to store compressed data and an associated compression control surface (CCS) to indicate compression for the compressed data, wherein the compression module comprises a dedicated CCS cache to store CCS information for decompression on a read from the memory device, and to store CCS information for compression on a write to the memory device. However, Wegener teaches the compressed packet headers carrying control information and the control parameters provided by core 1402 as discussed for claim 2 above (see paragraph 55). It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor taught by Appu et al. to include the above mentioned to better meet the requirements of higher speed data transfer and reduced memory utilization (see Wegener, paragraphs 11 and 57).

Claims 6, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Appu et al. (US 10,803,650) as applied to claims 1, 3, 9 and 16-17 above, and further in view of Iourcha et al. (US 2016/0300320).

With respect to claim 6, Appu et al. teaches a second L1 (level one) cache coupled to the shared L2 cache (see Fig. 2C and column 9, lines 59-63 and column 23, lines 45-47; L1 cache 248 in graphics multiprocessors; also Fig. 6 and column 24, lines 15-19; render cache 624 is shared by clusters 601A-610B), wherein the second L1 cache is to store uncompressed data (see column 24, lines 49-53; whether data is stored in a compressed or uncompressed format at a given location in memory may be determined based on whether graphics processor components that will consume the data from a given memory unit support reading data in a compressed format (i.e., if the L1 cache at the graphics multiprocessor only stores uncompressed data, data will be uncompressed before being written)). Appu et al. does not explicitly teach moving uncompressed data between the shared L2 cache and the second L1 cache. However, Iourcha et al. teaches that the first shader may request the block from the cache and convey the virtual address with the request (block 720); next, the cache may determine if an uncompressed version of the block is stored in the cache (conditional block 725); if the uncompressed version of the block is stored in the cache, the first shader may receive the uncompressed version of the block from the cache (i.e., decompression is bypassed) (see paragraph 68). It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the processor taught by Appu et al. to include the above mentioned to improve performance of the device (see Iourcha, paragraphs 8 and 11).

With respect to claim 12, Appu et al. teaches a second L1 (level one) cache coupled to the shared L2 cache, wherein the second L1 cache is to store uncompressed data, and Iourcha et al. teaches moving uncompressed data between the shared L2 cache and the second L1 cache, as discussed for claim 6 above. It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the system taught by Appu et al. to include the above mentioned to improve performance of the device (see Iourcha, paragraphs 8 and 11).

With respect to claim 19, Appu et al. does not explicitly teach wherein performing decompression comprises: determining whether the read data is compressed read data or uncompressed read data; and bypassing the compression module when the read data is uncompressed read data to avoid performing decompression; else, performing decompression with the compression module when the read data is compressed read data. However, Iourcha teaches wherein the cache may determine if an uncompressed version of the block is stored in the cache (conditional block 725); if the uncompressed version of the block is stored in the cache, the first shader may receive the uncompressed version of the block from the cache and process the block (block 770) (see paragraph 68); if the uncompressed version of the block is not stored in the cache, a second shader of the plurality of shaders may be initiated as a decompressing shader (block 730) (see paragraphs 69-72). It would have been obvious to a person having ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to have modified the method taught by Appu et al. to include the above mentioned to improve performance of the device (see Iourcha, paragraphs 8 and 11).
Response to Arguments

Applicant's arguments, see pages 7-8, filed 12/02/2025, with respect to the rejection(s) of claim(s) 1-3, 6-7, 9 and 16-19 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Appu et al. (US 10,803,650). Applicant's submission of an information disclosure statement under 37 CFR 1.97(c) with the timing fee set forth in 37 CFR 1.17(p) on 10/24/2025 prompted the new ground(s) of rejection presented in this Office action.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wegener (US 2013/0262538) teaches compressing data when it is transferred from a higher level cache to a lower level cache, and decompressing data when it is transferred from the lower level cache to the higher level cache (see paragraphs 55, 57 and 93). Dye et al. (US 7,190,284) teaches wherein switch logic 261 controls read and write data to and from the parallel compression and decompression unit 251 and the compression control unit 281; in addition, for data that is not to be compressed or decompressed (normal or bypass data), the switch logic 261 controls an interface directly to the memory interface logic 221 (see column 14, lines 36-48). Brink et al. (US 7,606,954) teaches wherein data from the host, arriving over host bus 401 at the host disk adapter 408, is stored in an outbound data buffer 428 before being sent, under control of a compressor controller 414, either to the compression engine 416 or, bypassing compression, directly to an outbound data buffer 426 (see column 4, lines 45-51).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARACELIS RUIZ, whose telephone number is (571) 270-1038. The examiner can normally be reached Monday-Friday, 11:00 am - 7:30 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Reginald G. Bragdon, can be reached at (571) 272-4204. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ARACELIS RUIZ/
Primary Examiner, Art Unit 2139
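To make the disputed architecture concrete: claim 1, as characterized in the rejection above, recites a compression module between an L1 cache and a shared L2 cache that selectively compresses writes, selectively decompresses reads, and provides a bypass path for data moved uncompressed. The following is a minimal illustrative model only; all names are hypothetical, and it sketches the claim structure, not the applicant's implementation or any cited reference's design.

```python
# Minimal sketch of the claimed data path: a compression module sits
# between an L1 cache and a shared L2 cache, selectively compressing
# writes, decompressing reads, and bypassing both for data that stays
# uncompressed. Hypothetical names; illustrates claim structure only.
import zlib

class CompressionModule:
    def __init__(self):
        self.l2 = {}  # shared L2 model: address -> (payload, compressed?)

    def write_from_l1(self, addr: int, data: bytes, compress: bool) -> None:
        if compress:
            self.l2[addr] = (zlib.compress(data), True)
        else:
            # compression bypass path: data moves uncompressed
            self.l2[addr] = (data, False)

    def read_to_l1(self, addr: int) -> bytes:
        payload, compressed = self.l2[addr]
        # selective decompression; bypass if stored uncompressed
        return zlib.decompress(payload) if compressed else payload

mod = CompressionModule()
mod.write_from_l1(0x100, b"A" * 64, compress=True)   # compressed write
mod.write_from_l1(0x140, b"B" * 64, compress=False)  # bypass write
assert mod.read_to_l1(0x100) == b"A" * 64            # decompressed read
assert mod.read_to_l1(0x140) == b"B" * 64            # bypass read
```

In the rejection's mapping, the `compress` decision corresponds to whether the consuming component supports reading compressed data (Appu, column 24), and the bypass branch corresponds to the claimed compression bypass path.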

Prosecution Timeline

Dec 23, 2021: Application Filed
May 13, 2022: Response after Non-Final Action
Feb 21, 2025: Non-Final Rejection (§102, §103)
May 27, 2025: Response Filed
Sep 03, 2025: Final Rejection (§102, §103)
Nov 06, 2025: Response after Non-Final Action
Dec 02, 2025: Response after Non-Final Action
Jan 14, 2026: Non-Final Rejection (§102, §103) (current)

Precedent Cases

Similar-technology applications granted by this examiner

Patent 12554649: Profile Guided Memory Trimming (granted Feb 17, 2026; 2y 5m to grant)
Patent 12536104: Using Special Data Storage Parameters When Storing Cold Stream Data in a Data Storage Device (granted Jan 27, 2026; 2y 5m to grant)
Patent 12524353: Cancelling Cache Allocation Transactions (granted Jan 13, 2026; 2y 5m to grant)
Patent 12517833: Electronic Devices, Including Memory Devices, and Operating Methods Thereof (granted Jan 06, 2026; 2y 5m to grant)
Patent 12499051: Method of Determining a Cache Size Using an Estimation of a Number of Requests for Cache Memory and a Size of the Requests (granted Dec 16, 2025; 2y 5m to grant)
Based on the examiner's 5 most recent grants; study what changed in each to get past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 99% (+12.5%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 814 resolved cases by this examiner; grant probability derived from career allow rate.
