Prosecution Insights
Last updated: April 19, 2026
Application No. 18/544,166

ACCELERATED DATA DECOMPRESSION USING PARALLEL PROCESSORS

Final Rejection — §102, §103
Filed: Dec 18, 2023
Examiner: COCHRAN, BRIANNA RENAE
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 2 (Final)
Grant Probability: 40% (Moderate)
Expected OA Rounds: 3-4
To Grant: 2y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 40% (2 granted / 5 resolved; -22.0% vs TC avg)
Interview Lift: -40.0% (minimal lift; based on resolved cases with interview)
Typical Timeline: 2y 3m avg prosecution
Career History: 34 total applications across all art units; 29 currently pending

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)
Based on career data from 5 resolved cases; Tech Center average estimates shown for comparison.

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on December 18, 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

This is in response to applicant's amendment/response filed on 01/05/2026, which has been entered and made of record. Applicant's amendments regarding claim objections are persuasive. Claim objections for claims 1 and 13 have been withdrawn. Applicant's amendments regarding 35 U.S.C. 112(b) for claims 2 and 8 are persuasive. Claim rejections under 35 U.S.C. 112(b) have been withdrawn. Applicant's arguments regarding claim rejections under 35 U.S.C. 102 and 103 have been fully considered but they are not persuasive.

Applicant argues: Claim 1 recites "transcod[ing] . . . data into transcoded data in a supported compression format" and "decompress[ing] the transcoded data on fixed-function hardware ... customized for decompression of the supported compression format." The Office Action cites a variety of disjoint passages of Kothandaraman, including 145 to show dedicated or fixed-function hardware, 302-04 to show decompression, and 35, 145, and 222 to show that the dedicated or fixed-function hardware is customized for the supported decompression format. Office Action at p. 4.
However, the Office Action's mapping to the claimed "supported compression format" is internally inconsistent. For the "transcoding" element, the Office Action cites Kothandaraman's discussion of a video codec engine transcoding between media encoding formats such as MPEG or H.264. Office Action at p. 4 (citing Kothandaraman at 105). Under this interpretation, the Office Action necessarily maps the claimed "supported compression format" to one of these media encoding formats, so the Office Action would then have to show fixed-function hardware "customized for decompression of [one of these media encoding formats]." However, the Office Action abandons its first mapping and instead asserts that the dedicated or fixed-function hardware maps to Kothandaraman's systolic array 612, which is described as "hardware to accelerate sparse matrix operations." Office Action at p. 4 (citing Kothandaraman at 145). This shift in mapping means the Office Action is no longer referring to the same "supported compression format" it relied on in the preceding element, and therefore fails to show that Kothandaraman discloses hardware customized for decompression of the same compression format the claimed data was transcoded into.

During the December 9, 2025 interview, the Examiner suggested this mapping might not be inconsistent because Kothandaraman describes dictionary encoding and LZ encoding, which are part of the media encoding format. However, this assertion is not supported by Kothandaraman's disclosure and actually introduces a new inconsistency. First, Applicant has found no indication in Kothandaraman that dictionary or LZ encoding is part of the video codec engine.
For example, Applicant has found no cross-reference between 35-38 (which describe dictionary encoding and entropy encoding) and 105, no statement that Kothandaraman's video codec engine uses LZ, no statement that dictionary encoding is part of MPEG/H.264, and no statement that LZ decoding is required to decompress those formats. As such, the proposed interpretation is not supported by Kothandaraman's disclosure. Even if dictionary/LZ encoding were present in Kothandaraman's video codec engine, that would not solve the inconsistent mapping problem. Under the proposed interpretation, the "supported compression format" maps to the media formats (MPEG/H.264/VP9) described in 105. Kothandaraman's systolic array of 145 is not described as decompressing those formats, so remapping to dictionary/LZ encoding does not solve the internal inconsistency. Applicant has been unable to locate a corresponding disclosure elsewhere in the cited references. As such, claim 1 and its dependent claims should be found patentable over the cited references. For analogous reasons that apply mutatis mutandis, independent claims 13 and 19 should also be found patentable over the cited references.

Examiner respectfully disagrees. Kothandaraman teaches several supported compression formats that perform lossless data compression. (Dictionary coding that includes LZ77, Para. 0037-0038, and Entropy coding that includes Huffman coding, Para. 0037 and 0038-0039.) Kothandaraman teaches a GPU that can compress/decompress data. (Specifically, the Systolic Array 612 can generate output in compressed/decompressed formats, Para. 0145, and can contain hardware to support machine-specific lossless data compression formats, Para. 0146. The Systolic Array 612 is a part of the Execution Unit 600, Para. 0142. Execution Units are within the graphics parallel engines, Para. 0068, which are a part of the graphics processors, Para. 0056. Additionally, several Execution Units, Para. 0128, support single instruction multiple data (SIMD) execution used in the decompression pipeline for Dictionary and Entropy decoding, Para. 0227, 0238, and 0302-0304.) Kothandaraman teaches transcoding data to, from, or between several data formats. (The data is transcoded using a video codec engine 306 which is in a graphics processor, Para. 0105. One of the listed data formats is MPEG. Both Huffman Coding and LZ77 can compress MPEG data.) Thus, Kothandaraman teaches transcode, on general-purpose hardware of the one or more parallel processors, data into transcoded data in a supported compression format.

Applicant argues claim 1 recites "fixed-function hardware ... customized for decompression of the supported compression format." As noted above, the Office Action cites Kothandaraman at 145 to show dedicated or fixed-function hardware, 302-04 to show decompression, and 35, 145, and 222 to show that the dedicated or fixed-function hardware is customized for the supported decompression format. Office Action at p. 4. In citing 145, the Office Action relies on a passage that states that the output of the matrix operations "can be generated in a compressed (dense) format, with associated decompression or decoding metadata." Kothandaraman 145. However, the cited passage does not state that the hardware performing the matrix operations is "customized for decompression," let alone "customized for decompression of [any of the media encoding formats cited in 105]." To the contrary, 145 describes systolic array 612 as hardware that accelerates sparse matrix operations, not decompression. The inclusion of metadata to enable future decompression does not transform a matrix-acceleration array into decompression hardware. In fact, the cited matrix operations are characteristic of general-purpose computing, and nothing in 145 suggests that this hardware is "fixed-function hardware ... customized for decompression of the supported compression format."
During the December 9, 2025 interview, the Examiner suggested that hardware could be considered "customized for decompression" under the broadest reasonable interpretation if the hardware is simply programmed to perform decompression. Initially, Applicant respectfully disagrees that a person of ordinary skill in the art would equate fixed-function hardware with general-purpose hardware executing program instructions, particularly where the claim separately recites "general-purpose hardware" and "fixed-function hardware," and where the Specification consistently distinguishes between the two. This demands more than simply a change in the software executing on the same general-purpose hardware.

The Office Action also cites 35 and 222 to show "Processors/Encoders/Decoders/Hardware," but neither passage describes fixed-function hardware customized for decompression. Paragraph 35 introduces entropy coders for GPUs and does not address decompression at all. Paragraph 222 references entropy decoders, but entropy decoding simply recovers uncompressed symbols or tokens and does not itself constitute decompression. Even if it did, that passage does not describe any fixed-function hardware customized for decompression of the supported compression format. Applicant has been unable to locate a corresponding disclosure elsewhere in the cited references.

Claim 1 recites "decompress[ing] the transcoded data on fixed-function hardware ... customized for decompression of the supported compression format." The Office Action cites 302-04 to show decompression but provides no explanation of how the cited decompression is performed on the hardware it identifies as dedicated or fixed-function. Office Action at p. 4. Paragraphs 302-04 discuss decompression in general terms, make no reference to "transcoded data," and do not identify any hardware or specify what hardware performs the decompression.
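For readers outside compression: the entropy coding at issue in the passage above (Huffman coding is the usual example) maps frequent symbols to short prefix-free bit codes, and entropy decoding recovers the original symbol stream exactly, as Applicant notes. The sketch below is an editorial illustration only; it is not drawn from the application, Kothandaraman, or the Office Action, and every name in it is our own.

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict[int, str]:
    """Build a Huffman code (symbol -> bitstring) from byte frequencies."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreaker, {symbol: bits-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in a.items()}
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def entropy_encode(data: bytes, code: dict[int, str]) -> str:
    return "".join(code[b] for b in data)

def entropy_decode(bits: str, code: dict[int, str]) -> bytes:
    inverse = {c: s for s, c in code.items()}
    out, cur = bytearray(), ""
    for bit in bits:
        cur += bit
        if cur in inverse:  # prefix-free: first match is a whole symbol
            out.append(inverse[cur])
            cur = ""
    return bytes(out)

data = b"abracadabra"
code = huffman_code(data)
bits = entropy_encode(data, code)
assert entropy_decode(bits, code) == data  # decoding recovers the symbols exactly
```

The round-trip assertion is the point: entropy decoding is lossless recovery of the symbol stream, whatever one concludes about whether that recovery "constitutes decompression" in the claimed sense.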
Furthermore, 145 (the Office Action's purported evidence of dedicated or fixed-function hardware) actually describes how to output compressed data, so it is not even describing "decompress[ion]" at all. The relevant sentence states that the "[o]utput can be generated in a compressed (e.g., dense) format, with associated decompression or decoding metadata." This is describing an encoding stage in which the hardware produces compressed results and merely includes metadata to enable future decompression. Accordingly, 145 does not disclose any hardware performing decompression, let alone decompression of "transcoded data." Applicant has been unable to locate a corresponding disclosure elsewhere in the cited references. As such, claim 1 and its dependent claims should be found patentable over the cited references. For analogous reasons that apply mutatis mutandis, independent claims 13 and 19 should also be found patentable over the cited references.

Examiner respectfully disagrees. As stated previously, Kothandaraman teaches a GPU that can compress/decompress data. (Specifically, the Systolic Array 612 can generate output in compressed/decompressed formats, Para. 0145, and can contain hardware to support machine-specific lossless data compression formats, Para. 0146. The Systolic Array 612 is a part of the Execution Unit 600, Para. 0142. Execution Units are within the graphics parallel engines, Para. 0068, which are a part of the graphics processors, Para. 0056. Additionally, several Execution Units, Para. 0128, support single instruction multiple data (SIMD) execution used in the decompression pipeline for Dictionary and Entropy decoding, Para. 0227, 0238, and 0302-0304. Furthermore, the graphics processor cores can include fixed function blocks, Para. 0064-0065 and 0069. Fixed function blocks can include general-purpose and fixed function logic. These blocks include fixed function pipelines, SOC Interface, and a graphics microcontroller 233, Para. 0065-0066. The graphics microcontroller 233, Para. 0068, contains the graphics parallel engines associated with the Systolic Array.) Thus, the GPU which contains the Systolic Array and decompression pipeline contains fixed-function hardware customized for decompression of the supported compression formats. Therefore, Kothandaraman teaches decompress the transcoded data on fixed-function hardware of the one or more parallel processors, the fixed-function hardware being customized for decompression of the supported compression format.

Applicant's remaining arguments with respect to the amended claim language are fully addressed in the prior art rejections set forth below.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-5, 7-8, and 10-20 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Kothandaraman et al., U.S. Patent Application Publication US 20230057492 A1 (hereinafter Kothandaraman).

Regarding claim 1, Kothandaraman teaches one or more parallel processors (Any Processor core(s) such as Parallel Processors, Para. 0196) comprising one or more processing units (GPU/Processors, Para. 0228) to: Transcode (Transcode media to, from, and between one or more media encoding formats, Para.
0105), on general-purpose hardware (Entropy Coders for GPUs, Para. 0035) of the one or more parallel processors (Parallel Processors, Para. 0196), data into transcoded data in a supported compression format (Para. 0037-0038). Kothandaraman teaches compressing data using dictionary, entropy, and Huffman coding and transcoding media between encoding formats. Kothandaraman teaches several supported compression formats that perform lossless data compression. (Dictionary coding that includes LZ77, Para. 0037-0038, and Entropy coding that includes Huffman coding, Para. 0037 and 0038-0039.) Kothandaraman teaches a GPU that can compress/decompress data. (Specifically, the Systolic Array 612 can generate output in compressed/decompressed formats, Para. 0145, and can contain hardware to support machine-specific lossless data compression formats, Para. 0146. The Systolic Array 612 is a part of the Execution Unit 600, Para. 0142. Execution Units are within the graphics parallel engines, Para. 0068, which are a part of the graphics processors, Para. 0056. Additionally, several Execution Units, Para. 0128, support single instruction multiple data (SIMD) execution used in the decompression pipeline for Dictionary and Entropy decoding, Para. 0227, 0238, and 0302-0304.) Kothandaraman teaches transcoding data to, from, or between several data formats. (The data is transcoded using a video codec engine 306 which is in a graphics processor, Para. 0105. One of the listed data formats is MPEG. Both Huffman Coding and LZ77 can compress MPEG data.) Thus, Kothandaraman teaches transcode, on general-purpose hardware of the one or more parallel processors, data into transcoded data in a supported compression format; and decompress (Para. 0302-0304) the transcoded data on fixed-function hardware (Hardware to Decompress Output, Para. 0145) of the one or more parallel processors (Parallel Processors, Para. 0196), the fixed-function hardware (Processors/Encoders/Decoders/Hardware, Para. 0035, 0145, and 0222) being customized for decompression of the supported compression formats. (The output has the associated decompression or decoding metadata, Para. 0145.)

As stated previously, Kothandaraman teaches a GPU that can compress/decompress data. (Specifically, the Systolic Array 612 can generate output in compressed/decompressed formats, Para. 0145, and can contain hardware to support machine-specific lossless data compression formats, Para. 0146. The Systolic Array 612 is a part of the Execution Unit 600, Para. 0142. Execution Units are within the graphics parallel engines, Para. 0068, which are a part of the graphics processors, Para. 0056. Additionally, several Execution Units, Para. 0128, support single instruction multiple data (SIMD) execution used in the decompression pipeline for Dictionary and Entropy decoding, Para. 0227, 0238, and 0302-0304. Furthermore, the graphics processor cores can include fixed function blocks, Para. 0064-0065 and 0069. Fixed function blocks can include general-purpose and fixed function logic. These blocks include fixed function pipelines, SOC Interface, and a graphics microcontroller 233, Para. 0065-0066. The graphics microcontroller 233, Para. 0068, contains the graphics parallel engines associated with the Systolic Array.) Thus, the GPU which contains the Systolic Array and decompression pipeline contains fixed-function hardware customized for decompression of the supported compression formats. Therefore, Kothandaraman teaches decompress the transcoded data on fixed-function hardware of the one or more parallel processors, the fixed-function hardware being customized for decompression of the supported compression format.

Regarding claim 2, Kothandaraman teaches the one or more parallel processors of claim 1, wherein the supported compression format (LZ77) is a sliding window dictionary-based compression format (Para. 0038). LZ77 is a sliding window compression format that is dictionary-based.
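As context for the claim 2 discussion: in LZ77 the previously emitted bytes inside a sliding window serve as the dictionary, and repeated data is encoded as (offset, length) back-references into that window. The sketch below is an editorial illustration with a simplified token layout of our own choosing; it is not code from the application or from the cited references.

```python
def lz77_compress(data: bytes, window: int = 4096, max_len: int = 18):
    """Greedy LZ77: emit (offset, length, next_literal) tokens.

    The 'dictionary' is simply the last `window` bytes already emitted."""
    i, tokens = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        start = max(0, i - window)
        # Search the sliding window for the longest match at position i.
        for j in range(start, i):
            length = 0
            while (length < max_len and i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        lit = data[i + best_len]  # literal byte that follows the match
        tokens.append((best_off, best_len, lit))
        i += best_len + 1
    return tokens

def lz77_decompress(tokens) -> bytes:
    out = bytearray()
    for off, length, lit in tokens:
        for _ in range(length):
            out.append(out[-off])  # copy byte-by-byte from the window
        out.append(lit)
    return bytes(out)

data = b"to be or not to be, that is the question"
tokens = lz77_compress(data)
assert lz77_decompress(tokens) == data
```

The byte-by-byte copy in the decompressor is deliberate: it correctly handles the overlapping matches (length greater than offset) that make LZ77 a sliding-window scheme rather than a static-dictionary one.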
Regarding claim 3, Kothandaraman teaches the one or more parallel processors (Parallel Processors, Para. 0196) of claim 1, wherein the one or more processing units (GPU/Processors, Para. 0228) are further to generate the data based at least on executing one or more entropy decoding (Gap Array Entropy Encoders/Decoders, Para. 0222) operations on the general-purpose hardware (Entropy Coders for GPUs, Para. 0035) of the one or more parallel processors (Parallel Processors, Para. 0196). (Para. 0226)

Regarding claim 4, Kothandaraman teaches the one or more parallel processors (Parallel Processors, Para. 0196) of claim 1, wherein the one or more processing units (GPU/Processors, Para. 0228) are further to decode a plurality of streams (bitstreams from LZ77, Para. 0227) of compressed data from a block of literals in parallel using a common set of shared instructions on the general-purpose hardware (Entropy Coders for GPUs, Para. 0035) of the one or more parallel processors. (Para. 0040, 0226-0227, and 0238) Kothandaraman utilizes Encoders/Decoders for Huffman coding in LZ77, Para. 0038. One of ordinary skill in the art would know that Huffman coding can be applied to the sequence of literals from LZ77.

Regarding claim 5, Kothandaraman teaches the one or more parallel processors (Parallel Processors, Para. 0196) of claim 1, wherein the one or more processing units (GPU/Processors, Para. 0228) are further to decode a plurality of blocks of literals in parallel using a common set of shared instructions on the general-purpose hardware (Entropy Coders for GPUs, Para. 0035) of the one or more parallel processors (Parallel Processors, Para. 0196). (Para. 0040, 0226-0227, and 0238)

Regarding claim 7, Kothandaraman teaches the one or more parallel processors (Parallel Processors, Para. 0196) of claim 1, wherein the one or more processing units (GPU/Processors, Para.
0228) are further to execute one or more independent reads associated with each of a plurality of blocks of at least one of literals or dictionary references (Dictionary Decoding Stages, Para. 0238) in parallel using a common set of shared instructions on the general-purpose hardware (Entropy Coders for GPUs, Para. 0035) of the one or more parallel processors (Parallel Processors, Para. 0196).

Regarding claim 8, Kothandaraman teaches the one or more parallel processors of claim 1, wherein the one or more processing units (GPU/Processors, Para. 0228) are further to decode a plurality of blocks of dictionary references (Dictionary Decoding Stages, Para. 0238) in parallel using a common set of shared instructions on the general-purpose hardware (Entropy Coders for GPUs, Para. 0035) of the one or more parallel processors (Parallel Processors, Para. 0196).

Regarding claim 10, Kothandaraman teaches the one or more parallel processors (Parallel Processors, Para. 0196) of claim 1, wherein the one or more processing units (GPU/Processors, Para. 0228) are further to decompress the transcoded data in response to querying associated compressed data (Para. 0145 and 0302-0304) or loading the associated compressed data onto the one or more parallel processors (Parallel Processors, Para. 0196). Kothandaraman has gaming and streaming use cases to enable fast loading of textures to be decompressed when needed.

Regarding claim 11, Kothandaraman teaches the one or more parallel processors (Parallel Processors, Para. 0196) of claim 1, wherein the fixed-function hardware (Processors/Encoders/Decoders/Hardware, Para. 0035, 0145, and 0222) of the one or more parallel processors is a copy engine (Copy Engines 304, Para. 0112) of the one or more parallel processors (Parallel Processors, Para. 0196).

Regarding claim 12, Kothandaraman teaches the one or more parallel processors (Parallel Processors, Para.
0196) of claim 1, wherein the one or more parallel processors are comprised in at least one of: a control system for an autonomous or semi-autonomous machine (Para. 0314); a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations (Para. 0295, Simulated Video Game assets); a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for performing remote operations; a system for performing real-time streaming (Para. 0302, Streaming Video Games); a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system implementing one or more language models; a system implementing one or more large language models (LLMs); a system for generating synthetic data; a system for generating synthetic data using AI; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources. One of ordinary skill in the art would recognize that a processor could be found in any of the above systems.

Regarding claim 13, Kothandaraman teaches a system comprising one or more processing units (GPU/Processors, Para. 0228) to decompress (Para. 0302-0304) transcoded (Para. 0105) data using specialized hardware (Processors/Encoders/Decoders/Hardware, Para. 0035, 0145, and 0222) of a graphics processing unit (GPU), the specialized hardware being dedicated to decompression of a supported compression format. (The output has the associated decompression or decoding metadata, Para.
0145.)

Regarding claim 14, Kothandaraman teaches the system of claim 13, wherein the one or more processing units (GPU/Processors, Para. 0228) are further to generate, on general-purpose hardware (Entropy Coders for GPUs, Para. 0035), the transcoded (Para. 0105) data in the supported compression format (Para. 0037-0038). Kothandaraman teaches compressing data using dictionary, entropy, and Huffman coding.

Regarding claim 15, Kothandaraman teaches the system of claim 13, wherein the one or more processing units (GPU/Processors, Para. 0228) are further to generate the transcoded (Para. 0105) data based at least on executing one or more entropy decoding (Gap Array Entropy Encoders/Decoders, Para. 0222) operations on general-purpose hardware (Entropy Coders for GPUs, Para. 0035) of the GPU to generate decoded data and transcoding the decoded data on the general-purpose hardware (Entropy Coders for GPUs, Para. 0035) of the GPU. (Para. 0226)

Claim 16 has similar limitations to claim 2 and is therefore rejected under the same rationale as claim 2. Claim 17 has similar limitations to claim 10 and is therefore rejected under the same rationale as claim 10. Claim 18 has similar limitations to claim 12 and is therefore rejected under the same rationale as claim 12.

Regarding claim 19, Kothandaraman teaches a method comprising: Transcoding (Transcode media to, from, and between one or more media encoding formats, Para. 0105), on a parallel processor (GPU/Processors, Para. 0228), data into transcoded data in a supported compression format (Para. 0037-0038; Kothandaraman teaches compressing data using dictionary, entropy, and Huffman coding and transcoding media between encoding formats); and decompressing the transcoded data on fixed-function hardware (Processors/Encoders/Decoders/Hardware, Para. 0035, 0145, and 0222, or Copy Engines 304, Para. 0112) of the parallel processor (GPU/Processors, Para.
0228), the fixed-function hardware (Processors/Encoders/Decoders/Hardware, Para. 0035, 0145, and 0222, or Copy Engines 304, Para. 0112) being customized for decompression of the supported compression format (Para. 0037-0038). Kothandaraman teaches compressing data using dictionary, entropy, and Huffman coding.

Claim 20 has similar limitations to claim 12 and is therefore rejected under the same rationale as claim 12.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 6 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Kothandaraman et al., U.S. Patent Application Publication US 20230057492 A1 (hereinafter Kothandaraman), in view of Bo et al., U.S. Patent Application Publication US 20220360279 A1 (hereinafter Bo).

Regarding claim 6, Kothandaraman fails to teach the one or more parallel processors of claim 1, wherein the one or more processing units are further to decode one or more blocks of dictionary references based at least on executing one or more operations common to extra bit decoding and asymmetric numeral system (ANS) decoding in parallel using a common set of shared instructions on the general-purpose hardware of the one or more parallel processors. Kothandaraman and Bo are analogous to the claimed invention because both of them are in the same field of data compression using a GPU.
However, Bo teaches the one or more parallel processors (Processor/Accelerator, Para. 0024-0026) of claim 1, wherein the one or more processing units are further to decode one or more blocks of dictionary references (Dictionary Matching, Para. 0022) based at least on executing one or more operations common to extra bit decoding and asymmetric numeral system (ANS) (Asymmetric Numeral Systems, Para. 0023) decoding in parallel using a common set of shared instructions on the general-purpose hardware (Discrete Device or Integrated into a Processor, Para. 0024) of the one or more parallel processors (Processor/Accelerator, Para. 0024-0026). One of ordinary skill in the art would recognize Zstd decoding has operations for ANS and extra bit. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kothandaraman's parallel processors to incorporate Bo's hardware that decodes data using dictionary references, ANS, and extra bits, since doing so would provide the benefit of encoding and decoding the Zstandard compression format.

Regarding claim 9, Kothandaraman teaches the one or more parallel processors (Parallel Processors, Para. 0196) of claim 1, wherein the one or more processing units (GPU/Processors, Para. 0228) are further to decode compressed data in Zstandard to generate the data and transcode (Para. 0105) the data to Snappy on the general-purpose hardware (Entropy Coders for GPUs, Para. 0035) of the one or more parallel processors (Parallel Processors, Para. 0196). However, Kothandaraman fails to explicitly teach decode compressed data in Zstandard to generate the data and transcode the data to Snappy. Bo teaches the one or more parallel processors (Processor/Accelerator, Para. 0024-0026) of claim 1, wherein the one or more processing units are further to decode compressed data in Zstandard (Para. 0016) to generate the data and transcode the data to Snappy (Para.
0016) on the general-purpose hardware (Discrete Device or Integrated into a Processor, Para. 0024) of the one or more parallel processors (Processor/Accelerator, Para. 0024-0026). Both Zstandard and Snappy are lossless data compression algorithms. In Para. 0036-0038, Bo teaches converting LZ77 to Zstandard. Snappy is based on LZ77, hence one of ordinary skill in the art would be able to convert Zstd to Snappy. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kothandaraman's Parallel Processors that compress LZ77 to incorporate Bo's Hardware that can compress Zstd and convert Zstd to LZ77, since doing so would provide the benefit of creating hardware that can use different compression formats and convert between them, which increases the flexibility of the hardware.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIANNA R COCHRAN, whose telephone number is (571) 272-4671. The examiner can normally be reached Mon-Fri, 7:30am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRIANNA RENAE COCHRAN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615
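As an editorial footnote to the office action above: the transcode-then-decompress pipeline the claims recite can be summarized in software as decode from the source format, then re-encode into the format the target decompressor supports. The sketch below uses Python's standard-library zlib and lzma as stand-ins for formats like Zstandard and Snappy (which would require third-party libraries); it illustrates the dataflow only, not any party's implementation.

```python
import lzma
import zlib

def transcode(deflate_blob: bytes) -> bytes:
    """Transcode: fully decode one compression format, then re-encode
    into the 'supported' target format (LZMA/XZ stands in here)."""
    raw = zlib.decompress(deflate_blob)  # software decode of the source format
    return lzma.compress(raw)            # re-encode into the target format

payload = b"the quick brown fox jumps over the lazy dog " * 100
src = zlib.compress(payload)             # data arrives in an unsupported format
dst = transcode(src)                     # now in the format the target decoder handles
assert lzma.decompress(dst) == payload   # transcoded data decodes to the same bytes
```

The round trip through raw bytes is what distinguishes transcoding from mere container repackaging: the source format is fully decoded before the target format is produced.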

Prosecution Timeline

Dec 18, 2023
Application Filed
Aug 21, 2025
Non-Final Rejection — §102, §103
Dec 09, 2025
Applicant Interview (Telephonic)
Dec 09, 2025
Examiner Interview Summary
Jan 05, 2026
Response Filed
Mar 17, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12541922
METHOD FOR GENERATING A MODEL FOR REPRESENTING RELIEF BY PHOTOGRAMMETRY
2y 5m to grant • Granted Feb 03, 2026
Patent 12482144
METHOD AND APPARATUS OF ENCODING/DECODING POINT CLOUD GEOMETRY DATA USING AZIMUTHAL CODING MODE
2y 5m to grant • Granted Nov 25, 2025
Patent 12417567
METHOD FOR GENERATING SIGNED DISTANCE FIELD IMAGE, METHOD FOR GENERATING TEXT EFFECT IMAGE, DEVICE AND MEDIUM
2y 5m to grant • Granted Sep 16, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 40%
With Interview: 0% (-40.0%)
Median Time to Grant: 2y 3m
PTA Risk: Moderate

Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
