Prosecution Insights
Last updated: April 19, 2026
Application No. 18/817,142

Techniques for Massively Parallel Graphics Processing Unit (GPU) Based Compression

Final Rejection — §101, §103
Filed
Aug 27, 2024
Examiner
PHAN, TUANKHANH D
Art Unit
2154
Tech Center
2100 — Computer Architecture & Software
Assignee
Regents of the University of Michigan
OA Round
2 (Final)
79%
Grant Probability (Favorable)
3-4
OA Rounds
3y 6m
To Grant
92%
With Interview

Examiner Intelligence

Grants 79% — above average
79%
Career Allow Rate (448 granted / 569 resolved; +23.7% vs TC avg)
+12.9%
Interview Lift — moderate lift, resolved cases with interview
Typical timeline
3y 6m
Avg Prosecution (30 currently pending)
Career history
599
Total Applications — across all art units

Statute-Specific Performance

§101 — 15.8% (-24.2% vs TC avg)
§103 — 50.1% (+10.1% vs TC avg)
§102 — 19.3% (-20.7% vs TC avg)
§112 — 5.8% (-34.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 569 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment, filed on 11/18/2025, has been entered and acknowledged by the Examiner. Claims 1-20 are pending.

Response to Arguments

Applicant's arguments with respect to amended and not amended claims 1-20 have been considered but are moot. Applicant's arguments filed 11/18/2025 have been fully considered but they are not persuasive.

Issue: The applicant argues that, by reciting an improvement to the functioning of a computer or to any other technology or technical field, the pending claims are "directed to" patent-eligible subject matter, at least because they integrate any alleged abstract idea into a practical application. Accordingly, Applicant respectfully submits that, when the claims are properly and fairly considered as a whole, the claims demonstrate subject matter eligibility of the pending claims per Step 2A, Prong Two, at least as an improvement to the functioning of a computer or to any other technology or technical field… GPU-based data compression implementation that decouples the recurrence search depth from the number of CPU search transactions… The data encoder may keep this window reference and iteratively compare the current end of file sequences with sequences present within the window representing prior data within the compressed data block. This reverse-order compression and decompression ensures that back-references within the data block remain valid, such that as the data block is decompressed, the data block is reconstructed from the beginning and back-references become valid as they are retrieved.
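The applicant's "reverse order" argument rests on a standard LZ property: back-references point only at already-emitted data, so a block rebuilt from its beginning always has the referenced bytes available by the time each reference is read. A minimal, hypothetical sketch of that property (not the application's actual code; `compress`, `decompress`, and the window size are illustrative):

```python
# Hypothetical sketch of why LZ-style back-references stay valid when a block
# is decompressed from the beginning: every reference lands on data that has
# already been reconstructed.

def compress(data: bytes, window: int = 32) -> list:
    """Greedy LZ77-style encoder: emit ("ref", offset, length) back-references
    into the already-seen window, or ("lit", byte) literals."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):      # candidate start positions
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1                               # overlap into lookahead is fine
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= 3:                            # reference only if it pays off
            out.append(("ref", best_off, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

def decompress(tokens: list) -> bytes:
    """Rebuild from the beginning; every back-reference resolves against bytes
    already in `buf`, i.e. references 'become valid as they are retrieved'."""
    buf = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            buf.append(tok[1])
        else:
            _, off, length = tok
            for _ in range(length):                  # byte-wise copy handles
                buf.append(buf[-off])                # overlapping matches
    return bytes(buf)
```

On any input the pair round-trips, e.g. `decompress(compress(b"abcabcabcabcxyz"))` recovers the original bytes even though the repeated run is stored as a single overlapping back-reference.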
Response: The examiner respectfully disagrees. While the Applicant elaborates on how claim 1 could improve existing technology, the claim language lacks the features "GPU-based data compression implementation that decouples the recurrence search depth from the number of CPU search transactions… The data encoder may keep this window reference and iteratively compare the current end of file sequences with sequences present within the window representing prior data within the compressed data block" that would provide such an improvement. Therefore, the Applicant's argument is not persuasive.

Issue: The applicant argues that the applied references fail to disclose or suggest at least these elements, and by extension, claim elements relying on these elements. Zhu generally discloses techniques for "estimating or predicting depth information for image data," and more specifically "automatically estimating depth values missing from image data." Zhu at Abstract and [0001] (emphasis added). In certain embodiments, the Zhu system is configured to iteratively determine depth values by using a "self-correcting refinement model that refines the predicted depth ... progressively," which includes determining "a set of correction offsets 748 that the depth refinement module 280 combines with (e.g., adds to) a set of previous depths 718 (see FIG. 7) to obtain a set of refined depths." See id., [0106]. The offsets are between depth values of the same point (image data) within the image. These offsets are thus different from those recited in claim 1, at least in that the offsets of claim 1 represent distances between different locations within the data stream, and thus have no correlation to physical distances in three-dimensional space and/or to multiple values associated with a single data point in the data stream. Moreover, the comparisons described in Zhu are unrelated to the comparisons recited in claim 1.
For example, the comparisons in Zhu involve comparing output data of an ML model (e.g., a neural network) with "a set of expected or desired outputs," which is generally representative of the model training process. Id. at [0137]. These Zhu training processes thus involve comparing estimated depth values output by a neural network associated with an image with known (e.g., ground truth) depth values associated with that image to determine differences between the known depth values at a respective position within the image data and the estimated depth value at that respective image position. In contrast, the comparisons recited in claim 1 are between data within the same stream at different locations within the stream to determine an optimal data sequence within the stream that maximizes matched data between data sequences (e.g., first and respective data sequences) within the stream. Zhu offers no further description or suggestion of such comparisons, matching, or offsets.

Response: The examiner respectfully disagrees. First, the Applicant argues that the offsets of claim 1 differ in that they represent distances between different locations within the data stream, and thus have no correlation to physical distances in three-dimensional space and/or to multiple values associated with a single data point in the data stream; however, the claim never mentions physical distances in three-dimensional space or multiple values associated with a single data point in the data stream. Second, Zhu clearly discloses data matching points, even on-the-fly (¶ [0197]), that is similar to claim 1. Last but not least, Zhu also discloses "the point cloud having points positioned at locations determined based on the plurality of present depths" (¶ [0631]), which also reads on the language of claim 1. As such, the Applicant's argument is not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-patentable subject matter. The claimed invention is directed to one or more abstract ideas without significantly more. The judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below.

Step 1: The claimed method (claims 1-13), system (claims 14-19), and non-transitory computer-readable storage medium (claim 20) are directed to one of the eligible categories of subject matter and therefore satisfy Step 1.

Step 2A, Prong One: Independent claim 1 (and claims 14 and 20) recites the following limitations that can be practically performed in the mind:
- determining an optimal data sequence;
- updating the global offset based on the optimal data sequence.

Step 2A, Prong Two: The additional elements are:
- receiving, at one or more processors, a data file;
- simultaneously causing a plurality of threads of a graphics unit to compare data sequences in the data stream;
- storing, by the one or more processors, the optimal data sequence.
These additional elements use generic computer functions as a tool to perform the abstract idea.

Step 2B: For Step 2B, the additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception. MPEP 2106.07(a)(III)(B) identifies the list of cases in MPEP 2106.05(d)(II) as available bases.
Taking these aforementioned additional elements as an ordered combination, these additional elements add nothing that is not already present when the elements are considered separately.

As per dependent claims 2-13 and 15-19:

Step 2A, Prong Two: Dependent claims 2 and 15 are directed to a generic computer function when reciting "storing a reference location and a length value of the optimal data sequence in the reference table."

Step 2A, Prong One: Dependent claim 3 is directed to a mental-process abstract idea:
- the optimal data sequence includes a reference sequence or a literal sequence.

Step 2A, Prong Two: Dependent claims 4 and 16 are directed to a generic computer function:
- responsive to determining that the optimal data sequence is a literal sequence, storing the length of the literal sequence and a reference location of the literal sequence in a literal table.

Step 2A, Prong Two: Dependent claim 5 is directed to a generic computer function:
- the data stream includes image data.

Step 2A, Prong Two: Dependent claims 6 and 17 are directed to a generic computer function:
- the number of threads corresponding to the plurality of threads equals a number of bytes in the data stream.

Step 2A, Prong Two: Dependent claim 7 is directed to generic computer functions:
- receiving, by the one or more processors and after step (g), a search query to identify a feature represented in the data stream; and
- searching, by the one or more processors, the data stream based on the reference table.

Step 2A, Prong Two: Dependent claims 8 and 18 are directed to generic computer functions:
- updating the first global offset based on the optimal data sequence further comprises: establishing a second global offset at a first data location within the data stream that is a distance from the first global offset represented by a length value of the optimal data sequence.
Step 2A, Prong One: Dependent claims 9 and 19 are directed to a mathematical-concept abstract idea:
- compressing the reference table using an entropy encoding algorithm.

Step 2A, Prong Two: Dependent claims 9 and 19 are directed to a generic computer function:
- calculating, by the one or more processors, an offset value and a length value for each data sequence in the reference table.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu (US Pub. 2022/0292699) in view of Ozsoy (Optimizing LZSS compression on GPGPUs).
Regarding claim 1, Zhu discloses a computer-implemented method for massively parallel data compression, the method comprising:
(a) receiving, at one or more processors, a data stream (¶ [0062], receiving at least an image that includes different data);
(b) establishing, by the one or more processors, a global offset at a first data of the data stream (¶ [0106], for each iteration, the depth refinement module 280 determines a set of correction offsets);
(c) substantially simultaneously causing a plurality of threads of a graphics processing unit (GPU) to compare respective data sequences in the data stream with a first data sequence that includes the first data, wherein each thread of the plurality of threads compares a respective data sequence of the respective data sequences, and each respective data sequence is offset from every other respective data sequence and the first data sequence (¶ [0137], processes inputs from training dataset 1002 and compares resulting outputs against a set of expected or desired outputs);
(d) determining, by the one or more processors, an optimal data sequence that maximizes matched data, the optimal data sequence corresponding to the first data sequence from the respective data sequences (¶ [0137], a set of expected or desired outputs);
(e) storing, by the one or more processors, the optimal data sequence in a reference table (¶ [0137], stored as trained result);
(f) updating, by the one or more processors, the global offset based on the optimal data sequence (¶ [0256], deep-learning infrastructure may receive periodic updates from vehicle 1200, such as a sequence of images and/or objects that vehicle 1200 has located in that sequence of images); and
(g) iteratively performing steps (c)-(f) until the global offset is a beginning or an end of the data stream (¶ [0104], the formulation of the refinement model may allow fine adjustments to be applied iteratively and may help preserve common 3D structures, such as shapes of the objects 122 and 126, orientation of the objects).

Ozsoy further discloses minimizing non-matched data and maximizing matched data (p. 172, 2nd column; looping until it reaches a non-matching character). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ozsoy into Zhu to result in divergent paths in the execution of threads.

Regarding claim 2, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, wherein storing the optimal data sequence in the reference table further comprises: storing, by the one or more processors, (i) a reference location and (ii) a length value of the optimal data sequence in the reference table (¶ [0062], such as an optical center value, a focal length value, and the like).

Regarding claim 3, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, wherein the optimal data sequence includes (i) a reference sequence or (ii) a literal sequence (Ozsoy, Fig. 1).

Regarding claim 4, Zhu in view of Ozsoy discloses the computer-implemented method of claim 3, further comprising: responsive to determining that the optimal data sequence is a literal sequence, storing the length of the literal sequence and a reference location of the literal sequence in a literal table (Zhu, ¶ [0310], a segment table pointer for referencing).

Regarding claim 5, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, wherein the data stream includes image data (¶ [0062]).

Regarding claim 6, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, wherein the number of threads corresponding to the plurality of threads equals a number of bytes in the data stream (¶ [0477], byte size data element; Ozsoy, p. 177).
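Steps (a)-(g) of claim 1 describe an LZ-style search in which each GPU thread scores one candidate offset and the best match advances the global offset. A hypothetical, single-process Python sketch of that loop (all names and the window/threshold values are illustrative; a list comprehension stands in for the "substantially simultaneous" threads):

```python
# Hypothetical sketch of claim 1's loop, steps (b)-(g).  `match_len_at` stands
# in for the work one GPU thread would do at one candidate offset.

def match_len_at(data: bytes, pos: int, offset: int, max_len: int = 16) -> int:
    """One 'thread': length of the match between the sequence at `pos`
    and the candidate sequence `offset` bytes earlier."""
    n = 0
    while (n < max_len and pos + n < len(data)
           and data[pos - offset + n] == data[pos + n]):
        n += 1
    return n

def compress_pass(data: bytes, window: int = 64) -> list:
    table, pos = [], 0                     # step (b): global offset at first data
    while pos < len(data):                 # step (g): iterate to end of stream
        # step (c): every candidate offset is scored independently
        candidates = [(match_len_at(data, pos, off), off)
                      for off in range(1, min(pos, window) + 1)]
        # step (d): optimal sequence = the one that maximizes matched data
        best_len, best_off = max(candidates, default=(0, 0))
        if best_len >= 3:
            table.append(("ref", best_off, best_len))  # step (e): reference table
            pos += best_len                # step (f): advance by match length
        else:
            table.append(("lit", data[pos]))
            pos += 1
    return table
```

On `b"ababababab"` this emits two literals followed by a single `("ref", 2, 8)` token. Claim 12's "compression index score" would refine step (d) by subtracting each thread's non-matching count from its matching count before taking the maximum.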
Regarding claim 7, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, further comprising: receiving, by the one or more processors and after step (g), a search query to identify a feature represented in the data stream (p. 171; ¶ [0491], search optimization); and searching, by the one or more processors, the data stream based on the reference table (p. 171, the search looks into the sliding history buffer for substrings starting with 'a'; the first match is the first character of the sliding history, and the following 'bc' characters also match the uncoded lookahead).

Regarding claim 8, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, wherein the global offset is a first global offset, and updating the first global offset based on the optimal data sequence further comprises: establishing a second global offset at a first data location within the data stream that is a distance from the first global offset represented by a length value of the optimal data sequence (¶ [0106], allowing different offsets for correction and optimization).

Regarding claim 9, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, further comprising: calculating, by the one or more processors, an offset value and a length value for each data sequence in the reference table (¶ [0116]); and compressing, by the one or more processors, the reference table using an entropy encoding algorithm (¶ [0116]).

Regarding claim 10, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, further comprising: storing, by the one or more processors at step (g), a reference representing a location of the first data sequence relative to (i) the beginning of the data stream or (ii) the global offset (¶ [0106], different offsets).
Regarding claim 11, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, further comprising: comparing, by the one or more processors, a length value of the optimal data sequence with a match length threshold value (p. 172, 2nd column); responsive to determining that the length value does not exceed the match length threshold value, determining, by the one or more processors, a second optimal data sequence (p. 172, 2nd column); and responsive to determining that a second length value of the second optimal data sequence exceeds the match length threshold value, storing, by the one or more processors, the second optimal data sequence in the reference table (p. 172, 2nd column).

Regarding claim 12, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, further comprising: at each iteration of steps (c)-(f), calculating, by the one or more processors, a compression index score for each thread of the plurality of threads by subtracting a respective number of non-matching characters from a respective number of matching characters; and determining, by the one or more processors, a maximum compression index score from the compression index score for each thread of the plurality of threads (p. 172, 2nd column).

Regarding claim 13, Zhu in view of Ozsoy discloses the computer-implemented method of claim 1, further comprising: calculating, by the one or more processors, a combined compression index score for each respective pair of threads from the plurality of threads by subtracting a number of overlapping matching characters and a number of residual non-matching characters from a combined number of matching characters (p. 172, lookahead buffer in a loop until it reaches a non-matching character; ¶ [0581], allow overlaps); and determining, by the one or more processors, a maximum combined compression index score from the combined compression index score for each respective pair of threads of the plurality of threads (¶ [0064], represented by a variable I and associated with a variable i that represents an index value corresponding to each of the pixels).

Regarding claims 14-19, see the discussion of claims 12, 4, 6, 8 and 9, respectively, for the same reason of rejection. Regarding claim 20, see the discussion of claim 1 above for the same reason of rejection.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUANKHANH D PHAN, whose telephone number is (571) 270-3047. The examiner can normally be reached Mon-Fri, 10:00am-6:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at 571-270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 or 571-272-1000.

/TUANKHANH D PHAN/
Examiner, Art Unit 2154

Prosecution Timeline

Aug 27, 2024
Application Filed
Aug 09, 2025
Non-Final Rejection — §101, §103
Nov 18, 2025
Response Filed
Feb 20, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536215
AUTOMATED GENERATION OF GOVERNING LABEL RECOMMENDATIONS
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12517738
LOOP DETECTION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Granted Jan 06, 2026 (2y 5m to grant)
Patent 12511297
TECHNIQUES FOR DETECTING SIMILAR INCIDENTS
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12511701
SYSTEM AND METHOD FOR DETECTING RELEVANT POTENTIAL PARTICIPATING ENTITIES
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12505164
METHOD OF ENCODING TERRAIN DATABASE USING A NEURAL NETWORK
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
92%
With Interview (+12.9%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
