Prosecution Insights
Last updated: April 19, 2026
Application No. 19/023,104

READING COMPRESSED DATA DIRECTLY INTO AN IN-MEMORY STORE

Final Rejection — §DP (Double Patenting)
Filed: Jan 15, 2025
Examiner: LE, HUNG D
Art Unit: 2161
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 90% — above average (969 granted / 1,073 resolved; +35.3% vs TC avg)
Interview Lift: +6.4% — moderate lift on resolved cases with an interview
Typical Timeline: 2y 6m average prosecution; 33 applications currently pending
Career History: 1,106 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 1,073 resolved cases.

Office Action

§DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This Office Action is in response to the amendment filed on 01/21/2026. Claims 1-20 are pending.

Information Disclosure Statement

2. The information disclosure statement (IDS) filed on 12/31/2025 complies with the provisions of M.P.E.P. 609. The examiner has considered it.

Response to Arguments

3. This Office Action has been issued in response to the amendment filed 01/21/2026. Claims 1-20 are pending. Applicants' arguments have been carefully and respectfully considered in light of the instant amendment as they relate to the claim rejections under double patenting, as discussed below. Accordingly, this action has been made final.

Double Patenting

4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the conflicting application or patent either is shown to be commonly owned with this application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

4. Claims 1-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,229,168. Although the conflicting claims are not identical, they are not patentably distinct from each other.

Instant Application 19/023,104 — Claim 1: A radix clustering system comprising: a processor; and a computer-readable medium storing instructions that are operative upon execution by the processor to: read compressed data from a file, the compressed data having a first storage compression scheme with a first storage compression dictionary and a second storage compression dictionary; hash entries in the first storage compression dictionary and the second storage compression dictionary; sort the hashed entries by their associated hash values; update a hash table with the hashed entries; and based on at least the updated hash table, transcode the first and second storage compression dictionaries into an in-memory compression dictionary.
Patent US 12,229,168 — Claim 1: A system comprising: a processor; and a computer-readable medium storing instructions that are operative upon execution by the processor to: read compressed data from a file, the compressed data in the file having a first storage compression scheme with a first storage compression dictionary; without decompressing the compressed data, load the compressed data into an in-memory store, the compressed data in the in-memory store having an in-memory compression scheme with an in-memory compression dictionary; transcode the compressed data from the first storage compression scheme to the in-memory compression scheme based on radix clustering; perform a query on the compressed data in the in-memory store; and return a query result.

Instant Application 19/023,104 — Claim 8: A computer-implemented method comprising: reading compressed data from a file, the compressed data having a first storage compression scheme with a first storage compression dictionary and a second storage compression dictionary; hashing entries in the first storage compression dictionary and the second storage compression dictionary; sorting the hashed entries by their associated hash values; updating a hash table with the hashed entries; and based on at least the updated hash table, transcoding the first and second storage compression dictionaries into an in-memory compression dictionary.
Patent US 12,229,168 — Claim 8: A computer-implemented method comprising: receiving a query; reading compressed data from a file, the compressed data in the file having a first storage compression scheme with a first storage compression dictionary; without decompressing the compressed data, loading the compressed data into an in-memory store, the compressed data in the in-memory store having an in-memory compression scheme with an in-memory compression dictionary; transcoding the compressed data from the first storage compression scheme to the in-memory compression scheme based on radix clustering; performing the query on the compressed data in the in-memory store; and returning a query result.

Instant Application 19/023,104 — Claim 15: A computer storage device having computer-executable instructions stored thereon, which, on execution by a computer, cause the computer to perform operations comprising: reading compressed data from a file, the compressed data having a first storage compression scheme with a first storage compression dictionary and a second storage compression dictionary; hashing entries in the first storage compression dictionary and the second storage compression dictionary; sorting the hashed entries by their associated hash values; updating a hash table with the hashed entries; and based on at least the updated hash table, transcoding the first and second storage compression dictionaries into an in-memory compression dictionary.
Patent US 12,229,168 — Claim 15: A computer storage device having computer-executable instructions stored thereon, which, on execution by a computer, cause the computer to perform operations comprising: reading compressed data from a file, the compressed data in the file having a first storage compression scheme with a first storage compression dictionary; without decompressing the compressed data, loading the compressed data into an in-memory store, the compressed data in the in-memory store having an in-memory compression scheme with an in-memory compression dictionary; transcoding the compressed data from the first storage compression scheme to the in-memory compression scheme based on radix clustering; receiving a query from across a computer network; performing the query on the compressed data in the in-memory store; and returning a query result.

Examiner's Note

5. Radix clustering (According to Google): "Radix clustering, which is also known as radix sort, is a non-comparative sorting algorithm that groups data based on individual digits or characters, known as the radix. It organizes elements into 'buckets' according to their place value, from the least significant digit (LSD) to the most significant digit (MSD). This process is repeated for each digit until the entire list is sorted."

Compression dictionary (According to Google): "A compression dictionary is a data structure that maps frequently occurring data patterns, such as words or phrases, to shorter codes or pointers. This dictionary is used in data compression to reduce file size by replacing long sequences with shorter references, and it can be either static (pre-defined) or dynamic (built as the file is processed)."

In-memory compression (According to Google): "In-memory compression refers to the technique of storing data in a compressed format directly within a system's random-access memory (RAM).
The primary goal is to reduce the amount of physical memory required to hold a given dataset, thereby increasing the effective capacity of the RAM and potentially improving system performance."

Transcoding data (According to Google): "Transcoding data is the process of converting a digital media file from one format or compression to another. This is done by decoding the file into an intermediate format, then re-encoding it into a new format, which can be used to make content compatible with different devices, reduce file size, or improve playback quality and speed."

Compression Using a Prefix Tree or Trie (According to Google): "Yes, a compressed trie, which is a space-optimized version of a standard trie (or prefix tree), is also known as a radix tree or radix trie. The terms are often used interchangeably to refer to the same data structure. The 'compression' in a radix tree comes from merging nodes that are the only child of their parent. Instead of having a separate node for each character in a unique prefix chain, a radix tree combines these nodes into a single node with an edge label representing the entire string segment. This reduces the number of nodes and edges, leading to a more compact representation and often better performance for certain operations."

Reading Data With a Stateful Enumerator (According to Google): "A stateful enumerator reads data by maintaining and updating its internal state as it processes a sequence of items. This differs from a standard, stateless enumerator, which simply moves through a collection and processes each item in isolation. Stateful enumerators are useful for tasks that require memory of past events to determine future behavior."

Reading Data With a Callback (According to Google): "Reading data with a callback involves providing a function (the callback) to another function that handles the data retrieval.
This allows the program to continue executing other tasks while the data is being fetched, and then execute the callback function once the data is available. This pattern is particularly useful for asynchronous operations, such as reading files, making network requests, or interacting with databases, which can take an unpredictable amount of time to complete."

Mueller et al., US 2015/0178305, [Abstract and paragraph 11 ("Innovations for adaptive compression and decompression for dictionaries of a column-store database can reduce the amount of memory used for columns of the database, allowing a system to keep column data in memory for more columns, while delays for access operations remain acceptable. For example, dictionary compression variants use different compression techniques and implementation options. Some dictionary compression variants provide more aggressive compression (reduced memory consumption) but result in slower run-time performance. Other dictionary compression variants provide less aggressive compression (higher memory consumption) but support faster run-time performance. As another example, a compression manager can automatically select a dictionary compression variant for a given column in a column-store database. For different dictionary compression variants, the compression manager predicts run-time performance and compressed dictionary size, given the values of the column, and selects one of the dictionary compression variants.")] [Paragraphs 13 and 64 ("when the dictionary is sorted in ascending order, range queries can be performed efficiently. Value IDs for the endpoints of the range can be identified, then rows with value IDs in the range can be returned.
On the other hand, some access operations are slower on compressed data for a column, compared to access operations on uncompressed data, since they involve another layer of lookup operations using a dictionary")] [Paragraph 152 ("For example, the dictionary compression variant uses hashing to map strings to index values (value IDs), compressed text self-indexes, a prefix tree or trie, a suffix tree, a compressed suffix tree, a directed acyclic word graph or another implementation/data structure")] [Paragraphs 17 and 33 ("a compression manager selects one of multiple available dictionary compression variants to apply to a dictionary or a column of a table in a column-store database (e.g., an in-memory column-store database)")] [Paragraph 31 ("selecting dictionary compression variants to apply to dictionaries for an in-memory column store database", i.e., first storage compression dictionary and second storage compression dictionary)] [Paragraph 116 ("the dictionary compression variant uses a Lempel-Ziv approach (e.g., Lempel-Ziv-Welch), run length encoding, arithmetic coding or a Burrows-Wheeler transformation", i.e., transcoding or transforming)] [Paragraph 152 ("compressed text self-indexes, a prefix tree or trie, a suffix tree, a compressed suffix tree, a directed acyclic word graph or another implementation/data structure", i.e., radix compression or radix clustering)].

Kondiles, US 2024/0126762, [Abstract and paragraph 255 ("A storage dataset that includes a plurality of compressed data slabs is created based on the data set, and the storage data set is stored via a plurality of computing devices.
Each compressed data slab of the plurality of compressed data slabs is generated from at least one corresponding uncompressed data slab of the plurality of uncompressed data slabs, and each compressed data slab is generated to include compressed data and compression information")] [Paragraph 254 ("a form of compression to allow for more efficient processing in a massively parallel database system. Uncompressed data slab k (and data slab k+1) is a column of a table that has been sorted based on a key. In an example each data slab includes 156 32-byte data values, however data slabs can be of any reasonable size and include any reasonable number of data values. In an example, logical data block addresses (LBAs) are assigned. Each uncompressed sorted data slab could be each of a portion of a logical block address (LBA), aligned with a LBA, or in an example a given uncompressed sorted data slab could span a plurality of LBAs. In an example an uncompressed sorted data slab could span thousands of LBAs")].

Baskett et al., US 10,474,652, [Abstract ("The unique values can be stored in a dictionary table along with reference keys that point to a row of the database table. A reference store column can replace the original column, where the reference store column stores index values of the dictionary table. A hash table can be used in accessing the database. A hash function can provide a hash value of a query term, and the hash value can be used to access a hash table to obtain a stored value of an index value of the dictionary table")] [Column 4, lines 45-56, and column 5, lines 56-60 ("this procedure for dictionary table 130 can provide a dictionary-storage based compression scheme for the varchar data, since only unique varchars are stored")] [Column 7, lines 24-38 ("For 'Greater Than X', the hash function can be applied to the query term X. The resulting hash output value can be checked against the hash table. A dictionary array index for the dictionary table can be obtained.
If either the dictionary table or the hash table (or potentially both) are ordered, the character field corresponding to the dictionary array index in the character fields corresponding to values greater than the dictionary array index can be returned")].

Conclusion

6. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hung D. Le, whose telephone number is 571-270-1404. The examiner can normally be reached Monday to Friday, 9:00 A.M. to 5:00 P.M. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Apu Mofiz, can be reached at 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, contact 800-786-9199 (in USA or Canada) or 571-272-1000.

Hung Le, 03/06/2026
/HUNG D LE/ Primary Examiner, Art Unit 2161
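The double-patenting dispute turns on the instant claims' dictionary-transcoding steps: hash the entries of two storage compression dictionaries, sort the hashed entries by hash value, update a hash table, and build a single in-memory compression dictionary. A minimal sketch of that flow, using an LSD radix sort as a stand-in for the examiner's "radix clustering" gloss; the function names, the CRC-32 hash choice, and the no-hash-collision simplification are all illustrative assumptions, not the actual claimed implementation:

```python
import zlib

def hash32(value: str) -> int:
    """Stable 32-bit hash of a dictionary entry (CRC-32 is an
    illustrative choice, not from the application)."""
    return zlib.crc32(value.encode("utf-8")) & 0xFFFFFFFF

def radix_sort_by_hash(entries):
    """LSD radix sort of (hash, value) pairs on the 32-bit hash,
    one byte (radix 256) per pass, least to most significant byte.
    Bucket appends preserve order, so each pass is stable."""
    for shift in (0, 8, 16, 24):
        buckets = [[] for _ in range(256)]
        for h, v in entries:
            buckets[(h >> shift) & 0xFF].append((h, v))
        entries = [pair for bucket in buckets for pair in bucket]
    return entries

def transcode(dict_a, dict_b):
    """Merge two storage compression dictionaries into one in-memory
    dictionary keyed by dense value IDs. The hash table lets duplicate
    entries appearing in both dictionaries share a single ID.
    (Simplification: distinct values colliding on hash32 would be
    conflated; a real system needs collision handling.)"""
    hashed = [(hash32(v), v) for v in dict_a] + [(hash32(v), v) for v in dict_b]
    hashed = radix_sort_by_hash(hashed)
    hash_table = {}   # hash value -> new value ID
    in_memory = []    # value ID -> dictionary entry
    for h, v in hashed:
        if h not in hash_table:
            hash_table[h] = len(in_memory)
            in_memory.append(v)
    return hash_table, in_memory

table, merged = transcode(["apple", "pear"], ["pear", "plum"])
# duplicate "pear" collapses to a single in-memory entry
```

The sorted order means the merged dictionary is laid out by hash value, which is what makes the subsequent hash-table lookups during query evaluation cheap in this sketch.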
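The examiner's note also glosses two styles of reading data from a file: with a stateful enumerator and with a callback. A side-by-side sketch of the distinction; `chunk_enumerator` and `read_with_callback` are hypothetical names, not from the application or the cited art:

```python
from typing import Callable, Iterator

def chunk_enumerator(data: bytes, size: int = 4) -> Iterator[bytes]:
    """Stateful enumerator: the generator's internal offset is state
    that persists between successive next() calls."""
    offset = 0
    while offset < len(data):
        yield data[offset:offset + size]
        offset += size

def read_with_callback(data: bytes,
                       on_chunk: Callable[[bytes], None],
                       size: int = 4) -> None:
    """Callback style: the reader drives the loop and invokes the
    user-supplied on_chunk function as each chunk becomes available."""
    for chunk in chunk_enumerator(data, size):
        on_chunk(chunk)

received = []
read_with_callback(b"abcdefghij", received.append)
# received == [b"abcd", b"efgh", b"ij"]
```

The difference is who controls the loop: the caller pulls from the enumerator, while the callback style pushes chunks to the caller, which is why the latter suits asynchronous reads.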

Prosecution Timeline

Jan 15, 2025
Application Filed
Oct 24, 2025
Non-Final Rejection — §DP
Jan 21, 2026
Response Filed
Mar 06, 2026
Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596684 — SYSTEMS AND METHODS FOR SEARCHING DEDUPLICATED DATA (2y 5m to grant; granted Apr 07, 2026)
Patent 12596724 — SYSTEMS AND METHODS FOR USE IN REPLICATING DATA (2y 5m to grant; granted Apr 07, 2026)
Patent 12596736 — SYSTEMS AND METHODS FOR USING PROMPT DISSECTION FOR LARGE LANGUAGE MODELS (2y 5m to grant; granted Apr 07, 2026)
Patent 12591489 — POINT-IN-TIME DATA COPY IN A DISTRIBUTED SYSTEM (2y 5m to grant; granted Mar 31, 2026)
Patent 12585625 — SYSTEM AND METHOD FOR IMPLEMENTING A DATA QUALITY FRAMEWORK AND ENGINE (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 90%
With Interview: 97% (+6.4%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 1,073 resolved cases by this examiner. Grant probability derived from career allow rate.
