Prosecution Insights
Last updated: April 19, 2026
Application No. 19/046,817

DATA SYSTEM CONFIGURED TO TRANSPARENTLY CACHE DATA OF DATA SOURCES AND ACCESS THE CACHED DATA

Non-Final OA §DP
Filed
Feb 06, 2025
Examiner
OWYANG, MICHELLE N
Art Unit
2168
Tech Center
2100 — Computer Architecture & Software
Assignee
Dremio Corporation
OA Round
1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Grants 76% — above average
Career Allow Rate: 76% (464 granted / 610 resolved; +21.1% vs TC avg)
Strong +30% interview lift
+29.9% higher allowance rate for resolved cases with an interview vs. without.
Typical timeline: 3y 1m avg prosecution; 16 currently pending
Career history: 626 total applications across all art units

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 37.6% (-2.4% vs TC avg)
§102: 12.6% (-27.4% vs TC avg)
§112: 19.1% (-20.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 610 resolved cases

Office Action

§DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-21 are pending.

Specification

The abstract of the disclosure is objected to because the abstract contains the legal term “embodiments,” which should be removed since abstracts are not supposed to include legal terms. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-21 of U.S. Patent No. 11,537,617 (Appl. No. 16/861,048). Although the claims at issue are not identical, they are not patentably distinct from each other because both are directed to a similar invention with similar limitations, as demonstrated in the table below (instant application claims, followed by the corresponding claims of U.S. Patent No. 11,537,617):

Instant Application

1. A non-transitory computer-readable medium storing computer-executable instructions that, when executed by at least one computer processor, cause the at least one computer processor to carry out operations comprising: obtaining the data object from an external data source; caching the data object in a storage location of a data system as a cached data object; generating a unit of hashing corresponding to an output of a hash algorithm based on an input indicative of the data object; mapping the cached data object to the external data source in accordance with the unit of hashing, wherein the cached data object is updatable automatically with the external data source based on the unit of hashing, receiving a query configured for reading data stored at the external data source to which the cached data object is mapped in accordance with the unit of hashing, wherein a first query result that satisfies the query includes the data object stored at the external data source; in response to the query, using the unit of hashing to obtain a second query result that is determined to satisfy the query by reading the cached data object stored in the storage location at the data system instead of reading the data object stored at the external data source, wherein the storage location of the cached data object is determined based on the mapping in accordance with unit of hashing; and returning the second query result including the cached data object read from the storage location.

2. The non-transitory computer-readable medium of claim 1, wherein the operations further comprise: automatically selecting the data object for caching in the data system when a frequency of accessing the data object exceeds a threshold.

3.
The non-transitory computer-readable medium of claim 1, wherein the query is a first query and the operations further comprise: receiving a second query for data stored at the external data source; determining that the cached data object is outdated relative to a current data object stored at the external data source; and caching the current data object in the data system to replace the cached data object, wherein a result that satisfies the second query is obtained from the data system instead of the external data source.

4. The non-transitory computer-readable medium of claim 1, wherein the input indicative of the data object includes a combination comprising: an indication of a cluster of nodes associated with the data object, a type of the data object, a path and name of the data object, and information about a split of the data object.

5. The non-transitory computer-readable medium of claim 1, wherein the operations further comprise, prior to determining the storage location of the cached data object based on the unit of hashing: determining the storage location for caching the data object based on the unit of hashing.

6. The non-transitory computer-readable medium of claim 1, wherein the input indicative the data object includes a combination comprising: an indication of a cluster of nodes associated with the data object, a type of the data object, a path and name of the data object, or information about a split of the data object.

7. The non-transitory computer-readable medium of claim 1, wherein the input indicative of the data object includes location-dependent information and location-independent information.

8.
A non-transitory computer-readable medium storing computer-executable instructions that, when executed by at least one computer processor, cause the at least one computer processor to carry out operations comprising: receiving a query configured for reading a data object stored at an external data source, wherein a first query result that satisfies the query includes the data object stored at the external data source; generating a query plan by parsing the query into a plurality of phases, each phase being configured to read a fragment of the data object from the external data source; generating a unit of hashing corresponding to an output of a hash algorithm based on an input indicative of the data object; using a unit of hashing to map fragments of the data object of the external data source to a cluster of nodes of a data system; generating a read request for the data object in accordance with the unit of hashing to read the fragments of the data object of the external data source; processing the read request by the cluster of nodes that divides the data object into discrete logical blocks, the read request using the unit of hashing to determine a link to one or more logical blocks of the external data source and using storage software to read the one or more logical blocks instead of reading the data object stored at the external data source; and returning a second query result that is determined to satisfy the query, the second query result including the data object obtained from the one or more logical blocks.

9. The computer-readable medium of claim 8, wherein the discrete logical blocks have a common size and a scope of the read request is greater than the common size.

10. The computer-readable medium of claim 8, wherein the storage software includes a plurality of format readers configured to process a plurality of different types of data objects.

11.
The computer-readable medium of claim 8, wherein the operations further comprise, prior to processing the read request: performing a lookup process to compare a cached version of the data object with a version of the data object stored at the external data source.

12. The computer-readable medium of claim 8, wherein each node independently maintains a database that only tracks locally stored data objects and associated version information.

13. The computer-readable medium of claim 8, wherein processing the read request by the cluster of nodes comprises: returning a set of nodes for processing the fragments in proportion to a number of configured rings.

14. The computer-readable medium of claim 13, wherein a load balancer enables multiple rings for a data object so that multiple copies of data objects are cached, and the load balancer maps read requests for a data object to the cluster of nodes.

15. A computing system comprising: at least one computer processor; and at least one memory device storing non-transitory computer-executable instructions that are executable to cause the at least one computer processor to perform operations comprising: receiving a query configured for reading a data file stored at an external storage device, the data file being divided into multiple discrete logical blocks and associated with different instances of metadata stored at both a local cache storage device and the external storage device, wherein the different instances of metadata are mapped between the local cache storage device and the external storage device based on a unit of hashing, and wherein the unit of hashing corresponds to an output of a hash algorithm based on an input indicative of the data file; comparing a cached instance of a particular metadata at the local cache storage device with a stored instance of the particular metadata at the external storage device; determining that the cached instance of the particular metadata is different from the stored instance of the particular metadata; responsive to determining that the cached instance of the particular metadata is different from the stored instance of the particular metadata, reading the data file stored at the external storage device instead of reading the cached instance of the particular metadata from the local cache storage device; returning a query result including the data file obtained from the external storage device, the query result being determined to satisfy the query; and updating the multiple discrete logical blocks of the local cache storage device with the data file obtained from the external storage device, wherein the updated multiple discrete logical blocks include an updated cached instance of the particular metadata.

16. The computing system of claim 15, wherein the operations further comprise: receiving another query for the data file; determining that the updated cached instance of the particular metadata corresponds to the stored instance of the particular metadata; responsive to determining that the updated cached instance of the particular metadata corresponds to the stored instance of the particular metadata, reading the data file of the local cache storage device; and returning another query result including the data file obtained from the local cache storage device without reading the data file from the external storage device.

17. The computing system of claim 15, wherein the different instances of metadata include different versions of the data file.

18. The computing system of claim 15, wherein processing the query result and updating the multiple discrete logical blocks of the local cache storage device occurs asynchronously.

19.
A computing system comprising: at least one computer processor; and at least one memory device storing non-transitory computer-executable instructions that are executable to cause the at least one computer processor to perform operations comprising: receiving, by the computing system, a read request configured for reading a data file that is stored at a local cache storage, wherein: a copy of the data file is stored at an external data storage, the data file is stored at a storage location of the local cache storage and mapped to the external data storage based on a unit of hashing, and the unit of hashing corresponds to an output of a hash algorithm based on an input indicative of the data file; selecting, by the computing system, a particular format reader of a plurality of format readers, the plurality of format readers being configured to read different types of data files, the particular format reader being selected based on a type of the data file in the read request; modifying, by using the particular format reader, the read request to include an attribute of the data file, the attribute depending on the type of the data file; parsing, by using the particular format reader, the data file for the read request into discrete logical blocks depending on the type of the data file; and reading, by using the particular format reader, data of the data file and the attribute stored at the external data storage unless the data file stored at the local cache storage is a current version of the data file stored and the attribute such that the data file and the attribute are read from the local cache storage instead of being read from the external data storage.

20. The computing system of claim 19, wherein the operations further comprise: updating the data file and the attribute stored at the local cache storage with the data file and the attribute obtained from the external data storage.

21.
The computing system of claim 19, wherein the plurality of format readers are configured to read an Apache Parquet type file, an optimized row columnar (ORC) type file, and a comma-separated values (CSV) type file.

U.S. Patent No. 11,537,617

1. A method for caching a data object in a data system, the method comprising: obtaining the data object from an external data source; caching the data object in a storage location of the data system as a cached data object, generating a unit of hashing corresponding to an output of a hash algorithm based on an input indicative of the data object; mapping the cached data object to the external data source in accordance with the unit of hashing, wherein the cached data object is updatable automatically with the external data source based on the unit of hashing, receiving a query configured for reading data stored at the external data source to which the cached data object is mapped in accordance with the unit of hashing, wherein a first query result that satisfies the query includes the data object stored at the external data source; in response to the query, using the unit of hashing to obtain a second query result that is determined to satisfy the query by reading the cached data object stored in the storage location at the data system instead of reading the data object stored at the external data source, wherein the storage location of the cached data object is determined based on the mapping in accordance with unit of hashing; and returning the second query result including the cached data object read from the storage location.

2. The method of claim 1 further comprising, prior to caching the data object in the data system: automatically selecting the data object for caching in the data system when a frequency of accessing the data object exceeds a threshold.

3.
The method of claim 1, wherein the query is a first query, the method further comprises: receiving a second query for data stored at the external data source; determining that the cached data object is outdated relative to a current data object stored at the external data source; and caching the current data object in the data system to replace the cached data object, wherein a result that satisfies the second query is obtained from the data system instead of the external data source.

4. The method of claim 1, wherein the input indicative of the data object includes a combination comprising: an indication of a cluster of nodes associated with the data object, a type of the data object, a path and name of the data object, and information about a split of the data object.

5. The method of claim 1 further comprising, prior to determining the storage location of the cached data object based on the unit of hashing: determining the storage location for caching the data object based on the unit of hashing.

6. The method of claim 1, wherein the input indicative the data object includes a combination comprising: an indication of a cluster of nodes associated with the data object, a type of the data object, a path and name of the data object, or information about a split of the data object.

7. The method of claim 1, wherein the input indicative of the data object includes location-dependent information and location-independent information.

8.
A method comprising: receiving a query configured for reading a data object stored at an external data source, wherein a first query result that satisfies the query includes the data object stored at the external data source; generating a query plan by parsing the query into a plurality of phases, each phase being configured to read a fragment of the data object from the external data source; generating a unit of hashing corresponding to an output of a hash algorithm based on an input indicative of the data object; using a unit of hashing to map fragments of the data object of the external data source to a cluster of nodes of a data system; generating a read request for the data object in accordance with the unit of hashing to read the fragments of the data object of the external data source; processing the read request by the cluster of nodes that divides the data object into discrete logical blocks, the read request using the unit of hashing to determine a link to one or more logical blocks of the external data source and using storage software to read the one or more logical blocks instead of reading the data object stored at the external data source; and returning a second query result that is determined to satisfy the query, the second query result including the data object obtained from the one or more logical blocks.

9. The method of claim 8, wherein the discrete logical blocks have a common size and a scope of the read request is greater than the common size.

10. The method of claim 8, wherein the storage software includes a plurality of format readers configured to process a plurality of different types of data objects.

11. The method of claim 8 further comprising, prior to processing the read request: performing a lookup process to compare a cached version of the data object with a version of the data object stored at the external data source.

12.
The method of claim 8, wherein each node independently maintains a database that only tracks locally stored data objects and associated version information.

13. The method of claim 8, wherein processing the read request by the cluster of nodes comprises: returning a set of nodes for processing the fragments in proportion to a number of configured rings.

14. The method of claim 13, wherein a load balancer enables multiple rings for a data object so that multiple copies of data objects are cached, and the load balancer maps read requests for a data object to the cluster of nodes.

15. A method comprising: receiving a query configured for reading a data file stored at an external storage device, the data file being divided into multiple discrete logical blocks and associated with different instances of metadata stored at both a local cache storage device and the external storage device, wherein the different instances of metadata are mapped between the local cache storage device and the external storage device based on a unit of hashing, and wherein the unit of hashing corresponds to an output of a hash algorithm based on an input indicative of the data file; comparing a cached instance of a particular metadata at the local cache storage device with a stored instance of the particular metadata at the external storage device; determining that the cached instance of the particular metadata is different from the stored instance of the particular metadata; responsive to determining that the cached instance of the particular metadata is different from the stored instance of the particular metadata, reading the data file stored at the external storage device instead of reading the cached instance of the particular metadata from the local cache storage device; returning a query result including the data file obtained from the external storage device, the query result being determined to satisfy the query; and updating the multiple discrete logical blocks of the local cache storage device with the data file obtained from the external storage device, the updated multiple discrete logical blocks include an updated cached instance of the particular metadata.

16. The method of claim 15 further comprising: receiving another query for the data file; determining that the updated cached instance of the particular metadata corresponds to the stored instance of the particular metadata; responsive to determining that the updated cached instance of the particular metadata corresponds to the stored instance of the particular metadata, reading the data file of the local cache storage device; and returning another query result including the data file obtained from the local cache storage device without reading the data file from the external storage device.

17. The method of claim 15, wherein the different instances of metadata include different versions of the data file.

18. The method of claim 15, wherein processing the query result and updating the multiple discrete logical blocks of the local cache storage device occurs asynchronously.

19.
A method comprising: receiving, by a data system, a read request configured for reading a data file that is stored at a local cache storage; wherein a copy of the data file is stored at an external data storage, wherein the data file is stored at a storage location of the local cache storage and mapped to the external data storage based on a unit of hashing, and wherein the unit of hashing corresponds to an output of a hash algorithm based on an input indicative of the data file; selecting, by the data system, a particular format reader of a plurality of format readers, the plurality of format readers being configured to read different types of data files, the particular format reader being selected based on a type of the data file in the read request; modifying, by using the particular format reader, the read request to include an attribute of the data file, the attribute depending on the type of the data file; parsing, by using the particular format reader, the data file for the read request into discrete logical blocks depending on the type of the data file; and reading, by using the particular format reader, data of the data file and the attribute stored at the external data storage unless the data file stored at the local cache storage is a current version of the data file stored and the attribute such that the data file and the attribute are read from the local cache storage instead of being read from the external data storage.

20. The method of claim 19 further comprising: updating the data file and the attribute stored at the local cache storage with the data file and the attribute obtained from the external data storage.

21. The method of claim 19, wherein the plurality of format readers are configured to read an Apache Parquet type file, an optimized row columnar (ORC) type file, and a comma-separated values (CSV) type file.

As demonstrated by the mappings in the table above, U.S. Patent No. 11,537,617 discloses or renders obvious all the features of the claims of the instant application.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michelle Owyang, whose telephone number is (571)270-1254. The examiner can normally be reached Monday-Friday, 8am-6pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones, can be reached at (571)272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE N OWYANG/
Primary Examiner, Art Unit 2168
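Both claim sets turn on a "unit of hashing": a hash computed over an input that identifies a data object (claim 4 lists the combination: an indication of a cluster of nodes, the object type, the path and name, and split information), used to map a cached copy to its external source and to a storage location. A minimal sketch of that idea follows; all names are hypothetical, and the claims do not mandate SHA-256, the key layout, or modulo placement.

```python
import hashlib

def unit_of_hashing(cluster: str, obj_type: str, path: str, split: str) -> str:
    """Hash an input identifying a data object (the claim 4 combination:
    cluster of nodes, object type, path and name, split information)."""
    key = "|".join([cluster, obj_type, path, split])
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

def storage_location(unit: str, num_nodes: int) -> int:
    """Map a unit of hashing to a cache slot. Modulo placement is a stand-in;
    the claims only require a deterministic mapping from hash to location."""
    return int(unit, 16) % num_nodes

unit = unit_of_hashing("cluster-a", "parquet", "/warehouse/sales/part-0.parquet", "split-0")
node = storage_location(unit, num_nodes=8)  # same input always maps to the same slot
```

Because the mapping is deterministic, a later query carrying the same identifying input recomputes the same unit of hashing and lands on the cached copy without consulting the external source.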
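Claim 19 (and its patent counterpart) combines two steps: selecting a format reader by file type (Parquet, ORC, or CSV per claim 21) and reading from external storage unless the local cache holds the current version. A hedged sketch of that dispatch, with hypothetical reader stubs in place of real Parquet/ORC/CSV parsers:

```python
# Hypothetical format readers; real implementations would parse Parquet/ORC/CSV.
def read_parquet(path: str) -> str: return "parquet"
def read_orc(path: str) -> str: return "orc"
def read_csv(path: str) -> str: return "csv"

FORMAT_READERS = {".parquet": read_parquet, ".orc": read_orc, ".csv": read_csv}

def select_reader(path: str):
    """Select a format reader based on the type of the data file (claim 19)."""
    for suffix, reader in FORMAT_READERS.items():
        if path.endswith(suffix):
            return reader
    raise ValueError(f"no format reader for {path}")

def read_data_file(path: str, cached: dict, external: dict):
    """Read from the external data storage unless the local cache holds the
    current version, in which case serve the cached copy (claim 19 sketch)."""
    reader = select_reader(path)
    store = cached if cached["version"] == external["version"] else external
    return reader(path), store["data"]

external = {"version": "v2", "data": "remote bytes"}
fmt, data = read_data_file("sales.csv", {"version": "v2", "data": "local bytes"}, external)
fmt2, stale_data = read_data_file("sales.csv", {"version": "v1", "data": "old bytes"}, external)
```

Here the first call serves the cached copy (versions match) while the second falls back to external storage; the version comparison stands in for the metadata-instance comparison recited in claims 15-16.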

Prosecution Timeline

Feb 06, 2025
Application Filed
Nov 25, 2025
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566764: Ambient Multi-Device Framework for Agent Companions (2y 5m to grant; granted Mar 03, 2026)
Patent 12566799: TRANSACTION EXCHANGE PLATFORM HAVING CONFIGURABLE MICROSERVICES (2y 5m to grant; granted Mar 03, 2026)
Patent 12561286: COMPRESSION TECHNIQUES FOR VERTICES OF GRAPHIC MODELS (2y 5m to grant; granted Feb 24, 2026)
Patent 12547605: PERFORMING LOAD ERROR TRACKING DURING LOADING OF DATA FOR STORAGE VIA A DATABASE SYSTEM (2y 5m to grant; granted Feb 10, 2026)
Patent 12536235: USING A MACHINE LEARNING SYSTEM TO PROCESS A CORPUS OF DOCUMENTS ASSOCIATED WITH A USER TO DETERMINE A USER-SPECIFIC AND/OR PROCESS-SPECIFIC CONSEQUENCE INDEX (2y 5m to grant; granted Jan 27, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+29.9%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 610 resolved cases by this examiner. Grant probability derived from career allow rate.
