Prosecution Insights
Last updated: April 19, 2026
Application No. 19/169,996

Utilizing Native Operators to Optimize Query Execution on a Disaggregated Cluster

Non-Final OA (§103, §DP)
Filed
Apr 03, 2025
Examiner
HWA, SHYUE JIUNN
Art Unit
2156
Tech Center
2100 — Computer Architecture & Software
Assignee
Wind Jammer Technologies LLC
OA Round
1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career allow rate: 82% (703 granted / 852 resolved), +27.5% vs Tech Center average (above average)
Interview lift: +39.0% allowance lift for resolved cases with an interview vs. without
Typical timeline: 3y 2m average prosecution; 28 applications currently pending
Career history: 880 total applications across all art units

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 42.1% (+2.1% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Based on career data from 852 resolved cases; Tech Center averages are estimates.

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1 and 26-47 are pending in this office action. This action is responsive to Applicant's application filed 06/13/2025.

Priority

3. Applicant's claim for the benefit of priority as a Continuation-in-Part of Application No. 17/740,230, filed 05/09/2022, now U.S. Patent No. 12,271,375, which is in turn a Continuation-in-Part of Application No. 17/017,318, filed 09/10/2020, now U.S. Patent No. 11,327,966, which claims priority from Provisional Application No. 62/898,331, filed 09/10/2019, is acknowledged. Since the continuation-in-part application relies on only part of the priority document, the claim of priority will be considered on a claim-by-claim basis. The priority date of the instant application is at least 04/03/2025 (the filing date) but, depending upon the specific material claimed, could be as early as 09/10/2019.

Information Disclosure Statement

4. The references listed in the IDSs filed 06/16/2025 and 10/10/2025 have been considered. A copy of the signed or initialed IDS is hereby attached.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

5. Claims 1 and 26-47 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-30 of U.S. Patent No. 12,271,375. Although the conflicting claims are not identical, they are not patentably distinct from each other because they are substantially similar in scope and they use the same limitations. The following table shows claims 1 and 26-35 of the instant application alongside the corresponding claims 1-3, 5-7, 9, and 11-13 of U.S. Patent No. 12,271,375.

Instant Application

1.
One or more non-transitory computer-readable storage mediums storing one or more sequences of instructions for executing a query in a disaggregated cluster, which when executed, cause: receiving, at the disaggregated cluster, a query plan for a query, wherein the disaggregated cluster comprises one or more compute nodes and one or more storage nodes, wherein at least one of the one or more compute nodes and at least one of the one or more storage nodes are implemented by separate physical machines accessible over a network, wherein said query plan describes (a) the computation to be performed, represented as a query tree comprising a hierarchy of vertices, each of which corresponds to a query operator that is responsible for executing a portion of the query and (b) the data sets to which the query requires access; employing one or more execution engine instances to optimize execution of query fragments of the query plan by utilizing local resources of a compute node of said disaggregated cluster upon which the execution engine instance executes to (a) create and execute parallel pipelines of sequences of native operators corresponding to vertices of linear subtrees of a query plan fragment and (b) prefetch a plurality of data sets identified as being responsive to at least a portion of said query fragment from at least one storage node of said disaggregated cluster; and obtaining and providing a result for said query.

26. The one or more non-transitory computer-readable storage mediums of claim 1, wherein said one or more storage nodes include or correspond to one or more of: a cloud object store, a Hadoop Distributed File System (HDFS), and a Network File System (NFS).

27.
The one or more non-transitory computer-readable storage mediums of claim 1, wherein said one or more storage nodes include or correspond to one or more of: an analytics database, a data warehouse, a transactional database, an Online Transaction Processing (OLTP) system, a NoSQL database, and a Graph database.

28. The one or more non-transitory computer-readable storage mediums of claim 1, wherein the one or more storage nodes include at least one data lake which is accessed by at least one of said one or more execution engine instances, and wherein a data lake is a repository that stores structured data and unstructured data.

29. The one or more non-transitory computer-readable storage mediums of claim 1, wherein a set of compute nodes which are participating in the query execution, of the one or more compute nodes, issue read operation requests against the one or more storage nodes in advance of when results of said read operation requests are required by said set of compute nodes.

30. The one or more non-transitory computer-readable storage mediums of claim 1, wherein execution of the one or more sequences of instructions further causes: maintaining a DRAM cache of prefetched data sets in available DRAM of at least one of the one or more compute nodes of said disaggregated cluster.

31. The one or more non-transitory computer-readable storage mediums of claim 30, wherein said DRAM cache is backed by asynchronously writing prefetched data sets into available local storage and resolving misses which occur in the DRAM cache by retrieving from local storage when present rather than retrieving from disaggregated storage nodes.

32. The one or more non-transitory computer-readable storage mediums of claim 1, wherein the one or more compute nodes are transient instances that can cease operation during the processing of the query, and wherein the composition of the one or more compute nodes changes during the processing of the query.

33.
The one or more non-transitory computer-readable storage mediums of claim 1, wherein execution of the one or more sequences of instructions further causes: the one or more compute nodes each periodically and asynchronously persistently storing, on one or more of said storage nodes, recovery state data that describes a present state of processing operations pertaining to said query tree; and in response to (a) any of said one or more compute nodes encountering a fault or becoming disabled or (b) adding a new compute node to said disaggregated cluster, all operational nodes of said one or more compute nodes continue processing the query by retrieving the recovery state data associated with the query tree stored by each of the one or more compute nodes without starting said processing over from the beginning.

34. The one or more non-transitory computer-readable storage mediums of claim 33, wherein the recovery state data comprises a minimal state for the recovery of each native operator, including hash tables, sorted data, and aggregation tables.

35. The one or more non-transitory computer-readable storage mediums of claim 33, wherein the recovery state data comprises only data required to resume processing the query tree from a checkpoint.

U.S. Patent No. 12,271,375

1.
One or more non-transitory computer-readable storage mediums storing one or more sequences of instructions for executing a query in a disaggregated cluster, which when executed, cause: receiving, at the disaggregated cluster, the query, wherein the disaggregated cluster comprises one or more compute nodes and one or more storage nodes, wherein at least one of the one or more compute nodes and at least one of the one or more storage nodes are implemented by separate physical machines accessible over a network; creating, at a particular compute node of the disaggregated cluster, a query graph based on the query, wherein the query graph identifies a hierarchy of vertices, wherein each vertex of the query graph is associated with a set of data responsive to at least a portion of the query; the one or more compute nodes processing the query graph by: (a) identifying a minimum set of tables, files, and objects stored on the one or more storage nodes whose access is required to retrieve data that satisfy the query, (b) selectively assigning the identified tables, files, and objects to a leaf vertex of said query graph to optimize retrieving data from the one or more storage nodes to minimize query execution engine processing stalls, and (c) processing data set partitions associated with each vertex of the query graph, wherein leaf vertices of the query graph are performed in parallel, wherein work associated with each vertex of the query graph is performed in parallel on each of said one or more compute nodes, of the disaggregated cluster, which are participating in the query execution, and wherein processing said data set partitions comprises using a native massively parallel processing (MPP) engine which stages data and selects algorithms to execute queries by (i) selecting in-memory hash joins in lieu of sort merge joins whenever sufficient DRAM of the disaggregated cluster is available, and (ii) dynamically estimating requirements in relation to cluster resource 
availability of said disaggregated cluster; and providing a result set for said query.

2. The one or more non-transitory computer-readable storage mediums of claim 1, wherein said one or more storage nodes include or correspond to one or more of: a cloud object store, a Hadoop Distributed File System (HDFS), and a Network File System (NFS).

3. The one or more non-transitory computer-readable storage mediums of claim 1, wherein said one or more storage nodes include or correspond to one or more of: an analytics database, a data warehouse, a transactional database, an Online Transaction Processing (OLTP) system, a NoSQL database, and a Graph database.

7. The one or more non-transitory computer-readable storage mediums of claim 1, wherein the one or more storage nodes include at least one data lake, and wherein a data lake is a repository that stores structured data and unstructured data.

5. The one or more non-transitory computer-readable storage mediums of claim 1, wherein a set of compute nodes which are participating in the query execution, of the one or more compute nodes, issue read operation requests against the one or more storage nodes in advance of when results of said read operation requests are required by said set of compute nodes.

9. The one or more non-transitory computer-readable storage mediums of claim 1, wherein the one or more compute nodes processing the query graph further comprises: pre-fetching data sets, preidentified as being responsive to at least a portion of said query, from at least one storage node and maintaining a DRAM cache of the prefetched data sets in the available DRAM of at least one of the one or more compute nodes of said disaggregated cluster by asynchronously backing the prefetched data into a non-volatile storage device.

6.
The one or more non-transitory computer-readable storage mediums of claim 1, wherein the one or more compute nodes are transient instances that can cease operation during the processing of the query, and wherein the composition of the one or more compute nodes changes during the processing of the query.

11. The one or more non-transitory computer-readable storage mediums of claim 1, wherein the one or more compute nodes processing the query graph further comprises: the one or more compute nodes each periodically and asynchronously persistently storing, on one or more of said storage nodes, recovery state data that describes a present state of processing operations pertaining to said query graph; and in response to (a) any of said one or more compute nodes encountering a fault or becoming disabled or (b) said one or more compute nodes adding a new compute node thereto, all operational nodes of said one or more compute nodes continue processing the query graph by retrieving the recovery state data associated with the query graph stored by each of the one or more compute nodes without starting said processing over from the beginning.

12. The one or more non-transitory computer-readable storage mediums of claim 11, wherein the recovery state data comprises in-memory hash tables stored in volatile memory of a node performing a hash-join operation, sort data stored in volatile memory of a node performing a sort operation, and aggregation tables stored in volatile memory of a node performing an aggregation operation.

13. The one or more non-transitory computer-readable storage mediums of claim 11, wherein the recovery state data comprises only data required to resume processing the query graph from a checkpoint.

Although the conflicting claims are not identical, they are not patentably distinct from each other because they are substantially similar in scope and they use the same limitations.
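As an editorial illustration only (not part of the Office Action or the claims), the execution model recited in instant claim 1 above, in which linear subtrees of the query tree run as parallel operator pipelines over prefetched data sets, can be sketched in miniature. All data set names, values, and operators below are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical storage-node contents; all names and values are illustrative.
STORAGE = {
    "orders":  [{"id": 1, "amount": 40}, {"id": 2, "amount": 75}],
    "returns": [{"id": 3, "amount": 15}],
}

def prefetch(dataset):
    """Stand-in for a read request issued to a storage node ahead of need."""
    return list(STORAGE[dataset])

def run_pipeline(dataset, operators):
    """One linear subtree: scan the prefetched data set, then apply each
    native operator in sequence (leaf to root)."""
    rows = prefetch(dataset)
    for op in operators:
        rows = op(rows)
    return rows

# Two query-plan fragments executed as parallel pipelines on a compute node.
fragments = [
    ("orders",  [lambda rows: [r for r in rows if r["amount"] > 50]]),
    ("returns", [lambda rows: rows]),
]
with ThreadPoolExecutor() as pool:
    parts = list(pool.map(lambda frag: run_pipeline(*frag), fragments))

# The root vertex combines the fragment results into the query result.
result = [row for part in parts for row in part]
```

The sketch only shows the shape of the claimed flow (prefetch, then pipelined operators, then a combining root); it omits the native-code generation and resource-aware scheduling the claims recite.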
After analyzing the language of the claims, it is clear that claims 1 and 26-47 are merely an obvious variation of claims 1-30 of U.S. Patent No. 12,271,375. This is clear under the broadest reasonable interpretation of the claims. Therefore, these two sets of claims are not patentably distinct.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims under 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of 35 U.S.C. 103(c) and potential 35 U.S.C. 102(e), (f) or (g) prior art under 35 U.S.C. 103(a).

6. Claims 1, 27, 29, 32-35, 36, 38, 40, and 43-47 are rejected under 35 U.S.C. 103(a) as being unpatentable over Hunt et al. (US Patent Publication No. 2009/0018996 A1, hereinafter “Hunt”) in view of Kothari et al. (US Patent Publication No. 2019/0303405 A1, hereinafter “Kothari”).
As to Claim 1, Hunt teaches the claimed limitations: “One or more non-transitory computer-readable storage mediums storing one or more sequences of instructions for executing a query in a disaggregated cluster, which when executed, cause:” as in-store media presence and conditions may also be integrated to facilitate providing additional insights on this emerging communications medium (paragraph 1286). The analytic platform includes methods and systems for providing various representations of data and metadata, methodologies for acting on data and metadata, an analytic engine, and a data management facility that is capable of handling disaggregated data and performing aggregation, calculations, functions, and real-time or quasi-real-time projections (paragraphs 0117, 0119-0120).

“Receiving, at the disaggregated cluster, a query plan for a query, wherein the disaggregated cluster comprises one or more compute nodes and one or more storage nodes, wherein at least one of the one or more compute nodes and at least one of the one or more storage nodes are implemented by separate physical machines accessible over a network” as distributed calculations may include a projection method that has a separate member list for every cell in the projected data set. In embodiments, aggregating data may not build hierarchical bias into the projected data set. In embodiments, a flexible hierarchy created by the tuple’s facility may be provided in association with the projected data set (paragraph 0138). A master data management hub (MDMH) may accommodate a blend of disaggregated and pre-aggregated data as necessitated by a client's needs. For example, a client in the retail industry may have a need for a rolling, real-time assessment of store performance within a sales region.
The ability of the MDMH to accommodate twinkle data, and the like, may give the client useful insights into disaggregated sales data as it becomes available and make it possible to create projections based upon it and other available data (paragraph 0152).

Failover clusters may operate using redundant nodes, which may be used to provide service when system components fail. Failover cluster implementations may manage the redundancy inherent in a cluster to minimize the impact of single points of failure. Load-balancing clusters may operate by having all workload come through one or more load-balancing front ends, which then distribute it to a collection of back-end servers; such a cluster of computers is sometimes referred to as a server farm. High-performance clusters may be implemented to provide increased performance by splitting a computational task across many different nodes in the cluster. Such clusters commonly run custom programs which have been designed to exploit the parallelism available on high-performance clusters (paragraphs 0272, 0312).

“Wherein said query plan describes (a) the computation to be performed, represented as a query tree comprising a hierarchy of vertices, each of which corresponds to a query operator that is responsible for executing a portion of the query” as a threading model may be used for inter-processing communication between the nodes and the master; multiple threads may run within one logical process, with separate physical processes running on different machines. A new series of threads may be created for new thread arrival. The listener threads may be designed to look for information from a specific slave source. If a query comes into the system, a new collator thread may be created, a new worker thread created in each slave node, and information sent from each slave node to a listener on the master that passes information to the collator thread created for that query.
The collator thread may then pass information back through the socket to the ODBC client. The SQL query may be translated into something the server can understand. Next, the master node may pass a thread to all nodes as part of a Query One. The first node may retrieve Store One data, and may add up a partial result and creates a data tuple that it communicates back to the listener for that slave node. The Second Node may do the same thing and communicate with its listener. Nodes with only Store Two may do nothing. At the master node, the collator may add up the results from the two relevant listeners' results (paragraphs 0283-0285). A user may interact with the map, such as by clicking on particular stores, encircling them with a perimeter, specifying a distance from a center location, or otherwise interacting with the map, thus establishing a desired geographic dimension for a view. The desired geographic dimension can then be used as the dimension for a view or query of that market, such as to show store data for the selected geographic area, to make a projection to stores in that area, or the like. In other embodiments, other dimensions may similarly be presented graphically, so that users can select dimensions by interacting with shapes, graphs, charts, maps in order to select dimensions (paragraph 0297). 
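As an editorial illustration only, the master/worker/collator threading model the Office Action attributes to Hunt (paragraphs 0283-0285), in which each slave node computes a partial result for the stores it holds and the collator adds them up, can be sketched as follows. The node and store names and data are invented for the example:

```python
import queue
import threading

# Hypothetical per-node data: each slave node holds data for some stores
# and contributes a partial result; a node with no relevant store does nothing.
NODE_DATA = {
    "node1": {"store1": [10, 20]},
    "node2": {"store2": [5]},
    "node3": {},
}

def worker(node, store, out):
    """Worker thread on a slave node: compute a partial sum and send a
    (node, partial) tuple back toward the master's listener."""
    out.put((node, sum(NODE_DATA[node].get(store, []))))

def collate(store):
    """Collator-thread logic on the master: fan the query out to every
    slave node, then add up the partial results."""
    out = queue.Queue()
    threads = [threading.Thread(target=worker, args=(n, store, out))
               for n in NODE_DATA]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(out.get()[1] for _ in threads)
```

For instance, `collate("store1")` sums only the partial result from the node holding Store One data, with the other nodes contributing zero, mirroring the Query One walkthrough above.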
“(b) the data sets to which the query requires access; employing one or more execution engine instances to optimize execution of query fragments of the query plan by utilizing local resources of a compute node of said disaggregated cluster upon which the execution engine instance executes to” as systems and methods may involve using a platform for applications where the systems and methods involve receiving a post-perturbation dataset, wherein the post-perturbation dataset is based on finding non-unique values in a data table, perturbing the non-unique values to render unique values, and using non-unique values as identifiers for data items. It may also involve storing the post-perturbation dataset in a partition within a partitioned database, wherein the partition is associated with a data characteristic. It may also involve associating a master processing node with a plurality of slave nodes, wherein each of the plurality of slave nodes is associated with a partition of the partitioned database. It may also involve submitting an analytic query to the master processing node, and processing the query by the master node assigning processing steps to an appropriate slave node (paragraph 0649).

Hunt does not explicitly teach the claimed limitation “(a) create and execute parallel pipelines of sequences of native operators corresponding to vertices of linear subtrees of a query plan fragment and (b) prefetch a plurality of data sets identified as being responsive to at least a portion of said query fragment from at least one storage node of said disaggregated cluster; and obtaining and providing a result for said query”.
Kothari teaches the memory may store instructions executable by the processor to: access a first join graph representing tables in a database, wherein the first join graph has vertices corresponding to respective tables in the database and directed edges corresponding to many-to-one join relationships; receive a first query that references data in two or more of the tables of the database; select a connected subgraph of the first join graph that includes the two or more tables referenced in the first query; generate multiple leaf queries that reference respective subject tables that are each a root table of the connected subgraph or a table including a measure referenced in the first query, wherein generating at least two of the leaf queries includes inserting a reference to a primary key column for a shared attribution dimension table of the respective subject tables of the at least two of the leaf queries; generate a query graph that specifies joining of results from queries based on the multiple leaf queries to obtain a transformed query result for the first query, wherein the query graph has a single root node corresponding to the transformed query result; and invoke a transformed query on the database that is based on the query graph and the queries based on the multiple leaf queries to obtain the transformed query result (paragraphs 0004-6, 0076). The systems and techniques may provide robust and accurate query results over a wide variety of database schema with little overhead for data modeling. For example, these systems and techniques may provide query modularity and query optimizations by using a chain of query transformers that may optimize the query early in the query transformation process (paragraph 0031). A query graph that is a tree with a root corresponding to a transformed query based on an input query. 
The query graph includes leaf vertices that correspond respectively to multiple leaf queries; a vertex corresponding to Q4, which is a join of results from the queries of its child vertices; and a root vertex corresponding to Q5, which is a join of all the results of the queries of the query graph. The query graph includes directed edges corresponding to many-to-one joins of query results (paragraphs 0098-0100). The technique includes selecting an initial connected subgraph of the first join graph that includes the two or more tables referenced in the first query. For example, selecting the initial connected subgraph may include selecting all the vertices of the join graph corresponding to tables referenced by the first query, if necessary, selecting additional tables with corresponding vertices in the join graph to form a connected graph. In some implementations, selecting the initial connected subgraph includes biasing table selection to select paths that include tables referenced in the first query. In some implementations, selecting the connected subgraph includes biasing table selection to select paths that include root tables of the first join graph (paragraphs 0156, 0164; see also figure 8). The relational search engine unit may instantiate or generate one or more search objects. The relational search engine unit may initiate a search query by sending a search object to a search constructor. The search constructor may be implemented as part of the analysis and visualization unit as part of the relational search engine unit, or as a separate unit of the database analysis server (paragraph 0188). 
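As an editorial illustration only, Kothari's connected-subgraph selection described above can be sketched roughly as follows. The join graph and table names are hypothetical, and this simplified stand-in walks every reachable many-to-one edge, whereas the actual technique biases selection toward paths through referenced tables and root tables:

```python
from collections import deque

# Hypothetical join graph: vertices are tables; a directed edge points from
# the many side to the one side of a many-to-one join relationship.
JOIN_GRAPH = {
    "sales": ["products", "stores"],
    "products": [],
    "stores": ["regions"],
    "regions": [],
}

def connected_subgraph(graph, referenced):
    """Grow the set of tables referenced by a query along join edges so the
    selected vertices form a connected subgraph (simplified breadth-first
    walk over the many-to-one edges)."""
    keep = set(referenced)
    frontier = deque(referenced)
    while frontier:
        table = frontier.popleft()
        for neighbor in graph[table]:
            if neighbor not in keep:
                keep.add(neighbor)
                frontier.append(neighbor)
    return keep
```

In this toy graph, a query referencing only `stores` pulls in `regions` to stay connected, while a query referencing `sales` reaches every table; leaf queries would then be generated per subject table of the resulting subgraph.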
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt and Kothari before him/her, to modify Hunt by selectively assigning the objects to a vertex of the query graph to optimize retrieving data, because that would provide robust and accurate query results over a wide variety of database schema with little overhead for data modeling, as taught by Kothari (paragraph 0031).

As to Claim 27, Hunt teaches the claimed limitations: “Wherein said one or more storage nodes include or correspond to one or more of: an analytics database, a data warehouse, a transactional database, an Online Transaction Processing (OLTP) system, a NoSQL database, and a Graph database” (paragraphs 0124, 0297, 0323, 0395-0396, 0410, 0439, 0446). Kothari teaches (paragraphs 0028, 0153, 0163).

As to Claim 29, Hunt does not explicitly teach the claimed limitation “Wherein a set of compute nodes which are participating in the query execution, of the one or more compute nodes, issue read operation requests against the one or more storage nodes in advance of when results of said read operation requests are required by said set of compute nodes”. Kothari teaches (paragraphs 0076, 0100, 0128, 0217). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt and Kothari before him/her, to modify Hunt such that a compute node is responsible for work associated with a vertex of the query graph, because that would provide robust and accurate query results over a wide variety of database schema with little overhead for data modeling, as taught by Kothari (paragraph 0031).
As to Claim 32, Hunt teaches the claimed limitations: “Wherein the one or more compute nodes are transient instances that can cease operation during the processing of the query, and wherein the composition of the one or more compute nodes changes during the processing of the query” (paragraphs 0153, 0178-0179).

As to Claim 33, Hunt does not explicitly teach the claimed limitation “Wherein execution of the one or more sequences of instructions further causes: the one or more compute nodes each periodically and asynchronously persistently storing, on one or more of said storage nodes, recovery state data that describes a present state of processing operations pertaining to said query tree; and in response to (a) any of said one or more compute nodes encountering a fault or becoming disabled or (b) adding a new compute node to said disaggregated cluster, all operational nodes of said one or more compute nodes continue processing the query by retrieving the recovery state data associated with the query tree stored by each of the one or more compute nodes without starting said processing over from the beginning” (but see paragraphs 0118-0119, 0146, 0292, 0561, 1220, 1291, 1350, 1774, 1783). Kothari teaches (paragraphs 0004-0006, 0205).

As to Claim 34, Hunt teaches the claimed limitations: “Wherein the recovery state data comprises a minimal state for the recovery of each native operator, including hash tables, sorted data, and aggregation tables” (paragraphs 0137-0138, 0269, 0278, 0407, 0416-0418, 0561, 1220, 1774).

As to Claim 35, Hunt teaches the claimed limitations: “Wherein the recovery state data comprises only data required to resume processing the query tree from a checkpoint” (paragraphs 0269, 0561, 1220, 1774). Hunt does not explicitly teach the claimed limitation “required to resume processing the query graph from a logical checkpoint”. Kothari teaches (paragraphs 0004-0006, 0032, 0058, 0073, 0086, 0100, 0155-0156).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt and Kothari before him/her, to modify Hunt to resume processing the query graph, because that would provide robust and accurate query results over a wide variety of database schema with little overhead for data modeling, as taught by Kothari (paragraph 0031).

Claims 36, 38, 40, and 43-46 are rejected under 35 U.S.C. 103(a); the limitations therein have substantially the same scope as claims 1, 27, 29, and 32-35. In addition, Hunt teaches methods and systems for providing various representations of data and metadata, methodologies for acting on data and metadata, an analytic engine, and a data management facility handling disaggregated data and performing aggregation, calculations, functions, and real-time or quasi-real-time projections (paragraphs 0117, 0119-0120). Therefore, these claims are rejected for at least the same reasons as claims 1, 27, 29, and 32-35.

Claim 47 is rejected under 35 U.S.C. 103(a); the limitations therein have substantially the same scope as claim 1. In addition, Hunt teaches methods and systems for providing various representations of data and metadata, methodologies for acting on data and metadata, an analytic engine, and a data management facility handling disaggregated data and performing aggregation, calculations, functions, and real-time or quasi-real-time projections (paragraphs 0117, 0119-0120). Therefore, this claim is rejected for at least the same reasons as claim 1.

7. Claims 26, 30-31, 37, and 41-42 are rejected under 35 U.S.C. 103(a) as being unpatentable over Hunt et al. (US Patent Publication No. 2009/0018996 A1) as applied to claims 1, 15, and 29 above, and further in view of Kothari et al. (US Patent Publication No. 2019/0303405 A1) and Haghighat et al. (US Patent Publication No. 2021/0263779 A1, hereinafter “Haghighat”).
As to Claim 26, Hunt does not explicitly teach the claimed limitation “wherein said one or more storage nodes include or correspond to one or more of: a cloud object store, a Hadoop Distributed File System (HDFS), and a Network File System (NFS)”. Haghighat teaches this limitation (paragraphs 0026, 0066). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt, Kothari, and Haghighat before him/her, to modify Hunt by selectively assigning the objects to a vertex of the query graph to optimize retrieving data, because that would provide robust and accurate query results over a wide variety of database schemas with little overhead for data modeling, as taught by Kothari (paragraph 0031). Alternatively, distributed Hadoop environments provide enhanced function as a service to users, as taught by Haghighat (abstract, paragraph 0849).

As to Claim 30, Hunt does not explicitly teach the claimed limitation “wherein execution of the one or more sequences of instructions further causes: maintaining a DRAM cache of prefetched data sets in available DRAM of at least one of the one or more compute nodes of said disaggregated cluster”. Haghighat teaches this limitation (paragraphs 0182, 0185, 0192, 0207, 0209, 0216, 0245, 0266, 0373, 0918, 0933). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt, Kothari, and Haghighat before him/her, to modify Hunt by selectively assigning the objects to a vertex of the query graph to optimize retrieving data, because that would provide robust and accurate query results over a wide variety of database schemas with little overhead for data modeling, as taught by Kothari (paragraph 0031). Alternatively, pre-fetching a data set into volatile memory provides enhanced function as a service to users, as taught by Haghighat (abstract, paragraph 0849).
As to Claim 31, Hunt does not explicitly teach the claimed limitation “wherein said DRAM cache is backed by asynchronously writing prefetched data sets into available local storage and resolving misses which occur in the DRAM cache by retrieving from local storage when present rather than retrieving from disaggregated storage nodes”. Haghighat teaches this limitation (paragraphs 0154, 0160, 0280-0281, 0506, 0918, 0933, 1075). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt, Kothari, and Haghighat before him/her, to modify Hunt by selectively assigning the objects to a vertex of the query graph to optimize retrieving data, because that would provide robust and accurate query results over a wide variety of database schemas with little overhead for data modeling, as taught by Kothari (paragraph 0031). Alternatively, pre-fetching a data set for at least a portion of said query provides enhanced function as a service to users, as taught by Haghighat (abstract, paragraph 0849).

Claims 37 and 41-42 are rejected under 35 U.S.C. 103(a); the limitations therein have substantially the same scope as claims 26 and 30-31. In addition, Hunt teaches methods and systems for providing various representations of data and metadata, methodologies for acting on data and metadata, an analytic engine, and a data management facility handling disaggregated data and performing aggregation, calculations, functions, and real-time or quasi-real-time projections (paragraphs 0117, 0119-0120). Therefore, these claims are rejected for at least the same reasons as claims 26 and 30-31.

8. Claims 28 and 39 are rejected under 35 U.S.C. 103(a) as being unpatentable over Hunt et al. (US Patent Publication No. 2009/0018996 A1) as applied to claims 1 and 36 above, and further in view of Kothari et al. (US Patent Publication No. 2019/0303405 A1) and Pandis et al. (US Patent No. 10,528,599 B1, hereinafter “Pandis”).
As to Claim 28, Hunt does not explicitly teach the claimed limitation “wherein the one or more storage nodes include at least one data lake which is accessed by at least one of said one or more execution engine instances, and wherein a data lake is a repository that stores structured data and unstructured data”. Pandis teaches this limitation (column 6, line 12 to column 7, line 29). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt, Kothari, and Pandis before him/her, to modify Hunt by selectively assigning the objects to a vertex of the query graph to optimize retrieving data, because that would provide robust and accurate query results over a wide variety of database schemas with little overhead for data modeling, as taught by Kothari (paragraph 0031). Alternatively, to modify Hunt such that the storage nodes include at least one data lake, because that would provide network data processing services that utilize a format-independent data processing service to perform tiered data processing for data stored in data storage services, as taught by Pandis.

Claim 39 is rejected under 35 U.S.C. 103(a); the limitations therein have substantially the same scope as claim 28. In addition, Hunt teaches methods and systems for providing various representations of data and metadata, methodologies for acting on data and metadata, an analytic engine, and a data management facility handling disaggregated data and performing aggregation, calculations, functions, and real-time or quasi-real-time projections (paragraphs 0117, 0119-0120). Therefore, this claim is rejected for at least the same reasons as claim 28.

Examiner’s Note

The Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims above for the convenience of the Applicant.
Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the Examiner. When amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution.

MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” Amendments not pointing to specific support in the disclosure may be deemed as not complying with the provisions of 37 C.F.R. 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to James Hwa, whose telephone number is 571-270-1285 and email address is james.hwa@uspto.gov. The examiner can normally be reached 9:00 am – 5:30 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia, can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

03/28/2026
/SHYUE JIUNN HWA/
Primary Examiner, Art Unit 2156

Prosecution Timeline

Apr 03, 2025
Application Filed
Mar 28, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602571
NETWORK PARTITIONING FOR SENSOR-BASED SYSTEMS
2y 5m to grant Granted Apr 14, 2026
Patent 12596683
LOG-STRUCTURED FILE SYSTEM FOR A ZONED BLOCK MEMORY DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12596700
CONCURRENT OPTIMISTIC TRANSACTIONS FOR TABLES WITH DELETION VECTORS
2y 5m to grant Granted Apr 07, 2026
Patent 12566750
SYSTEMS AND METHODS OF FACILITATING AN INFORMED CONSENSUS-DRIVEN DISCUSSION
2y 5m to grant Granted Mar 03, 2026
Patent 12561580
GENERATING ENRICHED SCENES USING SCENE GRAPHS
2y 5m to grant Granted Feb 24, 2026
Based on the examiner’s 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+39.0%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 852 resolved cases by this examiner. Grant probability derived from career allow rate.
