Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Response to Amendment
Applicant’s Amendment, filed January 2, 2026, has been fully considered and entered. Accordingly, Claims 1-30 are pending in this application. Claims 1, 11, and 21 are Independent Claims and have been amended.
Claim Interpretation
In accordance with paragraph [0355] of the Specification, the “computer-storage medium” of Claims 21-30 is not directed to transitory propagating signals.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 2, 11, 12, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Arye (PG Pub. No. 2020/0334232 A1) and further in view of Sankaranarayanan (WO2015038224A1).
Regarding Claim 1, Arye discloses a system comprising:
at least one hardware processor (see Arye, paragraph [0022], where embodiments of a computer system comprise one or more computer processors and a computer readable storage medium); and
at least one memory storing instructions (see Arye, paragraph [0022], where embodiments of a computer system comprise one or more computer processors and a computer readable storage medium) that cause the at least one hardware processor to perform operations comprising:
configuring a materialized table (MT) based on a query and a base table (see Arye, paragraph [0037], where Fig. 12A illustrates processing of a query by combining data of a materialized table with a base table);
generating a query plan for the query (see Arye, paragraph [0083], where the complete query plan is generated by the first node and sent to nodes that are determined to store chunks processed by the first query); and
determining multiple sets of data processing operations included in the query plan (see Arye, paragraph [0235], where the query processor may use various logic to determine what part of the data to get from the materialized tables and what data to get from the base table).
Arye does not disclose:
generating a plurality of intermediate MTs, the plurality of intermediate MTs holding an intermediate processing state for the multiple sets of data processing operations, each intermediate MT of the plurality of intermediate MTs being persistently stored and reusable across multiple refreshes of the MT; and
configuring a refresh of the MT based on the intermediate processing state for the multiple sets of data processing operations maintained in the plurality of intermediate MTs.
Arye in view of Sankaranarayanan discloses:
generating a plurality of intermediate MTs, the plurality of intermediate MTs holding an intermediate processing state for the multiple sets of data processing operations (see Sankaranarayanan, Claim 1, where the method comprises receiving byproducts of query processing in a multistore system, wherein the byproducts include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of a future query workload), each intermediate MT of the plurality of intermediate MTs being persistently stored and reusable across multiple refreshes of the MT (see Sankaranarayanan, Claim 1, where the method comprises receiving byproducts of query processing in a multistore system, wherein the byproducts include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of a future query workload); and
configuring a refresh of the MT (see Arye, paragraph [0208], where the materialization engine reads information produced by the invalidation engine in order to know which regions of data to re-materialize in its current run; for embodiments that store a single invalidation threshold, the materialization engine recomputes its materialization on data between the invalidation threshold and the current time (or lag interval if present)) based on the intermediate processing state for the multiple sets of data processing operations maintained in the plurality of intermediate MTs (see Sankaranarayanan, Claim 1, where the method comprises receiving byproducts of query processing in a multistore system, wherein the byproducts include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of a future query workload).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Sankaranarayanan for the benefit of tuning multi-store systems to speed up a big data query workload (see Sankaranarayanan, Abstract).
Regarding Claim 2, Arye in view of Sankaranarayanan discloses the system of Claim 1, wherein the operations further comprise configuring the MT further based on a lag duration value, the lag duration value indicating a maximum time period that a result of a prior refresh of the query lags behind a current time instance before a subsequent refresh is initiated (see Arye, paragraph [0205], where the user is able to configure how far behind the materialization system should operate (e.g., no more than at least one hour behind the latest record or current time), which we sometimes refer to as the lag interval).
Regarding Claim 11, Arye discloses a method comprising:
configuring, by at least one hardware processor, a materialized table (MT) based on a query and a base table (see Arye, paragraph [0037], where Fig. 12A illustrates processing of a query by combining data of a materialized table with a base table);
generating a query plan for the query (see Arye, paragraph [0083], where the complete query plan is generated by the first node and sent to nodes that are determined to store chunks processed by the first query); and
determining multiple sets of data processing operations included in the query plan (see Arye, paragraph [0235], where the query processor may use various logic to determine what part of the data to get from the materialized tables and what data to get from the base table).
Arye does not disclose:
generating a plurality of intermediate MTs, the plurality of intermediate MTs holding an intermediate processing state for the multiple sets of data processing operations, each intermediate MT of the plurality of intermediate MTs being persistently stored and reusable across multiple refreshes of the MT; and
configuring a refresh of the MT based on the intermediate processing state for the multiple sets of data processing operations maintained in the plurality of intermediate MTs.
Arye in view of Sankaranarayanan discloses:
generating a plurality of intermediate MTs, the plurality of intermediate MTs holding an intermediate processing state for the multiple sets of data processing operations (see Sankaranarayanan, Claim 1, where the method comprises receiving byproducts of query processing in a multistore system, wherein the byproducts include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of a future query workload), each intermediate MT of the plurality of intermediate MTs being persistently stored and reusable across multiple refreshes of the MT (see Sankaranarayanan, Claim 1, where the method comprises receiving byproducts of query processing in a multistore system, wherein the byproducts include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of a future query workload); and
configuring a refresh of the MT (see Arye, paragraph [0208], where the materialization engine reads information produced by the invalidation engine in order to know which regions of data to re-materialize in its current run; for embodiments that store a single invalidation threshold, the materialization engine recomputes its materialization on data between the invalidation threshold and the current time (or lag interval if present)) based on the intermediate processing state for the multiple sets of data processing operations maintained in the plurality of intermediate MTs (see Sankaranarayanan, Claim 1, where the method comprises receiving byproducts of query processing in a multistore system, wherein the byproducts include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of a future query workload).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Sankaranarayanan for the benefit of tuning multi-store systems to speed up a big data query workload (see Sankaranarayanan, Abstract).
Regarding Claim 12, Arye in view of Sankaranarayanan discloses the method of Claim 11, further comprising configuring the MT further based on a lag duration value, the lag duration value indicating a maximum time period that a result of a prior refresh of the query lags behind a current time instance before a subsequent refresh is initiated (see Arye, paragraph [0205], where the user is able to configure how far behind the materialization system should operate (e.g., no more than at least one hour behind the latest record or current time), which we sometimes refer to as the lag interval).
Regarding Claim 21, Arye discloses a computer-storage medium comprising instructions that, when executed by one or more processors of a machine, configure the machine to perform operations comprising:
configuring, by at least one hardware processor, a materialized table (MT) based on a query and a base table (see Arye, paragraph [0037], where Fig. 12A illustrates processing of a query by combining data of a materialized table with a base table);
generating a query plan for the query (see Arye, paragraph [0083], where the complete query plan is generated by the first node and sent to nodes that are determined to store chunks processed by the first query); and
determining multiple sets of data processing operations included in the query plan (see Arye, paragraph [0235], where the query processor may use various logic to determine what part of the data to get from the materialized tables and what data to get from the base table).
Arye does not disclose:
generating a plurality of intermediate MTs, the plurality of intermediate MTs holding an intermediate processing state for the multiple sets of data processing operations, each intermediate MT of the plurality of intermediate MTs being persistently stored and reusable across multiple refreshes of the MT; and
configuring a refresh of the MT based on the intermediate processing state for the multiple sets of data processing operations maintained in the plurality of intermediate MTs.
Arye in view of Sankaranarayanan discloses:
generating a plurality of intermediate MTs, the plurality of intermediate MTs holding an intermediate processing state for the multiple sets of data processing operations (see Sankaranarayanan, Claim 1, where the method comprises receiving byproducts of query processing in a multistore system, wherein the byproducts include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of a future query workload), each intermediate MT of the plurality of intermediate MTs being persistently stored and reusable across multiple refreshes of the MT (see Sankaranarayanan, Claim 1, where the method comprises receiving byproducts of query processing in a multistore system, wherein the byproducts include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of a future query workload); and
configuring a refresh of the MT (see Arye, paragraph [0208], where the materialization engine reads information produced by the invalidation engine in order to know which regions of data to re-materialize in its current run; for embodiments that store a single invalidation threshold, the materialization engine recomputes its materialization on data between the invalidation threshold and the current time (or lag interval if present)) based on the intermediate processing state for the multiple sets of data processing operations maintained in the plurality of intermediate MTs (see Sankaranarayanan, Claim 1, where the method comprises receiving byproducts of query processing in a multistore system, wherein the byproducts include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of a future query workload).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Sankaranarayanan for the benefit of tuning multi-store systems to speed up a big data query workload (see Sankaranarayanan, Abstract).
Regarding Claim 22, Arye in view of Sankaranarayanan discloses the computer-storage medium of Claim 21, wherein the operations further comprise configuring the MT further based on a lag duration value, the lag duration value indicating a maximum time period that a result of a prior refresh of the query lags behind a current time instance before a subsequent refresh is initiated (see Arye, paragraph [0205], where the user is able to configure how far behind the materialization system should operate (e.g., no more than at least one hour behind the latest record or current time), which we sometimes refer to as the lag interval).
Claims 3-8, 13-18, and 23-28 are rejected under 35 U.S.C. 103 as being unpatentable over Arye and Sankaranarayanan as applied to Claims 1, 2, 11, 12, 21, and 22 above, and further in view of Ma (PG Pub. No. 2023/01411990 A1).
Regarding Claim 3, Arye in view of Sankaranarayanan discloses the system of Claim 1, wherein the operations further comprise:
Arye does not disclose separating the multiple sets of data processing operations into a plurality of fragments, each fragment of the plurality of fragments including at least one data processing operation of the multiple sets of data processing operations. Ma discloses separating the multiple sets of data processing operations into a plurality of fragments, each fragment of the plurality of fragments including at least one data processing operation of the multiple sets of data processing operations (see Ma, paragraph [0057], where Fig. 3A illustrates an example query execution plan 300A; the query execution plan 300A is represented as a directed tree of nodes and links; each node represents respective query operations, which can correspond to a respective operator of a received query; for example, the query execution plan 300A includes a merge join node 302A, a nested loop join node 304A, a hash join node 306A, and scan nodes 308A, 312A, 314A, and 316A).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Ma for the benefit of materializing data as needed rather than waiting to materialize the data before the query begins executing (see Ma, Abstract).
Regarding Claim 4, Arye in view of Sankaranarayanan and Ma discloses the system of Claim 3, wherein the operations further comprise configuring each intermediate MT of the plurality of intermediate MTs with a hash of a corresponding fragment of the plurality of fragments (see Arye, paragraph [0235], where the system can keep statistics or sketches reflecting how different the materialization data is from the base table’s raw data or it may keep information about the staleness of the materialization [it is the position of the Examiner that a sketch is not patentably distinguishable from a hash]).
Regarding Claim 5, Arye in view of Sankaranarayanan and Ma discloses the system of Claim 4, wherein the operations further comprise performing a verification of each intermediate MT of the plurality of intermediate MTs based on the hash (see Arye, paragraph [0235], where the system can keep statistics or sketches reflecting how different the materialization data is from the base table’s raw data or it may keep information about the staleness of the materialization [it is the position of the Examiner that a sketch is not patentably distinguishable from a hash]).
Regarding Claim 6, Arye in view of Sankaranarayanan and Ma discloses the system of Claim 5, wherein the operations configuring the refresh further comprise performing a refresh of each intermediate MT of the plurality of intermediate MTs based on the verification (see Arye, paragraph [0235], where the system can keep statistics or sketches reflecting how different the materialization data is from the base table’s raw data or it may keep information about the staleness of the materialization [it is the position of the Examiner that a sketch is not patentably distinguishable from a hash]).
Regarding Claim 7, Arye in view of Sankaranarayanan discloses the system of Claim 1, wherein the operations further comprise:
Arye does not disclose generating a data definition language (DDL) log of dependencies among the plurality of MTs and generating a dependency graph of the plurality of intermediate MTs based on the DDL log of dependencies, the dependency graph comprising a plurality of nodes corresponding to the plurality of intermediate MTs. Ma discloses generating a data definition language (DDL) log of dependencies among the plurality of MTs and generating a dependency graph of the plurality of intermediate MTs based on the DDL log of dependencies, the dependency graph comprising a plurality of nodes corresponding to the plurality of intermediate MTs (see Ma, paragraph [0014], where the one or more processors are configured to generate a data structure representing attribute dependencies for each attribute referenced in the query, an attribute dependency for a first attribute comprising data representing zero or more child attributes that are respective result attributes of one or more query operations executed at respective earlier execution steps than an execution step for a query operation in which the first attribute is needed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Ma for the benefit of materializing data as needed rather than waiting to materialize the data before the query begins executing (see Ma, Abstract).
Regarding Claim 8, Arye in view of Sankaranarayanan and Ma discloses the system of Claim 7, wherein the operations further comprise:
Arye does not disclose applying a graph rendering process to the DDL log of dependencies to generate the dependency graph of the plurality of intermediate MTs, the dependency graph including a plurality of nodes coupled with edges, at least a first node of the plurality of nodes associated with a first intermediate MT of the plurality of intermediate MTs, and at least a second node of the plurality of nodes associated with a second intermediate MT of the plurality of intermediate MTs from which the first intermediate MT depends. Ma discloses applying a graph rendering process to the DDL log of dependencies to generate the dependency graph of the plurality of intermediate MTs, the dependency graph including a plurality of nodes coupled with edges, at least a first node of the plurality of nodes associated with a first intermediate MT of the plurality of intermediate MTs, and at least a second node of the plurality of nodes associated with a second intermediate MT of the plurality of intermediate MTs from which the first intermediate MT depends (see Ma, paragraph [0102], where for each node in the query execution plan, the DBMS generates a respective data structure representing attribute dependencies for each attribute referenced in the query, according to block 420B; see also paragraph [0083], where Fig. 3C illustrates an example attribute mapping 300C for the example dependency mapping 300B; attribute maps 302C, 304C, 306C, 308C, 312C, 314C, and 316C include respective attributes 302D-G, 304D-F, 306D-G, 308D-F, 312D-E, 314D-E, and 316D-F, and are shown in Fig. 3C as dashed boxes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Ma for the benefit of materializing data as needed rather than waiting to materialize the data before the query begins executing (see Ma, Abstract).
Regarding Claim 13, Arye in view of Sankaranarayanan discloses the method of Claim 11, further comprising:
Arye does not disclose separating the multiple sets of data processing operations into a plurality of fragments, each fragment of the plurality of fragments including at least one data processing operation of the multiple sets of data processing operations. Ma discloses separating the multiple sets of data processing operations into a plurality of fragments, each fragment of the plurality of fragments including at least one data processing operation of the multiple sets of data processing operations (see Ma, paragraph [0057], where Fig. 3A illustrates an example query execution plan 300A; the query execution plan 300A is represented as a directed tree of nodes and links; each node represents respective query operations, which can correspond to a respective operator of a received query; for example, the query execution plan 300A includes a merge join node 302A, a nested loop join node 304A, a hash join node 306A, and scan nodes 308A, 312A, 314A, and 316A).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Ma for the benefit of materializing data as needed rather than waiting to materialize the data before the query begins executing (see Ma, Abstract).
Regarding Claim 14, Arye in view of Sankaranarayanan and Ma discloses the method of Claim 13, further comprising configuring each intermediate MT of the plurality of intermediate MTs with a hash of a corresponding fragment of the plurality of fragments (see Arye, paragraph [0235], where the system can keep statistics or sketches reflecting how different the materialization data is from the base table’s raw data or it may keep information about the staleness of the materialization [it is the position of the Examiner that a sketch is not patentably distinguishable from a hash]).
Regarding Claim 15, Arye in view of Sankaranarayanan and Ma discloses the method of Claim 14, further comprising performing a verification of each intermediate MT of the plurality of intermediate MTs based on the hash (see Arye, paragraph [0235], where the system can keep statistics or sketches reflecting how different the materialization data is from the base table’s raw data or it may keep information about the staleness of the materialization [it is the position of the Examiner that a sketch is not patentably distinguishable from a hash]).
Regarding Claim 16, Arye in view of Sankaranarayanan and Ma discloses the method of Claim 15, wherein the operations configuring the refresh further comprise performing a refresh of each intermediate MT of the plurality of intermediate MTs based on the verification (see Arye, paragraph [0235], where the system can keep statistics or sketches reflecting how different the materialization data is from the base table’s raw data or it may keep information about the staleness of the materialization [it is the position of the Examiner that a sketch is not patentably distinguishable from a hash]).
Regarding Claim 17, Arye in view of Sankaranarayanan discloses the method of Claim 11, further comprising:
Arye does not disclose generating a data definition language (DDL) log of dependencies among the plurality of MTs and generating a dependency graph of the plurality of intermediate MTs based on the DDL log of dependencies, the dependency graph comprising a plurality of nodes corresponding to the plurality of intermediate MTs. Ma discloses generating a data definition language (DDL) log of dependencies among the plurality of MTs and generating a dependency graph of the plurality of intermediate MTs based on the DDL log of dependencies, the dependency graph comprising a plurality of nodes corresponding to the plurality of intermediate MTs (see Ma, paragraph [0014], where the one or more processors are configured to generate a data structure representing attribute dependencies for each attribute referenced in the query, an attribute dependency for a first attribute comprising data representing zero or more child attributes that are respective result attributes of one or more query operations executed at respective earlier execution steps than an execution step for a query operation in which the first attribute is needed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Ma for the benefit of materializing data as needed rather than waiting to materialize the data before the query begins executing (see Ma, Abstract).
Regarding Claim 18, Arye in view of Sankaranarayanan and Ma discloses the method of Claim 17, further comprising:
Arye does not disclose applying a graph rendering process to the DDL log of dependencies to generate the dependency graph of the plurality of intermediate MTs, the dependency graph including a plurality of nodes coupled with edges, at least a first node of the plurality of nodes associated with a first intermediate MT of the plurality of intermediate MTs, and at least a second node of the plurality of nodes associated with a second intermediate MT of the plurality of intermediate MTs from which the first intermediate MT depends. Ma discloses applying a graph rendering process to the DDL log of dependencies to generate the dependency graph of the plurality of intermediate MTs, the dependency graph including a plurality of nodes coupled with edges, at least a first node of the plurality of nodes associated with a first intermediate MT of the plurality of intermediate MTs, and at least a second node of the plurality of nodes associated with a second intermediate MT of the plurality of intermediate MTs from which the first intermediate MT depends (see Ma, paragraph [0102], where for each node in the query execution plan, the DBMS generates a respective data structure representing attribute dependencies for each attribute referenced in the query, according to block 420B; see also paragraph [0083], where Fig. 3C illustrates an example attribute mapping 300C for the example dependency mapping 300B; attribute maps 302C, 304C, 306C, 308C, 312C, 314C, and 316C include respective attributes 302D-G, 304D-F, 306D-G, 308D-F, 312D-E, 314D-E, and 316D-F, and are shown in Fig. 3C as dashed boxes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Ma for the benefit of materializing data as needed rather than waiting to materialize the data before the query begins executing (see Ma, Abstract).
Regarding Claim 23, Arye in view of Sankaranarayanan discloses the computer-storage medium of Claim 21, wherein the operations further comprise:
Arye does not disclose separating the multiple sets of data processing operations into a plurality of fragments, each fragment of the plurality of fragments including at least one data processing operation of the multiple sets of data processing operations. Ma discloses separating the multiple sets of data processing operations into a plurality of fragments, each fragment of the plurality of fragments including at least one data processing operation of the multiple sets of data processing operations (see Ma, paragraph [0057], where Fig. 3A illustrates an example query execution plan 300A; the query execution plan 300A is represented as a directed tree of nodes and links; each node represents respective query operations, which can correspond to a respective operator of a received query; for example, the query execution plan 300A includes a merge join node 302A, a nested loop join node 304A, a hash join node 306A, and scan nodes 308A, 312A, 314A, and 316A).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Ma for the benefit of materializing data as needed rather than waiting to materialize the data before the query begins executing (see Ma, Abstract).
Regarding Claim 24, Arye in view of Sankaranarayanan and Ma discloses the computer-storage medium of Claim 23, wherein the operations further comprise configuring each intermediate MT of the plurality of intermediate MTs with a hash of a corresponding fragment of the plurality of fragments (see Arye, paragraph [0235], where the system can keep statistics or sketches reflecting how different the materialization data is from the base table’s raw data or it may keep information about the staleness of the materialization [it is the position of the Examiner that a sketch is not patentably distinguishable from a hash]).
Regarding Claim 25, Arye in view of Sankaranarayanan and Ma discloses the computer-storage medium of Claim 24, wherein the operations further comprise performing a verification of each intermediate MT of the plurality of intermediate MTs based on the hash (see Arye, paragraph [0235], where the system can keep statistics or sketches reflecting how different the materialization data is from the base table’s raw data or it may keep information about the staleness of the materialization [it is the position of the Examiner that a sketch is not patentably distinguishable from a hash]).
Regarding Claim 26, Arye in view of Sankaranarayanan and Ma discloses the computer-storage medium of Claim 25, wherein the operations configuring the refresh further comprise performing a refresh of each intermediate MT of the plurality of intermediate MTs based on the verification (see Arye, paragraph [0235], where the system can keep statistics or sketches reflecting how different the materialization data is from the base table’s raw data or it may keep information about the staleness of the materialization [it is the position of the Examiner that a sketch is not patentably distinguishable from a hash]).
Regarding Claim 27, Arye in view of Sankaranarayanan discloses the computer-storage medium of Claim 21, wherein the operations further comprise:
Arye does not disclose generating a data definition language (DDL) log of dependencies among the plurality of MTs and generating a dependency graph of the plurality of intermediate MTs based on the DDL log of dependencies, the dependency graph comprising a plurality of nodes corresponding to the plurality of intermediate MTs. Ma discloses generating a data definition language (DDL) log of dependencies among the plurality of MTs and generating a dependency graph of the plurality of intermediate MTs based on the DDL log of dependencies, the dependency graph comprising a plurality of nodes corresponding to the plurality of intermediate MTs (see Ma, paragraph [0014], where the one or more processors are configured to generate a data structure representing attribute dependencies for each attribute referenced in the query, an attribute dependency for a first attribute comprising data representing zero or more child attributes that are respective result attributes of one or more query operations executed at respective earlier execution steps than an execution step for a query operation in which the first attribute is needed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Ma for the benefit of materializing data as needed rather than waiting to materialize the data before the query begins executing (see Ma, Abstract).
Regarding Claim 28, Arye in view of Sankaranarayanan and Ma discloses the computer-storage medium of Claim 27, wherein the operations further comprise:
Arye does not disclose applying a graph rendering process to the DDL log of dependencies to generate the dependency graph of the plurality of intermediate MTs, the dependency graph including a plurality of nodes coupled with edges, at least a first node of the plurality of nodes associated with a first intermediate MT of the plurality of intermediate MTs, and at least a second node of the plurality of nodes associated with a second intermediate MT of the plurality of intermediate MTs from which the first intermediate MT depends. Ma discloses applying a graph rendering process to the DDL log of dependencies to generate the dependency graph of the plurality of intermediate MTs, the dependency graph including a plurality of nodes coupled with edges, at least a first node of the plurality of nodes associated with a first intermediate MT of the plurality of intermediate MTs, and at least a second node of the plurality of nodes associated with a second intermediate MT of the plurality of intermediate MTs from which the first intermediate MT depends (see Ma, paragraph [0102], where for each node in the query execution plan, the DBMS generates a respective data structure representing attribute dependencies for each attribute referenced in the query, according to block 420B; see also paragraph [0083], where Fig. 3C illustrates an example attribute mapping 300C for the example dependency mapping 300B; attribute maps 302C, 304C, 306C, 308C, 312C, 314C, and 316C include respective attributes 302D-G, 304D-F, 306D-G, 308D-F, 312D-E, 314D-E, and 316D-F, and are shown in Fig. 3C as dashed boxes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Ma for the benefit of materializing data as needed rather than waiting to materialize the data before the query begins executing (see Ma, Abstract).
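For context, the dependency-graph generation recited in Claims 27-28 can be sketched as follows. The names and the log format are hypothetical and illustrative only; they are not drawn from Ma or any other cited reference:

```python
from collections import defaultdict

def build_dependency_graph(ddl_log):
    """Build a dependency graph of intermediate MTs from a DDL log.

    ddl_log: iterable of (mt_name, depends_on) pairs, e.g. one pair per
    dependency recorded when a CREATE statement for an MT is logged.
    Returns {node: set of parent nodes it depends on}; every MT named
    in the log appears as a node of the graph.
    """
    graph = defaultdict(set)
    for mt, parent in ddl_log:
        graph[mt].add(parent)          # edge from mt to the MT it depends on
        graph.setdefault(parent, set())  # ensure the parent is also a node
    return dict(graph)

# Example log: mt_b depends on mt_a; mt_c depends on both mt_a and mt_b.
log = [("mt_b", "mt_a"), ("mt_c", "mt_a"), ("mt_c", "mt_b")]
```

Here each key is a node corresponding to an intermediate MT, and each edge couples a first node to a second node on which it depends, as recited in Claim 28.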
Claims 9, 19, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Arye, Sankaranarayanan, and Ma as applied to Claims 3-8, 13-18, and 23-28 above, and further in view of Crupi (PG Pub. No. 2018/0307728 A1).
Regarding Claim 9, Arye in view of Sankaranarayanan and Ma discloses the system of Claim 8, wherein the operations further comprise:
Arye does not disclose separating the plurality of nodes into subsets of nodes based on data processing account association, wherein nodes in a subset of the subsets are associated with a common time instance of a set of aligned time instances. Crupi discloses separating the plurality of nodes into subsets of nodes based on data processing account association, wherein nodes in a subset of the subsets are associated with a common time instance of a set of aligned time instances (see Crupi, paragraph [0041], where within ‘phase 4’ of the technical processing introduced above, the central RDBMS optimizer component may also determine/group candidates for remote derived source creation based upon latency tolerance and may supply/define refresh requirements/periodicity for each remote derived source; see also paragraph [0108], where at block 512, the process 500 groups the aggregated candidates by latency requirement group based upon latency tolerance).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Crupi for the benefit of distributing materialized table workload to back-end systems (see Crupi, paragraph [0040]).
Regarding Claim 19, Arye in view of Sankaranarayanan and Ma discloses the method of Claim 18, further comprising:
Arye does not disclose separating the plurality of nodes into subsets of nodes based on data processing account association, wherein nodes in a subset of the subsets are associated with a common time instance of a set of aligned time instances. Crupi discloses separating the plurality of nodes into subsets of nodes based on data processing account association, wherein nodes in a subset of the subsets are associated with a common time instance of a set of aligned time instances (see Crupi, paragraph [0041], where within ‘phase 4’ of the technical processing introduced above, the central RDBMS optimizer component may also determine/group candidates for remote derived source creation based upon latency tolerance and may supply/define refresh requirements/periodicity for each remote derived source; see also paragraph [0108], where at block 512, the process 500 groups the aggregated candidates by latency requirement group based upon latency tolerance).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Crupi for the benefit of distributing materialized table workload to back-end systems (see Crupi, paragraph [0040]).
Regarding Claim 29, Arye in view of Sankaranarayanan and Ma discloses the computer-storage medium of Claim 28, wherein the operations further comprise:
Arye does not disclose separating the plurality of nodes into subsets of nodes based on data processing account association, wherein nodes in a subset of the subsets are associated with a common time instance of a set of aligned time instances. Crupi discloses separating the plurality of nodes into subsets of nodes based on data processing account association, wherein nodes in a subset of the subsets are associated with a common time instance of a set of aligned time instances (see Crupi, paragraph [0041], where within ‘phase 4’ of the technical processing introduced above, the central RDBMS optimizer component may also determine/group candidates for remote derived source creation based upon latency tolerance and may supply/define refresh requirements/periodicity for each remote derived source; see also paragraph [0108], where at block 512, the process 500 groups the aggregated candidates by latency requirement group based upon latency tolerance).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Crupi for the benefit of distributing materialized table workload to back-end systems (see Crupi, paragraph [0040]).
Claims 10, 20, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Arye, Sankaranarayanan, Ma, and Crupi as applied to Claims 9, 19, and 29 above, and further in view of Shih (US Patent No. 8,812,752 B1).
Regarding Claim 10, Arye in view of Sankaranarayanan, Ma, and Crupi discloses the system of Claim 9, wherein the operations further comprise:
Arye does not disclose:
configuring processing pipelines based on the set of aligned time instances, each processing pipeline of the processing pipelines corresponding to the nodes associated with the common time instance;
selecting a processing pipeline of the processing pipelines based on the corresponding time instances from the set of aligned time instances; and
scheduling refresh operations for one or both of the first intermediate MT and the second intermediate MT using the processing pipeline.
Shih discloses:
configuring processing pipelines based on the set of aligned time instances, each processing pipeline of the processing pipelines corresponding to the nodes associated with the common time instance (see Shih, Claim 1, where the method includes configuring a pipeline comprising a first data source node, a second data source node, and an activity node, wherein the first data source node represents first data from the first data source, wherein the second data source node represents second data from the second data source, and wherein the activity node represents a workflow activity that uses the first data and the second data as input; see also column 3, lines 14-18, where in some embodiments, a scheduler associated with the data pipeline may allow users to schedule large numbers of periodic tasks; the tasks may have complex inter-task dependencies; the scheduler may be multi-threaded);
selecting a processing pipeline of the processing pipelines based on the corresponding time instances from the set of aligned time instances (see Shih, Claim 1, where the method includes configuring a pipeline comprising a first data source node, a second data source node, and an activity node, wherein the first data source node represents first data from the first data source, wherein the second data source node represents second data from the second data source, and wherein the activity node represents a workflow activity that uses the first data and the second data as input; see also column 3, lines 14-18, where in some embodiments, a scheduler associated with the data pipeline may allow users to schedule large numbers of periodic tasks; the tasks may have complex inter-task dependencies; the scheduler may be multi-threaded); and
scheduling refresh operations for one or both of the first intermediate MT and the second intermediate MT using the processing pipeline (see Shih, Claim 1, where the method includes configuring a pipeline comprising a first data source node, a second data source node, and an activity node, wherein the first data source node represents first data from the first data source, wherein the second data source node represents second data from the second data source, and wherein the activity node represents a workflow activity that uses the first data and the second data as input; see also column 3, lines 14-18, where in some embodiments, a scheduler associated with the data pipeline may allow users to schedule large numbers of periodic tasks; the tasks may have complex inter-task dependencies; the scheduler may be multi-threaded).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Shih for the benefit of flexibility on determining which resources are deployed and when (see Shih, column 2, lines 17-20).
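For context, the pipeline configuration, selection, and refresh scheduling recited in Claim 10 can be sketched as follows. The names and data shapes are hypothetical and illustrative only; they are not drawn from Shih or any other cited reference:

```python
from collections import defaultdict

def configure_pipelines(nodes):
    """Group dependency-graph nodes into processing pipelines, one per
    aligned time instance: nodes sharing a common time instance are
    refreshed together by the same pipeline.

    nodes: iterable of (mt_name, time_instance) pairs.
    Returns {time_instance: [mt_name, ...]}.
    """
    pipelines = defaultdict(list)
    for mt, instant in nodes:
        pipelines[instant].append(mt)
    return dict(pipelines)

def schedule_refresh(pipelines, instant):
    """Select the pipeline for a given time instance and return the
    refresh operations to run on its intermediate MTs."""
    return [("REFRESH", mt) for mt in pipelines.get(instant, [])]
```

In this sketch, `configure_pipelines` corresponds to configuring the pipelines from the set of aligned time instances, `schedule_refresh` to selecting a pipeline by its time instance, and the returned operations to the refresh operations scheduled for the intermediate MTs in that pipeline.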
Regarding Claim 20, Arye in view of Sankaranarayanan, Ma, and Crupi discloses the method of Claim 19, further comprising:
Arye does not disclose:
configuring processing pipelines based on the set of aligned time instances, each processing pipeline of the processing pipelines corresponding to the nodes associated with the common time instance;
selecting a processing pipeline of the processing pipelines based on the corresponding time instances from the set of aligned time instances; and
scheduling refresh operations for one or both of the first intermediate MT and the second intermediate MT using the processing pipeline.
Shih discloses:
configuring processing pipelines based on the set of aligned time instances, each processing pipeline of the processing pipelines corresponding to the nodes associated with the common time instance (see Shih, Claim 1, where the method includes configuring a pipeline comprising a first data source node, a second data source node, and an activity node, wherein the first data source node represents first data from the first data source, wherein the second data source node represents second data from the second data source, and wherein the activity node represents a workflow activity that uses the first data and the second data as input; see also column 3, lines 14-18, where in some embodiments, a scheduler associated with the data pipeline may allow users to schedule large numbers of periodic tasks; the tasks may have complex inter-task dependencies; the scheduler may be multi-threaded);
selecting a processing pipeline of the processing pipelines based on the corresponding time instances from the set of aligned time instances (see Shih, Claim 1, where the method includes configuring a pipeline comprising a first data source node, a second data source node, and an activity node, wherein the first data source node represents first data from the first data source, wherein the second data source node represents second data from the second data source, and wherein the activity node represents a workflow activity that uses the first data and the second data as input; see also column 3, lines 14-18, where in some embodiments, a scheduler associated with the data pipeline may allow users to schedule large numbers of periodic tasks; the tasks may have complex inter-task dependencies; the scheduler may be multi-threaded); and
scheduling refresh operations for one or both of the first intermediate MT and the second intermediate MT using the processing pipeline (see Shih, Claim 1, where the method includes configuring a pipeline comprising a first data source node, a second data source node, and an activity node, wherein the first data source node represents first data from the first data source, wherein the second data source node represents second data from the second data source, and wherein the activity node represents a workflow activity that uses the first data and the second data as input; see also column 3, lines 14-18, where in some embodiments, a scheduler associated with the data pipeline may allow users to schedule large numbers of periodic tasks; the tasks may have complex inter-task dependencies; the scheduler may be multi-threaded).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Shih for the benefit of flexibility on determining which resources are deployed and when (see Shih, column 2, lines 17-20).
Regarding Claim 30, Arye in view of Sankaranarayanan, Ma, and Crupi discloses the computer-storage medium of Claim 29, wherein the operations further comprise:
Arye does not disclose:
configuring processing pipelines based on the set of aligned time instances, each processing pipeline of the processing pipelines corresponding to the nodes associated with the common time instance;
selecting a processing pipeline of the processing pipelines based on the corresponding time instances from the set of aligned time instances; and
scheduling refresh operations for one or both of the first intermediate MT and the second intermediate MT using the processing pipeline.
Shih discloses:
configuring processing pipelines based on the set of aligned time instances, each processing pipeline of the processing pipelines corresponding to the nodes associated with the common time instance (see Shih, Claim 1, where the method includes configuring a pipeline comprising a first data source node, a second data source node, and an activity node, wherein the first data source node represents first data from the first data source, wherein the second data source node represents second data from the second data source, and wherein the activity node represents a workflow activity that uses the first data and the second data as input; see also column 3, lines 14-18, where in some embodiments, a scheduler associated with the data pipeline may allow users to schedule large numbers of periodic tasks; the tasks may have complex inter-task dependencies; the scheduler may be multi-threaded);
selecting a processing pipeline of the processing pipelines based on the corresponding time instances from the set of aligned time instances (see Shih, Claim 1, where the method includes configuring a pipeline comprising a first data source node, a second data source node, and an activity node, wherein the first data source node represents first data from the first data source, wherein the second data source node represents second data from the second data source, and wherein the activity node represents a workflow activity that uses the first data and the second data as input; see also column 3, lines 14-18, where in some embodiments, a scheduler associated with the data pipeline may allow users to schedule large numbers of periodic tasks; the tasks may have complex inter-task dependencies; the scheduler may be multi-threaded); and
scheduling refresh operations for one or both of the first intermediate MT and the second intermediate MT using the processing pipeline (see Shih, Claim 1, where the method includes configuring a pipeline comprising a first data source node, a second data source node, and an activity node, wherein the first data source node represents first data from the first data source, wherein the second data source node represents second data from the second data source, and wherein the activity node represents a workflow activity that uses the first data and the second data as input; see also column 3, lines 14-18, where in some embodiments, a scheduler associated with the data pipeline may allow users to schedule large numbers of periodic tasks; the tasks may have complex inter-task dependencies; the scheduler may be multi-threaded).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Arye with Shih for the benefit of flexibility on determining which resources are deployed and when (see Shih, column 2, lines 17-20).
Response to Arguments
Applicant’s Arguments, filed January 2, 2026, have been fully considered, but they are moot in light of the new grounds of rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARHAD AGHARAHIMI whose telephone number is (571)272-9864. The examiner can normally be reached M-F 9am - 5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Apu Mofiz can be reached at 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FARHAD AGHARAHIMI/Examiner, Art Unit 2161
/APU M MOFIZ/Supervisory Patent Examiner, Art Unit 2161