DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 3-7, 10-14, and 17-20 are objected to because of the following informalities:
“wherein modifying” or “wherein to modify” should be “wherein the modifying” or “wherein to said modify” [claims 3-7, 10-14, and 17-20, all at line 1].
Appropriate correction is required. Further, in the interest of compact prosecution, each of these limitations has been interpreted consistent with the recommendation provided above.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 3-4, 6, 10-11, 13, 17-18, 20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
The terms “high memory usage”, “high degree of concurrency”, “a low memory usage”, and “a low degree of concurrency” in claims 3-4, 6, 10-11, 13, 17-18, and 20 are relative terms which render the claims indefinite. These terms are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. That is, it is not clear what, in particular, would constitute a “high degree” or a “low degree”, even in light of the specification.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-2, 5, 8-9, 12, 15-16, and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5, 10-12, 14, 19-21, and 23 of U.S. Patent No. 12,267,390. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-3, 5, 10-12, 14, 19-21, and 23 of U.S. Patent No. 12,267,390 anticipate the claims of the present application, as shown in the claim chart below.
Claim
US 19/094,721
Claim
US 12,267,390
1, 8, 15
allocating a plurality of processing units to a data warehouse, the plurality of processing units located across different availability zones, wherein each availability zone comprises one or more data centers;
routing a query to a first processing unit of the data warehouse, the query having a common session identifier with a query previously provided to the first processing unit, wherein the first processing unit caches a data segment used by the query;
and modifying, by a processing device, a number of processing units allocated to the data warehouse based on a memory usage of each of the plurality of processing units and a degree of concurrency of each of the plurality of processing units
1, 2, 11, 20
allocating a plurality of processing units to a data warehouse, the plurality of processing units located in different availability zones, an availability zone comprising one or more data centers; as a result of monitoring a set of queries running at an input degree of parallelism on the plurality of processing units of the data warehouse, determining that the set of queries is serviceable by one fewer processing unit;
routing, by a processing device, a query from a first processing unit to a second processing unit within the data warehouse, the query having a common session identifier with another query previously provided to the second processing unit, the second processing unit determined to be caching a data segment associated with a cloud storage resource, usable by the query, wherein the cloud storage resource is independent of the plurality of processing units;
and removing the first processing unit from the data warehouse; wherein determining that the set of queries is serviceable by one fewer processing units comprises determining an availability of at least one of: processor resources for each processing unit; or
memory resources for each processing unit (claims 2, 11, 20)
2, 9, 16
wherein the degree of concurrency of each of the plurality of processing units is based on an input degree of parallelism of each query the processing units is running
1, 10, 19
as a result of monitoring a set of queries running at an input degree of parallelism on the plurality of processing units of the data warehouse, determining that the set of queries is serviceable by one fewer processing unit
5, 12, 19
wherein modifying the number of processing units allocated to the data warehouse comprises: modifying the number of processing units allocated to the data warehouse based further on a maximum time period that the query will be queued
3, 5, 12, 14, 21, 23
wherein removing the first processing unit from the data warehouse comprises triggering a shutdown of a processing unit in response to determining that the query, in combination with a current workload, does not require one or more currently allocated processing units to meet a performance metric (claim 3); wherein the performance metric comprises a maximum time period that the query will be queued (claim 5)
Claims 1-2, 5, 8-9, 12, 15-16, and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5, 13-16, 18, 26, and 27 of U.S. Patent No. 11,593,404. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-3, 5, 13-16, 18, 26, and 27 of U.S. Patent No. 11,593,404 anticipate the claims of the present application, as shown in the claim chart below.
Claim
US 19/094,721
Claim
US 11,593,404
1, 8, 15
allocating a plurality of processing units to a data warehouse, the plurality of processing units located across different availability zones, wherein each availability zone comprises one or more data centers;
routing a query to a first processing unit of the data warehouse, the query having a common session identifier with a query previously provided to the first processing unit, wherein the first processing unit caches a data segment used by the query;
and modifying, by a processing device, a number of processing units allocated to the data warehouse based on a memory usage of each of the plurality of processing units and a degree of concurrency of each of the plurality of processing units
1, 2, 13, 15, 26, 27
allocating a plurality of virtual processing units as part of a data warehouse, the plurality of virtual processing units comprising at least two virtual processing units in different availability zones, an availability zone comprising one or more data centers, each data center comprising redundant power, networking, and connectivity, wherein each of at least some of the plurality of virtual processing units accesses data within one or more databases in one or more cloud storage resources;
routing, by a processing device, one or more queries to one or more of the plurality of virtual processing units within the data warehouse, the one or more queries having a common session identifier with a query previously provided to one or more of the plurality of virtual processing units, the one or more of the plurality of virtual processing units further determined to be caching a data segment usable by the one or more queries, wherein in response to the one or more queries, the one or more virtual processing units perform database operations on a particular portion of a database table;
and dynamically changing a total number of virtual processing units to the data warehouse using at least a comparison of a targeted degree of parallelism and a runtime computed degree of parallelism of the plurality of virtual processing units, wherein the targeted degree of parallelism is input by a customer for the plurality of virtual processing units and the runtime computed degree of parallelism is computed using a number of queries running at an input degree of parallelism;
further comprising determining a query processing workload metric of the virtual processing units, wherein determining the query processing workload metric comprises determining an availability of one or more of: processor resources for each virtual processing unit; and memory resources for each virtual processing unit (claim 2)
2, 9, 16
wherein the degree of concurrency of each of the plurality of processing units is based on an input degree of parallelism of each query the processing units is running
1, 13, 14, 26
dynamically changing a total number of virtual processing units to the data warehouse using at least a comparison of a targeted degree of parallelism and a runtime computed degree of parallelism of the plurality of virtual processing units, wherein the targeted degree of parallelism is input by a customer for the plurality of virtual processing units and the runtime computed degree of parallelism is computed using a number of queries running at an input degree of parallelism
5, 12, 19
wherein modifying the number of processing units allocated to the data warehouse comprises: modifying the number of processing units allocated to the data warehouse based further on a maximum time period that the query will be queued
3, 5, 16, 18
wherein dynamically changing a total number of virtual processing units comprises adding a virtual processing unit to the data warehouse based on a query processing workload, further comprising:
determining whether a query can be processed while meeting a performance metric for the query; and
triggering a startup of a new virtual processing unit in response to determining that the query in combination with a current workload does not allow one or more currently allocated virtual processing units to meet the performance metric (claim 3);
wherein the performance metric comprises a maximum time period that the query will be queued (claim 5)
Claims 1, 8, and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 11-12, and 21-22 of U.S. Patent No. 11,630,850. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-2, 11-12, and 21-22 of U.S. Patent No. 11,630,850 anticipate the claims of the present application, as shown in the claim chart below.
Claim
US 19/094,721
Claim
US 11,630,850
1, 8, 15
allocating a plurality of processing units to a data warehouse, the plurality of processing units located across different availability zones, wherein each availability zone comprises one or more data centers; routing a query to a first processing unit of the data warehouse, the query having a common session identifier with a query previously provided to the first processing unit, wherein the first processing unit caches a data segment used by the query;
and modifying, by a processing device, a number of processing units allocated to the data warehouse based on a memory usage of each of the plurality of processing units and a degree of concurrency of each of the plurality of processing units
1, 2, 11, 12, 21, 22
routing one or more queries to one or more of a plurality of execution groups within a virtual data warehouse, the plurality of execution groups comprising at least two processing units in different availability zones, an availability zone comprising one or more data centers, each data center comprising redundant power, networking, and connectivity, the virtual data warehouse to access data within one or more databases in one or more cloud storage resources based on the one or more queries, the one or more queries having a common session identifier with a query previously provided to the execution group, the execution group further determined to be caching a data segment usable by the one or more queries, wherein each of the one or more of the plurality of execution groups comprises a set of execution nodes that is sized based on a configuration of the virtual data warehouse; and
in response to the one or more queries, each of the plurality of execution groups performing execution processes on fragments of respective queries;
and dynamically scaling, by a processor, a number of execution groups in the virtual data warehouse based at least in part on the configuration of the virtual data warehouse, the virtual data warehouse comprising at least two execution groups in different availability zones, and
a current workload of each of the plurality of execution groups, wherein the plurality of execution groups are separate and independent of the one or more databases;
further comprising determining the current workload of each of the plurality of execution groups by determining for each execution group, one or more of:
available processor resources of the execution group; and
available memory resources of the execution group (claims 2, 12, 22)
Claims 1, 5, 8, 12, 15, and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 12-13, and 21-23 of U.S. Patent No. 11,620,313. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-3, 12-13, and 21-23 of U.S. Patent No. 11,620,313 anticipate the claims of the present application, as shown in the claim chart below.
Claim
US 19/094,721
Claim
US 11,620,313
1, 8, 15
allocating a plurality of processing units to a data warehouse, the plurality of processing units located across different availability zones, wherein each availability zone comprises one or more data centers;
routing a query to a first processing unit of the data warehouse, the query having a common session identifier with a query previously provided to the first processing unit, wherein the first processing unit caches a data segment used by the query;
and modifying, by a processing device, a number of processing units allocated to the data warehouse based on a memory usage of each of the plurality of processing units and a degree of concurrency of each of the plurality of processing units
1, 2, 11, 12, 21, 22
receiving, by a resource manager, a plurality of queries that are each associated with a respective session identifier;
generating, by the resource manager, one or more execution plans for the plurality of queries, an execution plan of the one or more execution plans comprises providing a set of queries of the plurality of queries to a same execution group responsive to determining that each query of the set of queries is associated with a same session identifier;
routing, by the resource manager based on the one or more execution plans, the plurality of queries to a plurality of execution groups within a virtual data warehouse,
the plurality of queries having a common session identifier with a query previously provided to the plurality of execution groups, the plurality of execution groups further determined to be caching a data segment usable by the plurality of queries, the virtual data warehouse comprising at least two execution groups in different availability zones, an availability zone comprising one or more data centers, each data center comprising redundant power, networking, and connectivity; and
each of the plurality of execution groups accesses data within one or more databases in one or more cloud storage resources;
and scaling, by a processor, a number of execution groups in the virtual data warehouse based at least in part on a current workload of each of the plurality of execution groups, the virtual data warehouse comprising at least two execution groups in different availability zones;
determining, by the resource manager, the current workload of each of the plurality of execution groups by determining for each execution group, one or more of:
available processor resources of the execution group; and
available memory resources of the execution group (claims 2, 12, 22)
5, 12, 19
wherein modifying the number of processing units allocated to the data warehouse comprises: modifying the number of processing units allocated to the data warehouse based further on a maximum time period that the query will be queued
2, 3, 12, 13, 22, 23
further comprising:
comparing, by the resource manager, the current workload of each execution group to a performance threshold, the performance threshold comprising one or more of:
a maximum degree of concurrency for each of the plurality of execution groups; and a maximum time period that any query may be queued
Claims 1-2, 5, 8-9, 12, 15-16, and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 5, 10-12, 14, 19-21, and 23 of U.S. Patent No. 11,675,815. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-2, 5, 10-12, 14, 19-21, and 23 of U.S. Patent No. 11,675,815 anticipate the claims of the present application, as shown in the claim chart below.
Claim
US 19/094,721
Claim
US 11,675,815
1, 8, 15
allocating a plurality of processing units to a data warehouse, the plurality of processing units located across different availability zones, wherein each availability zone comprises one or more data centers;
routing a query to a first processing unit of the data warehouse, the query having a common session identifier with a query previously provided to the first processing unit, wherein the first processing unit caches a data segment used by the query;
and modifying, by a processing device, a number of processing units allocated to the data warehouse based on a memory usage of each of the plurality of processing units and a degree of concurrency of each of the plurality of processing units
1, 2, 10, 11, 19, 20
allocating a plurality of processing units to a data warehouse, the plurality of processing units comprising at least two processing units, the two processing units located in different availability zones, an availability zone comprising one or more data centers, each data center comprising redundant power, networking, and connectivity;
routing, by a processor, a query to a processing unit within the data warehouse, the query having a common session identifier with a query previously provided to the processing unit, the processing unit determined to be caching a data segment usable by the query, wherein:
the data warehouse accesses data within a database associated with a cloud storage resource; the cloud storage resource is independent of the plurality of processing units; and
each of the plurality of processing units comprises a processor and a cache memory in which data associated with the database is cached; as a result of monitoring a query workload metric, wherein the query workload metric is a number of queries running at an input degree of parallelism, determining that a processing capacity of the plurality of processing units has reached a threshold;
and changing a total number of processing units associated with the data warehouse using the query workload metric;
wherein determining the processing capacity of the plurality of processing units comprises determining an availability of at least one of:
processor resources for each processing unit; or
memory resources for each processing unit
(claims 2, 11, 20)
2, 9, 16
wherein the degree of concurrency of each of the plurality of processing units is based on an input degree of parallelism of each query the processing units is running
2, 11, 20
wherein the query workload metric is a number of queries running at an input degree of parallelism, determining that a processing capacity of the plurality of processing units has reached a threshold
5, 12, 19
wherein modifying the number of processing units allocated to the data warehouse comprises: modifying the number of processing units allocated to the data warehouse based further on a maximum time period that the query will be queued
5, 12, 14, 21, 23
wherein dynamically adding processing units to the data warehouse based on the query workload metric comprises:
determining whether a query can be processed while meeting a performance metric for the query; and
triggering startup of a new processing unit in response to determining that the query in combination with a current workload does not allow one or more currently allocated processing units to meet the performance metric (claim 3);
wherein the performance metric comprises a maximum time period that the query will be queued (claim 5)
Claims 1-2, 5, 8-9, 12, 15-16, and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5, 10-12, 14, 19-21, and 23 of U.S. Patent No. 11,983,198. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-3, 5, 10-12, 14, 19-21, and 23 of U.S. Patent No. 11,983,198 anticipate the claims of the present application, as shown in the claim chart below.
Claim
US 19/094,721
Claim
US 11,983,198
1, 8, 15
allocating a plurality of processing units to a data warehouse, the plurality of processing units located across different availability zones, wherein each availability zone comprises one or more data centers;
routing a query to a first processing unit of the data warehouse, the query having a common session identifier with a query previously provided to the first processing unit, wherein the first processing unit caches a data segment used by the query;
and modifying, by a processing device, a number of processing units allocated to the data warehouse based on a memory usage of each of the plurality of processing units and a degree of concurrency of each of the plurality of processing units
1, 2, 10, 11, 19, 20
allocating a plurality of processing units to a data warehouse, the plurality of processing units located in different availability zones, an availability zone comprising one or more data centers;
routing, by a processing device, a query to a processing unit within the data warehouse, the query having a common session identifier with a query previously provided to the processing unit, the processing unit determined to be caching a data segment associated with a cloud storage resource, usable by the query, wherein the cloud storage resource is independent of the plurality of processing units;
as a result of monitoring a number of queries running at an input degree of parallelism, determining that a processing capacity of the plurality of processing units has reached a threshold; and changing a total number of processing units associated with the data warehouse using the input degree of parallelism and the number of queries;
wherein determining the processing capacity of the plurality of processing units comprises determining an availability of at least one of:
processor resources for each processing unit; or
memory resources for each processing unit (claims 2, 11, 20)
2, 9, 16
wherein the degree of concurrency of each of the plurality of processing units is based on an input degree of parallelism of each query the processing units is running
1, 10, 19
as a result of monitoring a number of queries running at an input degree of parallelism, determining that a processing capacity of the plurality of processing units has reached a threshold
5, 12, 19
wherein modifying the number of processing units allocated to the data warehouse comprises: modifying the number of processing units allocated to the data warehouse based further on a maximum time period that the query will be queued
3, 5, 12, 14, 21, 23
wherein changing the total number of processing units associated with the data warehouse comprises triggering startup of an additional processing unit in response to determining that the query, in combination with a current workload, does not allow one or more currently allocated processing units to meet a performance metric (claim 3);
wherein the performance metric comprises a maximum time period that the query will be queued (claim 5)
Claims 1-2, 5, 8-9, 12, 15-16, and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6-7, 11, and 21-22 of U.S. Patent No. 11,615,117. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 6-7, 11, and 21-22 of U.S. Patent No. 11,615,117 anticipate the claims of the present application, as shown in the claim chart below.
Claim
US 19/094,721
Claim
US 11,615,117
1, 8, 15
allocating a plurality of processing units to a data warehouse, the plurality of processing units located across different availability zones, wherein each availability zone comprises one or more data centers;
routing a query to a first processing unit of the data warehouse, the query having a common session identifier with a query previously provided to the first processing unit, wherein the first processing unit caches a data segment used by the query;
and modifying, by a processing device, a number of processing units allocated to the data warehouse based on a memory usage of each of the plurality of processing units and a degree of concurrency of each of the plurality of processing units
6, 7, 21, 22
allocating a plurality of compute clusters on an execution platform as part of a virtual warehouse for accessing and performing queries against one or more databases in one or more cloud storage resources located on a storage platform separate from the execution platform, wherein the plurality of compute clusters is allocated separately from the one or more cloud storage resources;
routing queries directed to data within the one or more cloud storage resources to each of the plurality of compute clusters, the plurality of compute clusters comprising at least two processing units in different availability zones, an availability zone comprising one or more data centers, each data center comprising redundant power, networking, and connectivity, wherein a plurality of queries is provided to each of the plurality of compute clusters of the virtual warehouse, the plurality of queries having a common session identifier with a query previously provided to the compute cluster, the compute cluster further determined to be caching a data segment usable by the plurality of queries, and each of the plurality of compute clusters of the virtual warehouse comprise a processor and a cache memory to cache data stored in the one or more cloud storage resources; and
dynamically adding, by one or more processors, compute clusters to or removing compute clusters from the virtual warehouse based on a workload of the plurality of compute clusters, the workload using at least in part on a comparison of a runtime computed degree of concurrency on each of the plurality of compute clusters and a targeted degree of concurrency inputted by a customer, the runtime computed degree of concurrency is computed using a number of queries running at an input degree of concurrency, and wherein the adding or removing the compute clusters does not increase or decrease the one or more cloud storage resources;
further comprising determining the workload for the plurality of compute clusters, wherein determining the workload comprises determining availability of one or more of:
processor resources for each of the plurality of compute clusters; and memory resources for each of the plurality of compute clusters (claim 7)
2, 9, 16
wherein the degree of concurrency of each of the plurality of processing units is based on an input degree of parallelism of each query the processing units is running
6
the workload using at least in part on a comparison of a runtime computed degree of concurrency on each of the plurality of compute clusters and a targeted degree of concurrency inputted by a customer, the runtime computed degree of concurrency is computed using a number of queries running at an input degree of concurrency
5, 12, 19
wherein modifying the number of processing units allocated to the data warehouse comprises: modifying the number of processing units allocated to the data warehouse based further on a maximum time period that the query will be queued
11
wherein the performance metric comprises a maximum time period that the query will be queued
Claims 1-2, 5, 8-9, 12, 15-16, and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5, 10-12, 14, 20-22, and 24 of U.S. Patent No. 11,593,403. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-3, 5, 10-12, 14, 20-22, and 24 of U.S. Patent No. 11,593,403 anticipate the claims of the present application, as shown in the claim chart below.
Claim
US 19/094,721
Claim
US 11,593,403
1, 8, 15
allocating a plurality of processing units to a data warehouse, the plurality of processing units located across different availability zones, wherein each availability zone comprises one or more data centers;
routing a query to a first processing unit of the data warehouse, the query having a common session identifier with a query previously provided to the first processing unit, wherein the first processing unit caches a data segment used by the query;
and modifying, by a processing device, a number of processing units allocated to the data warehouse based on a memory usage of each of the plurality of processing units and a degree of concurrency of each of the plurality of processing units
Claims 1, 2, 10, 11, 20, 21 (US 11,593,403):
allocating a plurality of processing units as part of a data warehouse, the plurality of processing units comprising at least two processing units in different availability zones, an availability zone comprising one or more data centers, each data center comprising redundant power, networking, and connectivity;
routing, by a processor, one or more queries to a processing unit within the data warehouse, the one or more queries having a common session identifier with a query previously provided to the processing unit, the processing unit further determined to be caching a data segment usable by the one or more queries, wherein the data warehouse accesses data within one or more databases in one or more cloud storage resources based on the one or more queries provided to each processing unit and the one or more cloud storage resources are separate and independent of the plurality of processing units, wherein each of the plurality of processing units comprises a processor and a cache memory in which data within the one or more databases is cached;
monitoring a query workload metric of the plurality of the processing units to determine that a processing capacity of the plurality of processing units has reached a threshold processing capacity;
and changing a total number of processing units to the data warehouse as needed using a configuration of the data warehouse and the query workload metric of the processing units, wherein the query workload metric is a number of queries running at an input degree of parallelism;
wherein determining the processing capacity of the plurality of processing units comprises determining an availability of one or more of:
processor resources for each processing unit; and
memory resources for each processing unit (claim 2)
Claims 2, 9, 16 (US 19/094,721):
wherein the degree of concurrency of each of the plurality of processing units is based on an input degree of parallelism of each query the processing units are running
Claims 1, 10, 20 (US 11,593,403):
wherein the query workload metric is a number of queries running at an input degree of parallelism
Claims 5, 12, 19 (US 19/094,721):
wherein modifying the number of processing units allocated to the data warehouse comprises: modifying the number of processing units allocated to the data warehouse based further on a maximum time period that the query will be queued
Claims 3, 5, 12, 14, 22, 24 (US 11,593,403):
wherein dynamically adding processing units to the data warehouse based on the query workload metric comprises:
determining whether a query can be processed while meeting a performance metric for the query; and
triggering startup of a new processing unit in response to determining that the query in combination with a current query workload metric does not allow one or more currently allocated processing units to meet the performance metric (claim 3);
wherein the performance metric comprises a maximum time period that the query will be queued (claim 5)
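Again purely as an illustrative aid, and not as a characterization of either set of claims, the charted routing and startup behavior (routing a query to the processing unit sharing a common session identifier with a previously provided query, that unit caching a usable data segment, and triggering startup of a new processing unit when no currently allocated unit can meet a maximum queue-time performance metric) might be sketched as follows. All names (`session_id`, `queue_secs`, `route_query`, etc.) are hypothetical names supplied by the editor.

```python
# Illustrative sketch only (hypothetical names) of the charted limitations:
# session-affinity routing to a unit that caches a usable data segment, with
# startup of a new processing unit when the max-queue-time metric cannot be met.

def route_query(query, units, max_queue_secs):
    """Return the processing unit that should run `query`; may start a new one."""
    # Session affinity: a unit that previously served this session identifier
    # is presumed to cache a data segment usable by the query.
    for unit in units:
        if query["session_id"] in unit["sessions"]:
            return unit
    # Otherwise pick the unit with the shortest expected queue, if it still
    # satisfies the performance metric (maximum time the query will be queued).
    best = min(units, key=lambda u: u["queue_secs"])
    if best["queue_secs"] <= max_queue_secs:
        return best
    # No currently allocated unit can meet the metric: trigger startup of a
    # new processing unit and route the query to it.
    new_unit = {"sessions": set(), "queue_secs": 0.0}
    units.append(new_unit)
    return new_unit
```

Under this reading, the performance metric acts as the scale-out trigger: a new unit is allocated only when the incoming query, combined with the current workload, would otherwise exceed the maximum queue time.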
Allowable Subject Matter
Claims 7 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Pertinent Prior Art
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Goldberg (US 2014/0108587) discloses dynamic search partitioning;
Shastry (US 2005/0071842) discloses managing data using parallel processing in a clustered network.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM P BARTLETT whose telephone number is (469)295-9085. The examiner can normally be reached on M-Th 11:30-8:30, F 11-3.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi can be reached on 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM P BARTLETT/
Primary Examiner, Art Unit 2169