DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
An Information Disclosure Statement (IDS) has not been submitted as of the mailing of the last Office Action dated 12 August 2025. Applicant is reminded of the continuing obligation under 37 CFR 1.56 to timely apprise the Office of any information which is material to patentability of the claims under consideration in this application.
Introductory Remarks
In response to communications filed on 12 November 2025, claims 20-23 are amended per Applicant's request. Claims 1, 8-9, and 16 are cancelled. No claims were withdrawn. No new claims were added. Therefore, claims 2-7, 10-15, and 17-23 are presently pending in the application, of which claims 21, 22, and 23 are presented in independent form.
The previously raised 101 rejection of the pending claims is withdrawn in view of the amendments to the claims.
The previously raised 103 rejection of the pending claims is withdrawn in view of the amendments to the claims. A new ground of rejection is set forth below.
Response to Arguments
Applicant’s arguments filed 12 November 2025 with respect to the objection of claim 20 (see Remarks, p. 9) have been fully considered and are persuasive. Applicant’s amendments address the previously raised objection, and the objection has been accordingly withdrawn.
Applicant’s arguments filed 12 November 2025 with respect to the rejection of the claims under 35 U.S.C. 101 (see Remarks, p. 9-14) have been fully considered but are not persuasive.
Applicant argues that there are various limitations that cannot be practically performed in the mind and thus the claims do not recite a mental task or process. See Remarks, p. 9-10. However, this is unpersuasive.
Firstly, Applicant’s argument is based on whether there are any limitations that cannot be performed in the mind. However, this is an incorrect construction of Step 2A, Prong 1. This step considers whether the claim recites any abstract steps, not whether the claim is free of such recitations.
Secondly, Applicant attempts to reframe the claims as being necessarily within a computing, i.e., technological, field, e.g., with the recitations of maintaining pipelines in a cloud infrastructure, invoking virtual data warehouses of predetermined sizes, and executing contact grouping queries against databases.
However, Applicant misconstrues the focus of the claims, which is the relevant inquiry under 101. The focus of the claims is on determining whether to perform these steps (e.g., by analyzing the queries), and then subsequently executing those purportedly non-abstract steps in response to the determination. In other words, the focus of the claims is on determining when to perform those computing steps. As a result, the steps of, e.g., invoking virtual data warehouses of predetermined sizes and executing contact grouping queries against databases, at best, amount to nothing more than mere instructions to apply the judicial exception (abstract idea), which does not amount to significantly more.
With respect to the “maintaining pipelines” step, this is claimed in such a manner that it does no more than provide a context within which the claimed steps take place. The claims themselves are not about maintaining pipelines, but rather utilize the maintained pipelines to invoke the virtual data warehouse service. In other words, at best, it only provides additional context, i.e., narrowing, of the abstract idea, which does not amount to significantly more.1
Thus, the claims do not meaningfully limit the manner in which the claimed invention performs the determination of whether to invoke a virtual warehouse. Similar to the claims found to be abstract in CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366 (Fed. Cir. 2011), the present claims contain no hint as to how the information regarding the queries will be sorted, weighed, and ultimately converted into a useable conclusion that an additional virtual warehouse is needed to be invoked. Rather, the claims recite the determination step at a high level of generality, either stating at a high level that some sort of historical record is maintained and broadly utilized in making this determination (which does nothing more than attempt to limit the claims to a particular context, i.e., an insignificant field-of-use limitation, combined with the well-understood, routine, and conventional activity of electronic recordkeeping), or that the query is of a certain nature, yet does not provide any additional detail with respect to how the analysis is done (thus, the type of query is another insignificant field-of-use limitation, describing the context rather than a particular manner of achieving the result).
Applicant’s argument with respect to Step 2A, Prong Two, that the claims integrate any alleged judicial exception into a practical application by improving upon a technology or technical field (see Remarks, p. 11-14), has been fully considered but is not persuasive.
Applicant’s argument that the claimed invention “achieves SLO compliance through a distinctive technical methodology that performs critical calculations and pipeline allocation decisions between scheduled execution cycles rather than during query execution… This post-execution analysis enables the system to compare actual performance metrics against SLO time intervals and make informed scaling decisions by dynamically adding new pipelines of the same warehouse size when thresholds are exceeded” (see Remarks, p. 12) is unpersuasive. This argument is nonsensical from a technical standpoint, as there is no point in performing scaling operations (i.e., adding additional resources) after the query has already been completed. Rather, the scaling operations occur prior to query completion, i.e., where the query needs to be completed on another virtual warehouse. See, e.g., Specification, [0015-0016] and [0030].
Therefore, Applicant’s argument “This approach provides a technical advantage over prior art systems that attempt predictive resource allocation during execution cycles” (see Remarks, p. 12) is unpersuasive, as (1) predictive resource allocation does not occur during query execution (as it does not make sense to predict which resource to allocate the query to while the query is already being executed on another resource), but rather prior to query execution, and (2) this is precisely what is being disclosed by Applicant’s own Specification. Applicant’s purported improvements stem from elements that are unsupported by the Specification, and indeed attempt to claim an improvement over what is precisely being described by their own specification.
Therefore, Applicant’s argument “The claims thus recite a particular technological solution to a technical problem of maintaining SLO compliance in virtual data warehouse environments—a reactive performance monitoring and scaling system that leverages post-execution analysis to make intelligent infrastructure provisioning decisions without degrading query execution performance” (see Remarks, p. 12) is unpersuasive, for at least the reasons of: (1) provisioning does not occur during query execution but prior to query execution, (2) there is no post-execution analysis, but rather pre-execution analysis to determine how to provision, and (3) there is no “reactive” performance monitoring and scaling system in the sense that Applicant is describing. Therefore, this purported improvement is moot in view of the fact that Applicant’s own specification does not support these interpretations set forth in the remarks.
Applicant argues that additional purported improvements provided by the amended claim limitations include performance-based monitoring, service level objective compliance, and conditional infrastructure scaling (see Remarks, p. 13). However, these are unpersuasive for at least the following reasons:
(1) The claim limitation mapped to the purported “performance-based monitoring” (i.e., “subsequent to executing the plurality of contact grouping queries, calculating an expected total execution runtime for all contact grouping queries allocated to the first data processing pipeline”) is taken out of context of the Specification. This is actually the limitation used for predicting the workload that would result from executing the query, and thus determining whether to provision additional resources. Therefore, due to Applicant’s mischaracterization of the claimed invention, whether this limitation is an improvement alleged by Applicant is moot.
(2) The service level objective compliance is, at best, nothing more than performing an abstract idea in conjunction with something that has not been invented by applicant, and indeed is well-known.2 Therefore, to essentially apply the claimed invention of provisioning additional resources within the context of service level objectives, e.g., in order to meet that service level objective, such as guaranteeing that queries (or more generally, other tasks) would execute within a certain timeframe, is not an improvement over the technical field.
(3) The “conditional infrastructure scaling” is also well-known3, in which additional resources are only provisioned as necessary. In other words, this purported improvement is well-known in the area of (work)load balancing, and thus does not improve upon this relevant area.
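For illustration, conditional scaling of this conventional kind reduces to a single threshold comparison. The following sketch (with hypothetical names; an illustrative simplification of well-known threshold-based provisioning, not a characterization of any particular system) depicts the concept:

```python
import math

# Illustrative sketch only; all names are hypothetical.

def additional_workers_needed(current_workers: int, queued_work: float,
                              capacity_per_worker: float) -> int:
    # Provision additional resources only when the queued work exceeds the
    # capacity of the resources already provisioned.
    needed = math.ceil(queued_work / capacity_per_worker)
    return max(needed - current_workers, 0)
```

For example, with 2 workers provisioned, 500 units of queued work, and a per-worker capacity of 100 units, the sketch yields 3 additional workers; with sufficient capacity already provisioned, it yields 0, i.e., resources are added only as necessary.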
Nonetheless, these limitations representing the purported benefits recite abstract ideas and/or are additional elements that do not amount to significantly more. See the 101 rejection below for further details.
Thus, for at least the aforementioned reasons and those set forth in the 101 rejection below, the 101 rejection has been maintained.
Applicant’s arguments filed 12 November 2025 with respect to the rejection of the claims under 35 U.S.C. 103 (see Remarks, p. 14-18) have been fully considered but are not persuasive.
Applicant argues that the claimed invention enables the system to “maintain SLO compliance without degrading the execution performance of the actual contact grouping queries, since the computationally intensive analysis and scaling operations occur after query completion rather than concurrently with query processing” (see Remarks, p. 14-15). To reiterate from the 101 arguments above, this is a nonsensical argument from a technical standpoint, as there is no point in performing scaling operations (i.e., adding additional resources) after the query has already been completed. Rather, the scaling operations occur prior to query completion, i.e., where the query needs to be completed on another virtual warehouse. See, e.g., Specification, [0015-0016] and [0030].
Applicant further argues that Gawande pertains to “predictive resource selection that occurs before or during query execution, rather than the claimed reactive scaling that occurs after execution completion” (see Remarks, p. 15). This is as nonsensical as the argument addressed above, and is not supported by the Specification, which shows that the scaling occurs prior to query execution. Indeed, what Applicant describes Gawande as doing, i.e., “‘[routing]’ queries to selected resources based on this predictive analysis” (see Remarks, p. 15), is precisely what is being claimed by Applicant, in which the system calculates an expected total execution runtime, which is compared to a time interval indicated in a service level objective, etc. See, e.g., dependent claims 6 and 14, and Specification, [0015-0016] and [0030]. Note that calculating an “expected” total execution runtime is essentially predicting.
Applicant’s arguments against Cseri (see Remarks, p. 15) are unpersuasive for at least the same reasons above and as articulated in the 103 rejection below.
Applicant argues that Gawande “requires concurrent [performance] analysis during resource selection” (see Remarks, p. 16). This is nonsensical. The results of the performance analysis (from previous queries) are used to determine resource selection. Thus, Applicant is incorrect in the characterization of Gawande.
Applicant further argues that “[Gawande’s] concurrent analysis necessarily interferes with query execution performance” (see Remarks, p. 16). This is also nonsensical, as (1) as stated previously, there is no “concurrent” analysis, since the analysis occurs prior to query execution, and (2) because the analysis occurs prior to query execution, there is no interference with query execution performance.
Applicant argues that the claimed invention “solves this interference problem through its specific post-execution methodology” (see Remarks, p. 16). However, as stated numerous times, this is nonsensical, as (1) Applicant is arguing features that do not make sense on a technical level with respect to query execution and resource allocation/scaling, and, most importantly, (2) these features are not supported by the Specification.
Applicant’s argument with respect to the “streamlined, deterministic assignment methodology that relies on just two easily-obtained metrics” and that “In contrast, Gawande describes a significantly more complex predictive approach…” (see Remarks, p. 16) is unpersuasive. The question is not whether there is additional complexity or less complexity than what is claimed, which is a red herring, but rather the question is whether the prior art describes the same claimed features as the claimed invention. For at least the reasons set forth in the 103 rejection below, it has been shown that Gawande in combination with the other prior art references discloses the claimed features described by Applicant here.
Additionally, Applicant’s argument that Gawande “requires ongoing analysis…creating substantial computational overhead during the critical resource selection phase” (see Remarks, p. 16) is unpersuasive, as Applicant’s own Specification discloses keeping a historical record of previous queries which are utilized during the resource selection phase. See, e.g., Specification, [0015-0016] and [0030].
Therefore, Applicant’s attempts to distinguish the claimed invention from the prior art in the rest of the Remarks (see Remarks, p. 17-18), are unpersuasive for at least the reasons set forth above and those in the 103 rejection below, as Applicant not only mischaracterizes their own claimed invention, but also at times mischaracterizes the prior art, in addition to mischaracterizing the fundamental principles of prior art rejections under 35 U.S.C. 103.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are: “configured to” in claims 3, 6, 11, 14, and 18; and “means for” in claims 19-20 and 23.
Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that performs the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 2-7, 10-15, and 17-23 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Independent Claims 21-23 recite “subsequent to executing the plurality of contact grouping queries, calculating an expected total execution runtime for all contact grouping queries allocated to the first data processing pipeline”. There is no support for this step of calculating an expected total execution runtime after executing the contact grouping queries. Rather, this step occurs prior to the execution of the plurality of contact grouping queries. See, e.g., Specification, [0015-0016] and [0030].
The dependent claims are rejected at least by virtue of their dependency on their respective independent claims.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-7, 10-15, and 17-23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Independent Claims 21-23 recite “subsequent to executing the plurality of contact grouping queries, calculating an expected total execution runtime for all contact grouping queries allocated to the first data processing pipeline”. Given that the queries have already been executed, it does not make sense that the analysis would take place after those queries had already been executed. Rather, the claimed invention pertains to performing this analysis, and then dynamically reallocating the query to a data pipeline with another warehouse. See, e.g., Specification, [0015-0016] and [0030].
Dependent Claims 6 and 14 recite “calculating an expected total execution runtime, for all contact grouping queries allocated to a specific data pipeline in the plurality of pipelines…”. It is unclear which contact grouping queries this is referring to, i.e., a lack of antecedent basis.
The remaining dependent claims are rejected at least by virtue of their dependency on their respective independent claims, and for failing to cure the deficiencies of those claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 2-7, 10-15, and 17-23 are rejected under 35 U.S.C. 101 because the claims are directed to a judicial exception (i.e., an abstract idea) without significantly more.
Independent claims 21-23 recite an allocation determination process in which assigning the execution of the query by a particular data pipeline is determined based on a mapping of i) a combination of a count of records and a query classification for the query, to ii) the predetermined size of a warehouse that the data pipeline is configured to invoke. Such steps encompass an evaluation, observation, and/or judgment, which falls under the “Mental Processes” grouping of abstract ideas.4
Dependent claims 2, 10, and 17 recite classifying a query as “simple” when the query does not include a SQL JOIN statement, and classifying the query as “complex” when the query includes a SQL JOIN statement. Dependent claims 3, 11, and 18 recite determining a number of compute resources that a virtual data warehouse service will devote to executing the queries allocated to the data pipeline. Dependent claims 5, 13, and 20 recite calculating an average query execution runtime for a predetermined number of prior query executions. Independent claims 21-23 and dependent claims 6 and 14 recite calculating an expected total execution runtime for all queries allocated to a specific data pipeline, using the average query execution runtime for each query in the calculation (claims 6 and 14 only), and comparing the expected total execution runtime for all queries allocated to the specific data pipeline to a time interval indicated in a service level objective (as well as implicitly determining to add a warehouse with the same fixed size as the warehouse that the specific data pipeline is configured to invoke; although the claims merely state that such a warehouse is “added,” it is implied that the determination to use the same fixed size occurs prior to performing the “adding a new data pipeline” step). Such steps encompass an evaluation, observation, and/or judgment, which falls under the “Mental Processes” grouping of abstract ideas.
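To further illustrate the level of generality at which these determinations are recited, the recited evaluations amount to simple classifications, averages, and threshold comparisons. The following sketch (with hypothetical names; provided purely for illustration of the recited logic, not as a characterization of any particular implementation) captures the entirety of the recited determinations:

```python
# Illustrative sketch only; all names are hypothetical and do not appear in
# the claims or the Specification.

def classify_query(sql: str) -> str:
    # Claims 2, 10, 17: "complex" if the query includes a SQL JOIN statement,
    # otherwise "simple".
    return "complex" if "JOIN" in sql.upper() else "simple"

def average_runtime(prior_runtimes: list[float], n: int) -> float:
    # Claims 5, 13, 20: average query execution runtime over a predetermined
    # number (n) of prior query executions.
    window = prior_runtimes[-n:]
    return sum(window) / len(window)

def needs_new_pipeline(expected_runtimes: list[float], slo_interval: float) -> bool:
    # Claims 6, 14, and 21-23: compare the expected total execution runtime for
    # all queries allocated to a pipeline against the SLO time interval.
    return sum(expected_runtimes) > slo_interval
```

As the sketch shows, each recited determination is a single classification, arithmetic average, or threshold comparison, i.e., an evaluation or judgment of the kind a person could perform mentally or with pen and paper.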
Because the claims cover performance of the limitation in the mind but for the recitation of generic computer components, the claims fall within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
The judicial exception is not integrated into a practical application of the idea. The claims recite various computing hardware components (processor, memory storage device), which are recited at a high level of generality and recited so generically that they represent no more than mere instructions to apply the judicial exception on a computer (see MPEP 2106.05(f)). These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer (see MPEP 2106.05(h)).
Furthermore, although the claims recite steps involving data pipelines and associated virtual warehouses, because the claimed steps themselves are concerned with (automatically) determining the selection of a data pipeline, a determination that was typically performed by people mentally, such additional elements do not amount to significantly more, as they are directed to insignificant field-of-use limitations, describing the context rather than a particular manner of achieving the result.
Similarly, the recitations of the data, queries, and tables pertaining to “contact groups” or “contacts” (i.e., the “contact records”) do nothing more than describe the context, i.e., insignificant field-of-use limitations, and thus do not amount to significantly more, as they do not further limit the particular process or structure by which a computer performs the determination of which data pipeline to allocate a query task/job to, beyond what can be mentally performed by a person.
Further additional elements found in independent claims 21-23, which recite maintaining a mapping of the queries to the sizes of the virtual warehouses (an insignificant extra-solution activity), and subsequently invoking the data pipeline to execute the queries to update the table in the virtual data warehouse, amount to nothing more than mere instructions to apply the judicial exception via a computer. Additionally, the latter step of invoking the data pipeline, which results in updating a table in the virtual data warehouse, is an insignificant post-solution activity. Similarly, the recitations in independent claims 21-23 and dependent claims 6 and 14 of adding a new data pipeline configured to invoke a warehouse of a same fixed size are nothing more than mere instructions to apply the judicial exception.
Dependent claims 4-5, 7, 12-13, 15, and 19-20 recite updating a cache with various types of information. However, such steps are insignificant post-solution activities which are unrelated to the determination steps of which data pipeline should be selected for executing a query.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements reciting the use of various computing hardware components amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
The various claimed limitations involving maintaining data, e.g., a plurality of queries, and maintaining a mapping of the queries to the virtual warehouses, as well as updating data (e.g., updating a table, updating a cache), are nothing more than the well-understood, routine, and conventional activities of electronic recordkeeping.5 See, e.g., MPEP § 2106.05(d)(II) (“Electronic recordkeeping”, or alternatively, “Storing and retrieving data from memory”). Additionally, using a computer to issue automated instructions is also well-understood, routine, and conventional.6
Thus, when taking the claims separately or as a whole (i.e., as an ordered combination), the claims do not recite any additional elements that amount to significantly more than the judicial exception. The claims primarily recite the abstract steps of determining data pipelines to select for executing certain queries based on an analysis of the queries, but do not confine the claims to a specific means or manner by which a computer would carry out the claimed steps beyond what is capable of being performed in the mind of a person who may perform those assessments, e.g., by analyzing the query statement, calculating/estimating the runtime, and determining the best pipeline to select based on certain resource configurations.7
Attempting to broadly state that the claims fall within the context of “data pipelines” and “virtual warehouses” does not save the claims from abstraction, nor does the use of generic computing components, all of which do nothing more than attempt to limit the claims to a particular technological environment, i.e., implementation via computers. However, the claimed steps themselves are concerned with analyzing queries and performing some sort of automatic selection/determination of a data pipeline (which, as noted in the prior art, was previously performed by people, such that the variously disclosed systems pertain to simplifying such processes by automating those steps). Thus, merely reciting that data pipelines and virtual warehouses are involved is not enough for patent eligibility.
In CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366 (Fed. Cir. 2011), the Federal Circuit found the claims to be directed to an abstract idea, as the claims “contain[ed] no hint as to how the information regarding the Internet transactions will be sorted, weighed, and ultimately converted into a useable conclusion that a particular transaction is fraudulent. The claims in this case are therefore even more abstract than the claims in Flook” (CyberSource, p. 20, footnote 4, citing Parker v. Flook, 437 U.S. 584, 98 S. Ct. 2522 (1978)). Here, the present claims do not contain any hint as to how the claimed invention, i.e., the mapped information, is utilized to analyze the query in any particular manner, e.g., how it is sorted, weighed, and ultimately converted into a useable conclusion as to which data pipeline and warehouse is best suited for executing a particular query.
As a result, the updating-cache steps are unrelated to such determination steps, and are not utilized in any particular manner to aid a computer in arriving at any particular determination about the query, and thus which data pipeline to select. Rather, the cache is merely utilized as a storage unit for holding data, i.e., storing the results of the collection and analysis. Because the cache is unrelated to the analysis itself, such updating steps do not amount to significantly more.
Lastly, attempting to limit the claims to contact information, e.g., contact grouping, contact records, etc., amounts to an insignificant field-of-use limitation, as such limitations are unrelated to the analysis of the queries and to the determination of which data pipeline and warehouse to select. Therefore, such limitations do not amount to significantly more.
As seen, the claimed steps involving the analysis of the query and the determination/selection of the data pipeline and associated virtual warehouse are described at a high level of generality, and not as a particular means by which a computer would carry out any technical steps involved in arriving at such determinations. Rather, the computer is merely invoked in a generic manner, e.g., by stating that the allocation is performed and the warehouse is invoked. Because such steps merely instruct a generic computer to implement the abstract idea, they do nothing more than attempt to link the judicial exception to a particular technological environment, namely, implementation via computers.
Therefore, for at least the aforementioned reasons, the claims have been found to be directed to a judicial exception without significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-4, 7, 11-12, 15, 18-19, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Gawande et al. (“Gawande”) (US 2018/0060394 A1), in view of Cseri et al. (“Cseri”) (US 2021/0342360 A1).
Regarding claim 3: Gawande as modified teaches The computer-implemented method of claim 21, wherein the predetermined size of the virtual data warehouse that each data pipeline is configured to invoke determines a number of compute resources that the virtual data warehouse service will devote to executing contact grouping queries allocated to the data pipeline (Gawande, [0046], where managed query service 270 may implement resource planner 330 to select available computing resources from pools for execution of queries based on evaluated collected data statistics associated with query execution, determine an estimated number or configuration of computing resources for executing the query within some set of parameters, and then identify which of the available resources from a pool may best fit the estimated number/configuration of resources. See Cseri, [0042] and [0046] below with regards to the “virtual warehouse” limitation).
Regarding claim 4: Gawande as modified teaches The computer-implemented method of claim 21, further comprising:
subsequent to executing each contact grouping query to update a contact group table, updating a cache with data to indicate a contact grouping query execution runtime for the contact grouping query (Gawande, [0024], where in addition to generating query results 142, query execution data 116 may be collected and stored as part of query execution history 130, in order to update the resource selection data available for analysis at resource selection 110 with further examples of resource configuration mapped to a performance outcome for a query. See Gawande, [0074], where a historical query model may be maintained that models the performance of queries with different characteristics based on different execution outcomes, e.g., time to complete, cost to complete, resources consumed to complete, etc. See Gawande, [0045], where query history, query execution logs, and other managed query service historical data may be maintained in an internal data store.
See Gawande, [0057-0058], where query model(s) 720 can be generated based on table metadata 752 and updating the query model(s) 720 accordingly. Updates to query model(s) 754 may be performed in response to trigger events, e.g., number of queries processed since the last update, number of new queries processed since the last update, new set of execution data 742 or table metadata 754 received, etc. (i.e., “subsequent to executing the…query to update the…table”)).
Although Gawande does not appear to explicitly state that a “cache” is utilized, Gawande states that an internal data store may maintain such data. Therefore, it would have been obvious to one of ordinary skill in the art to have modified Gawande to utilize caches with the motivation of quickly accessing such information (i.e., since caches provide local memory storage that can return information quickly).
Regarding claim 7: Gawande as modified teaches The computer-implemented method of claim 21, further comprising:
subsequent to executing a first contact grouping query to update a first contact group table, updating a cache with data to indicate the count of contact records for the entity associated with the first contact grouping query (Gawande, [0057-0058], where query model(s) 720 can be generated based on table metadata 752 and updating the query model(s) 720 accordingly. Table metadata may include information describing tables or other data evaluated or searched by queries, such as the number of rows in a table, the number of distinct values in a column, the number of null values in a column, the cardinality of data values in a column, size or number of data blocks in a table, etc. Updates to query model(s) 754 may be performed in response to trigger events, e.g., number of queries processed since the last update, number of new queries processed since the last update, new set of execution data 742 or table metadata 754 received, etc. (i.e., “subsequent to executing the…query to update the…table”). See also Gawande, [0059], where information related to a prior query, such as the number of rows in the access table, may be used to generate feature vectors that create a feature space for performing comparisons with newly received queries. See Gawande, [0045], where query history, query execution logs, and other managed query service historical data may be maintained in an internal data store (i.e., “updating a [storage] with data to indicate the count of…records…associated with the…query”)).
Although Gawande does not appear to explicitly state that a “cache” is utilized, Gawande states that an internal data store may maintain such data. Therefore, it would have been obvious to one of ordinary skill in the art to have modified Gawande to utilize caches with the motivation of quickly accessing historical query information (i.e., since caches provide local memory storage that can return information quickly).
Regarding claim 11: Claim 11 recites substantially the same claim limitations as claim 3, and is rejected for the same reasons.
Regarding claim 12: Claim 12 recites substantially the same claim limitations as claim 4, and is rejected for the same reasons.
Regarding claim 15: Claim 15 recites substantially the same claim limitations as claim 7, and is rejected for the same reasons.
Regarding claim 18: Claim 18 recites substantially the same claim limitations as claim 3, and is rejected for the same reasons.
Regarding claim 19: Claim 19 recites substantially the same claim limitations as claim 4, and is rejected for the same reasons.
Regarding claim 21: Gawande teaches A computer-implemented method for managing contact grouping queries on a Software-as-a-Service (SaaS) platform, the method comprising:
maintaining a plurality of data processing pipelines, each data processing pipeline to invoke, via the cloud-based … service …, a [resource] of a predetermined size to execute … queries to update … tables in the database (Gawande, [0052-0055], where query 530 may be received at managed query service control plane 320, which may submit the query 532 to resource planner 330. Resource planner 330 may analyze the query to determine the optimal cluster to process the query and submit the query to query tracker 340 indicating the selected cluster 536 for execution. The query 538 subsequently has its execution initiated at the selected provisioned cluster 510. The provisioned cluster 510 can generate a query execution plan and execute the query 544 with respect to data set(s) 520 according to the query plan. Note that the cluster 610 may implement nodes with query engines 624 and query engines 632a-n.
See Gawande, [0077], where the query is evaluated with respect to resource configurations for executing the query, where a computing resource may be selected to execute the query from a plurality of differently configured computing resources that execute queries. The plurality of computing resources may be pre-configured according to query engines of different types, different configuration settings, and/or different sizes (e.g., number of nodes or slots in a cluster).
See Gawande, [0035], where data storage service(s) 230 may include various types of database storage services for storing, querying, and updating data. See Gawande, [0043], where a data set schema may be part of a table definition so that a query engine (executing on a computing resource) may be able to understand the data being queried, and tables or other data are evaluated or searched by queries using table metadata (Gawande, [0057]) (implying that updating data may include updating data stored in those tables).
See Gawande, [0035], where data storage service(s) 230 may be implemented as a network-based service that enables clients 250 to operate a data storage system in a cloud or network computing environment (i.e., “cloud-based…service”));
maintaining a mapping of … queries to sizes of [resources], the mapping of each … query to a size of a [resource] based on a combination of i) a count of … records …, and ii) a query classification for the … query (Gawande, [0024], where query execution data 116 may be collected and stored as part of query execution history 130 in order to update the resource selection data available for analysis at resource selection 110 with further examples of resource configuration mapped to a performance outcome for a query. See also Gawande, [0056], where the system maps different types of queries and resource configurations with different outcomes via query model(s) 754 (i.e., “maintaining a mapping of…queries”). In this way, query model(s) 720 can classify or otherwise identify features to be compared with received queries 702 in order to determine a configuration for executing the query according to a received execution limitation 704. Query model(s) 720 may be generated using many different sources of information, including data evaluated or searched by queries such as the number of rows in a table, number of distinct values in a column, cardinality of data values in a column, size or number of data blocks in a table, etc. (i.e., “a count of records”).
See also Gawande, [0060-0061], where the query and query execution plans may be provided and evaluated using query model(s) 720, which can classify the query based on a feature vector generated for that query, where the resulting classifications may include a number of nodes, slots, containers, or other components for a computing resource as well as the configuration for a query engine, with different types of query engines being implemented by the system to execute queries (i.e., “a query classification for the…query”));
assigning the execution of each … query to a data processing pipeline in the plurality of data processing pipelines based on the mapping (Gawande, [0060-0061], where the query and query execution plans may be provided and evaluated using query model(s) 720, which can classify the query based on a feature vector generated for that query. See Gawande, [0046], where resource planner 330 selects available computing resources from pools for execution of queries based on evaluated collected data statistics associated with query execution, determines an estimated number or configuration of computing resources for executing the query within some set of parameters, and then identifies which of the available resources from a pool may best fit the estimated number/configuration of resources); [and]
… invoking a first virtual data warehouse of a first size and executing a plurality of contact grouping queries allocated to the first data processing pipeline with the first virtual data warehouse to update contact records in contact grouping tables (Gawande, [0052], where resource planner 330 may submit the query to query tracker 340 indicating the selected cluster 536 for execution, and query tracker 340 may then initiate execution of the query 538 at the provisioned cluster 510, sending a query execution instruction to a managed query agent 512. See Gawande, [0039], where network-based requests may include requests for services, e.g., a request to create, read, write, obtain, or modify data in data storage service(s) 240. See Gawande, [0043], where the query pertains to a table, via a data set schema that identifies the field or column data types of a table so that a query engine may be able to understand the data being queried.
Recall from Gawande, [0035], above where data storage service(s) 230 may include various types of database storage services for storing, querying, and updating data. See Gawande, [0043], where a data set schema may be part of a table definition so that a query engine (executing on a computing resource) may be able to understand the data being queried (implying that updating data may include updating data stored in those tables)).
Gawande does not appear to explicitly teach maintaining a plurality of contact grouping queries for a plurality of entities, each contact grouping query associated with an entity and each contact grouping query, when executed, to update contact records for the entity in a contact group table stored in a database of a virtual data warehouse service; that the type of query involved pertains to contact grouping queries; that the type of resource pertains to a virtual data warehouse; that the data processing pipeline is invoked according to a predefined schedule; that the type of data pertains to contact records for the entity associated with the contact grouping query; that the invocation of the resource for executing the query is according to a predefined schedule; subsequent to executing the plurality of contact grouping queries, calculating an expected total execution runtime for all contact grouping queries allocated to the first data processing pipeline; comparing the expected total execution runtime to a time interval associated with a service level objective; and when the expected total execution runtime exceeds the time interval indicated in the service level objective, dynamically adding a new data processing pipeline configured to invoke a virtual data warehouse of the same size as the first virtual data warehouse.
Cseri teaches maintaining a plurality of … queries …, each … query, when executed, to update … records … in a … table stored in a database of a virtual data warehouse service; that the type of resource pertains to a virtual data warehouse (Cseri, [0034], where one or more jobs may be stored in the queue 124. Each of those jobs may be communicated to the compute service manager 108 to be scheduled and executed. The queue 124 may determine a job to be performed based on a trigger event such as the updating of one or more rows in a table. See Cseri, [0023] and [0027], where the disclosed system automates virtual warehouse management to specify particular virtual warehouse requirements to execute a set of tasks in a given job (i.e., “virtual warehouse service”), and the network-based data warehouse system 102 includes a compute service manager 108 in communication with a database 114 and an execution platform 110 (i.e., “a database of a virtual warehouse service”));
that the data processing pipeline is invoked according to a predefined schedule; that the invocation of the resource for executing the query is according to a predefined schedule (Cseri, [0042], where during typical operation, the network-based data warehouse system 102 processes multiple jobs determined by the compute service manager 108. These jobs are scheduled and managed by the compute service manager 108 to determine when and how to execute the job. See also Cseri, [0048], where the task warehouse manager 150 schedules and manages the execution of queries on behalf of a client account. See also Cseri, [0028], where the compute service manager 108 manages clusters of computing services that provide compute resources (also referred to as “virtual warehouses”). See, more particularly, Cseri, [0046], where the job 154, including the multiple discrete tasks, is assigned to a particular virtual warehouse of the execution platform 110 for execution.
Although Cseri does not appear to explicitly state that the schedule is “predefined” as claimed, one of ordinary skill in the art would have found it obvious to modify Cseri to use predefined schedules with the motivation of simplifying scheduling operations and thus potentially requiring fewer resources for performing such scheduling operations); and
subsequent to executing the plurality of … queries, calculating an expected total execution runtime for all … queries allocated to the first data processing pipeline; comparing the expected total execution runtime to a time interval associated with a service level objective; and when the expected total execution runtime exceeds the time interval indicated in the service level objective, dynamically adding a new data processing pipeline configured to invoke a virtual data warehouse of the same size as the first virtual data warehouse (Cseri, [0087], where after a history of prior executions has been established (i.e., “subsequent to executing the plurality of queries”), the task warehouse manager analyzes the history to determine a size of a virtual warehouse for executing the task 502. If an execution time, e.g., a total runtime to complete a prior task (i.e., “expected total execution runtime”), is greater/less than a particular percentage of the task interval (i.e., “time interval”), the task warehouse manager increments a vote count for a larger/smaller size of a virtual warehouse to execute the task (respectively). Otherwise, the task warehouse manager increments a vote count of the current size of the virtual warehouse that is to execute the task. The task warehouse manager then selects a particular virtual warehouse from the task warehouse pool 402 to execute the task 502 based on a “winner” of the votes from the last X number of prior executions of tasks, e.g., by only considering virtual warehouses with the exact same size as required.
See Gawande, [0062] and [0074], where computing resource(s) may be selected to execute the first query based, at least in part, on a best performance outcome, e.g., a service level agreement (i.e., “indicated in a service level objective”), such as the query execution having execution time limitations, which are cost-defined limitations to execute the query).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Gawande and Cseri (hereinafter “Gawande as modified”) with the motivation of: enabling the automation of virtual warehouse management and increased flexibility via the use of a virtual warehouse, thereby removing the requirement for a user to specify particular virtual warehouse requirements to execute a set of tasks in a given job and thus reducing costs and optimizing query execution for tasks (Cseri, [0023]); utilizing an execution history of prior tasks to more intelligently understand virtual warehouse usage and performance metrics in order to optimize the execution of current and/or future tasks, thereby improving the performance of a computing system by reducing the computing resources (e.g., processor, memory cache) utilized to execute tasks (Cseri, [0023]); and allowing the execution platform to quickly deploy large amounts of computing resources when needed without being forced to continue paying for those computing resources when they are no longer needed (Cseri, [0063]).
Although Gawande as modified does not appear to explicitly teach that the type of information pertains to “contact” information, e.g., “contact grouping” queries, “contact” records, and “contact group” tables, but rather more generically teaches queries, records, and tables, the claimed invention does not distinguish over the prior art because the only differences between the claim limitations and the prior art’s disclosure are found in nonfunctional descriptive material that is not functionally involved in the steps recited. The selection of certain nodes/pipelines for executing queries, and the updating and storing of data in tables, would have been performed the same regardless of the specific data involved (i.e., contact-related information as claimed, generic data as disclosed by the prior art, or some other data). Thus, this descriptive material does not distinguish the claimed invention from the prior art in terms of patentability. See In re Gulack, 703 F.2d 1381, 1385, 217 USPQ 401, 404 (Fed. Cir. 1983); In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have referred to Gawande’s teachings in making the claimed invention, because such data does not functionally relate to the steps in the method claimed and because the subjective interpretation of the data does not patentably distinguish the claimed invention over the prior art.
Additionally, although Gawande as modified does not appear to explicitly state that the records are “for the entity”, Gawande discloses in [0037] that a data warehouse service may offer clients a variety of different data management services, such as sales records, marketing, management reporting, business process management, etc. Therefore, one of ordinary skill in the art would have been led by Gawande’s disclosure to maintain specific records “for the entity” with the motivation of grouping/organizing a client’s data storage according to a particular client’s needs (Gawande, [0037]).
Regarding claim 22: Claim 22 recites substantially the same claim limitations as claim 21, and is rejected for the same reasons.
Note that Gawande teaches A system comprising: a processor for executing computer-readable instructions; and a memory storage device storing instructions thereon, which, when executed by the processor, cause the system to perform operations comprising [the claimed steps] (Gawande, [0088] and [0100], where the disclosed methods may be implemented by a computer system that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors, where the processors are capable of executing instructions to perform the claimed steps).
Regarding claim 23: Claim 23 recites substantially the same claim limitations as claim 21, and is rejected for the same reasons.
Claims 2, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Gawande et al. (“Gawande”) (US 2018/0060394 A1), in view of Cseri et al. (“Cseri”) (US 2021/0342360 A1), in further view of Singh et al. (“Singh”) (US 2023/0342356 A1).
Regarding claim 2: Gawande as modified teaches The computer-implemented method of claim 21, but does not appear to explicitly teach wherein a contact grouping query is assigned a query classification of simple, when the contact grouping query does not include a SQL JOIN statement, and a contact grouping query is assigned a query classification of complex, when the contact grouping query does include a SQL JOIN statement.
Singh teaches wherein a contact grouping query is assigned a query classification of simple, when the contact grouping query does not include a SQL JOIN statement, and a contact grouping query is assigned a query classification of complex, when the contact grouping query does include a SQL JOIN statement (Singh, [0052], where an incoming query is classified based on its complexity, which correlates to the resource usage of a query. The system can maintain a separate lane for each category, e.g., a simple lane, a medium lane, and complex lanes, each of which has its share of system computer resources, where each of these lanes can be managed using different policies such that the performance goals for each lane can be met; e.g., for the simple lane 210, consisting of relatively short-duration queries, a fast response time may be important. Thus, queries in the simple lane 210 can be ordered based on their estimated execution time (i.e., queries classified as “simple”); and queries in the complex lane 214 benefit from a higher share of CPU resources and are allocated a larger share of CPU resources (i.e., queries classified as “complex”).
See also Singh, [0053], where the system assesses the complexity of queries, e.g., a query joining multiple small tables might have a high number of nodes and vertices in the directed acyclic graph representing an execution plan, but since the volume of data processed is small, the execution time or complexity of the query might be low. Similarly, a query which joins a few very large tables might have few nodes and edges in the DAG, but since the volume of data is high, the execution time or complexity of the query can be high.
See Gawande, [0024], where a query may be a SQL query, and queries may be formatted according to different query languages, including SQL (see, e.g., Gawande, [0073])).
Although Singh does not appear to explicitly state that the presence or absence of joins automatically classifies a query as “complex” or “simple”, Singh states that the presence of joins may render the query classified as “complex”. Therefore, one of ordinary skill in the art would have found it obvious to modify Singh such that the absence of joins in a query automatically renders it “simple” with the motivation of performing classification quickly.
Furthermore, although Singh does not appear to explicitly state that the type of query is a SQL statement (and thus detecting the presence/absence of a JOIN clause, which is SQL-specific syntax), one of ordinary skill in the art would have found it obvious to modify Singh to explicitly include Gawande’s SQL queries, which have predictably equivalent operating characteristics (i.e., a query analyzed for join operations), with the motivation of having the system be able to process a popular query language.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Gawande as modified and Singh. Gawande discloses in [0079-0080] that the system may consider various query characteristics for selecting a particular resource configuration to execute a query, including the execution plans for prior queries including the various types of operations selected by the plan to perform the prior queries (i.e., a “join” being a type of “operation”).
Therefore, one of ordinary skill in the art would have been led by Gawande’s disclosure to detect the type of operation, such as the “join” operations described by Singh, with the motivation of accurately predicting the complexity of a query in order to build an intelligent workload management system (Singh, [0052]); e.g., better estimation of run times and other metrics can be useful for improving schedulers for computing systems (Singh, [0022]) by intelligently scheduling incoming requests or queries to ensure optimal utilization of computer resources (Singh, [0052]).
Regarding claim 10: Claim 10 recites substantially the same claim limitations as claim 2, and is rejected for the same reasons.
Regarding claim 17: Claim 17 recites substantially the same claim limitations as claim 2, and is rejected for the same reasons.
Claims 5-6, 13-14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gawande et al. (“Gawande”) (US 2018/0060394 A1), in view of Cseri et al. (“Cseri”) (US 2021/0342360 A1), in further view of Salim et al. (“Salim”) (US 2023/0244687 A1).
Regarding claim 5: Gawande as modified teaches The computer-implemented method of claim 4, further comprising:
for a first contact grouping query, calculating [a] contact grouping query execution runtime for a predetermined number of prior contact grouping query executions of the first contact grouping query (Cseri, [0087], where after a history of prior executions has been established after at least an X number of prior executions of prior tasks, the task warehouse manager analyzes the history to determine an execution time (e.g., a total runtime to complete a prior task) being greater/less than a particular percentage of a task interval); and
updating a cache to indicate the average contact grouping query execution runtime for the predetermined number of prior contact grouping query executions of the first contact grouping query (Gawande, [0074], where a historical query model may be maintained that models the performance of queries with different characteristics based on different execution outcomes, e.g., time to complete, cost to complete, resources consumed to complete, etc. See Gawande, [0045], where query history, query execution logs, and other managed query service historical data may be maintained in an internal data store.
Although Gawande does not appear to explicitly state that a “cache” is utilized, Gawande states that an internal data store may maintain such data. Therefore, it would have been obvious to one of ordinary skill in the art to have modified Gawande to utilize caches with the motivation of quickly accessing such information (i.e., since caches provide local memory storage that can return information quickly)).
Gawande as modified does not appear to explicitly teach that the execution runtime for a query is an average query execution runtime.
Salim teaches that the execution runtime for a query is an average query execution runtime (Salim, [0071], where performance measures may indicate one or more processing times corresponding to each of the plurality of different events, which may indicate, for each emulated event of the emulated plurality of different events, a quantity of time taken by a virtual warehouse to complete the emulated event. Such values may be collected and averaged).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Gawande as modified and Salim (hereinafter “Gawande as modified”) with the motivation of better query execution performance, e.g., using numerous samples to calculate an estimated query execution time instead of relying on a few data points, as an average query execution time may be more representative of a particular type of job/task and thus a better predictor of query execution requirements than singular data points, since the system may better allocate resources to the task with better predictive information.
Regarding claim 6: Gawande as modified teaches The computer-implemented method of claim 5, further comprising:
calculating an expected total execution runtime, for all contact grouping queries allocated to a specific data pipeline in the plurality of data pipelines, using for the calculation the average contact grouping query execution runtime for each contact grouping query; comparing the expected total execution runtime for all contact grouping queries allocated to the specific data pipeline to a time interval indicated in a service level objective; and adding a new data pipeline configured to invoke a virtual data warehouse that is the same fixed size as the virtual data warehouse that the specific data pipeline is configured to invoke (Cseri, [0087], where after a history of prior executions has been established, the task warehouse manager analyzes the history to determine a size of a virtual warehouse for executing the task 502. If an execution time, e.g., a total runtime to complete a prior task (i.e., “expected total execution runtime”), is greater/less than a particular percentage of the task interval (i.e., “time interval”), the task warehouse manager increments a vote count for a larger/smaller size of a virtual warehouse to execute the task (respectively). Otherwise, the task warehouse manager increments a vote count of the current size of the virtual warehouse that is to execute the task. The task warehouse manager then selects a particular virtual warehouse from the task warehouse pool 402 to execute the task 502 based on a “winner” of the votes from the last X number of prior executions of tasks, e.g., by only considering virtual warehouses with the exact same size as required.
See Gawande, [0062] and [0074], where computing resource(s) may be selected to execute the first query based, at least in part, on a best performance outcome, e.g., a service level agreement (i.e., “indicated in a service level objective”), i.e., the query execution having execution time limitations, which is a cost-defined limitation to execute the query, including service level agreements.
See Salim, [0071], with regards to the “average” query execution runtime, where performance measures may indicate one or more processing times corresponding to each of the plurality of different events, which may indicate, for each emulated event of the emulated plurality of different events, a quantity of time taken by a virtual warehouse to complete the emulated event. Such values may be collected and averaged).
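For illustration only, and not as part of the record, the claimed comparison of an expected total runtime against an SLO time interval, together with the vote-based warehouse sizing that Cseri [0087] describes, can be sketched as follows. All function names are hypothetical, and the single `threshold` percentage is an assumption: Cseri refers only to “a particular percentage of the task interval” without specifying its value.

```python
from collections import Counter

def expected_total_runtime(avg_runtimes):
    """Sum the average execution runtime of each contact grouping
    query allocated to a pipeline (the claimed 'expected total
    execution runtime')."""
    return sum(avg_runtimes)

def needs_new_pipeline(avg_runtimes, slo_interval):
    """Claim-6 comparison: if the expected total runtime for all
    queries on a pipeline exceeds the SLO time interval, a new
    pipeline invoking a same-fixed-size warehouse would be added."""
    return expected_total_runtime(avg_runtimes) > slo_interval

def vote_for_warehouse_size(history, task_interval, current_size,
                            threshold=0.8):
    """Cseri-style sizing vote: for each prior execution, vote to
    grow the warehouse if its runtime exceeded a percentage of the
    task interval, to shrink it if the runtime fell below the
    complementary percentage, and otherwise to keep the current
    size; the 'winner' of the votes is selected."""
    votes = Counter()
    for runtime in history:
        if runtime > threshold * task_interval:
            votes["larger"] += 1
        elif runtime < (1 - threshold) * task_interval:
            votes["smaller"] += 1
        else:
            votes[current_size] += 1
    return votes.most_common(1)[0][0]
```

Under these assumptions, prior runtimes consistently near the task interval yield a vote for a larger warehouse, consistently short runtimes yield a vote for a smaller one, and intermediate runtimes retain the current size.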
Regarding claim 13: Claim 13 recites substantially the same claim limitations as claim 5, and is rejected for the same reasons.
Regarding claim 14: Claim 14 recites substantially the same claim limitations as claim 6, and is rejected for the same reasons.
Regarding claim 20: Claim 20 recites substantially the same claim limitations as claim 5, and is rejected for the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See the enclosed 892 form. Moon et al. (US 2013/0166750 A1), Morris et al. (US 2008/0162417 A1), and Shukla et al. (US 2019/0342379 A1) are cited to show that balancing workloads, e.g., query executions, in the context of service level agreements, is well-known in the prior art. Applicant should consider this prior art when amending the claims so as to define over the art of record.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRENE BAKER whose telephone number is (408)918-7601. The examiner can normally be reached M-F 8-5PM PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NEVEEN ABEL-JALIL can be reached at (571)270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IRENE BAKER/Primary Examiner, Art Unit 2152
28 February 2026
1 BSG Tech LLC v. BuySeasons, Inc., 899 F.3d 1281 (Fed. Cir. 2018) at pp. 17-18 (“As a matter of law, narrowing or reformulating an abstract idea does not add ‘significantly more’ to it”).
2 See, e.g., previously cited prior art: Gawande et al. at [0062] and [0074]; and Singh et al. at [0120]. See also, e.g., Moon et al. (US 2013/0166750 A1), which pertains to workload scheduling under service level agreements; Morris et al. (US 2008/0162417 A1) at [0022] (“Under some known CLSM-type systems, incoming queries are split into workload groups, each workload group having respective service level goals (SLGs)…”); and Shukla et al. (US 2019/0342379 A1), which pertains to evaluating workloads, e.g., executing queries, and allocating those workloads in accordance with a service level agreement.
3 See, e.g., previously cited prior art: Gawande et al. at [0086] (“…launch, create, instantiate, or configure new resources according to the configuration of the determined computing resources….”); Cseri et al. at [0075] (“…the number of virtual warehouses in a particular execution platform is dynamic, such that new virtual warehouses are created when additional processing and/or caching resources are needed”); Singh et al. at [0120] (“Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment”); Salim et al. at [0064] (“…another virtual warehouse configuration may cause a virtual warehouse to dynamically add or remove clusters and/or nodes based on its workload”).
4 Note that the cited prior art pertains to automating workloads, e.g., automating what would otherwise have been a manual process of users having to select virtual warehouses, etc. The claimed steps are concerned with (automatically) determining the selection of a data pipeline, a determination typically performed mentally by people; therefore, the claims recite an abstract idea.
5 See, e.g., Alice Corp. v. CLS Bank Int'l, 573 U.S. __, 134 S. Ct. 2347 (2014) (the court noting that the use of a computer to obtain data, adjust account balances—i.e., a form of updating data—and issue automated instructions, were computing functions that were well-understood, routine, conventional activities previously known to the industry).
6 See Alice at p. 15: “Using a computer to create and maintain ‘shadow’ accounts amounts to electronic recordkeeping—one of the most basic functions of a computer…. The same is true with respect to the use of a computer to obtain data, adjust account balances, and issue automated instructions; all of these computer functions are ‘well-understood, routine, conventional activit[ies]’ previously known to the industry…. In short, each step does not do more than require a generic computer to perform generic computer functions”.
7 This language is paraphrased in order to more clearly explain how the claims were found to be directed to an abstract idea. Because the claims were worded in a manner that avoided words such as “analyzing” or “determining”, the 101 rejection rephrased the claimed concepts to more clearly convey the intent of the claimed invention, which is focused on the analysis/determination steps despite the absence of such explicit words.