DETAILED ACTION
In response to communication filed on 17 February 2026, claims 1-2, 8, 11-12 and 18 are amended. Claims 3 and 13 are canceled. Claims 1-2, 4-12 and 14-20 are pending.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see “Section 103 Rejections” filed 24 December 2025, have been carefully considered but are not persuasive.
APPLICANT’S ARGUMENT: Applicant argues that canceled Claim 3 has been merged into Claim 1, which in its present form recites "multiple seasonality features" that Tootaghaj lacks, and that Tootaghaj is therefore mischaracterized with respect to merged Claim 3.
EXAMINER’S RESPONSE: Examiner has carefully considered the argument but respectfully disagrees. The current claim language does not clarify what the seasonality features specifically are. To a person of ordinary skill in the art, based on the broadest reasonable interpretation in light of the specification, seasonality features may reasonably be interpreted as time-based information, which is taught by Tootaghaj in [0090] and [0097]-[0098]. Also, Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Therefore, the above arguments are not considered to be persuasive.
APPLICANT’S ARGUMENT: Applicant also argues that Folkert's "cost" is improperly cited as the recited "benefit" of Claims 1 and 7, and that Folkert is therefore mischaracterized.
EXAMINER’S RESPONSE: Examiner has carefully considered the argument but respectfully disagrees. The current claim language does not clarify what the benefit specifically is. To a person of ordinary skill in the art, based on the broadest reasonable interpretation in light of the specification, the benefit may reasonably be interpreted as the lowest cost of refresh, which is taught by Folkert in [0072]. Also, Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Therefore, the above arguments are not considered to be persuasive.
APPLICANT’S ARGUMENT: Regarding claim 4, Applicant further argues that the Office action states "an amount of neurons that is based on...Litvinova...[0063]...64 neurons...128 neurons", that "64" and "128" are arbitrary (i.e., not based on anything), and that Litvinova is therefore mischaracterized.
EXAMINER’S RESPONSE: Examiner has carefully considered the argument but respectfully disagrees. Litvinova clarifies that the number of neurons is based on the specific layer requirements, as explained in [0063]. It is not arbitrary as argued above. Litvinova explains that “Four layers are provided in this example: an input layer, two hidden layers, and an output layer. Layer 1, in this embodiment, is a dense layer, with 64 neurons, ReLU activation, and no dropout. Layer 2, in this embodiment, is a dense layer, with 64 neurons, Sigmoid activation, and 0.4 dropout. Layer 3, in this embodiment, is a dense layer, with 128 neurons, Tanh activation, and 0.5 dropout. Finally, the output layer, in this embodiment, has 3 neurons”. As a result, the above arguments are not considered to be persuasive.
APPLICANT’S ARGUMENT: Regarding claim 4, the Office action quotes "Tootaghaj, [0041]...support vector regression (SVR) models", and Claim 4 recites "regression model". The Office action says "proposed combination of Folkert, Tootaghaj and Horvitz to yield the predictable results of...classification...Litvinova, [0051]...classification". The cited "classification" would not have been a motivation to modify Tootaghaj's cited "regression...models". Applicant concludes that the combination rationale for Claim 4 is invalid under KSR because there is no motivation to combine Litvinova.
EXAMINER’S RESPONSE: Examiner has carefully considered the argument but respectfully disagrees. According to MPEP 2144(IV), “The reason or motivation to modify the reference may often suggest what the inventor has done, but for a different purpose or to solve a different problem. It is not necessary that the prior art suggest the combination to achieve the same advantage or result discovered by applicant. See, e.g., In re Kahn, 441 F.3d 977, 987, 78 USPQ2d 1329, 1336 (Fed. Cir. 2006) (motivation question arises in the context of the general problem confronting the inventor rather than the specific problem solved by the invention)”. Also, according to MPEP 2141.01(a), “A reference is analogous art to the claimed invention if: (1) the reference is from the same field of endeavor as the claimed invention (even if it addresses a different problem); or (2) the reference is reasonably pertinent to the problem faced by the inventor (even if it is not in the same field of endeavor as the claimed invention). Note that "same field of endeavor" and "reasonably pertinent" are two separate tests for establishing analogous art; it is not necessary for a reference to fulfill both tests in order to qualify as analogous art. See Bigio, 381 F.3d at 1325, 72 USPQ2d at 1212. The examiner must determine whether a reference is analogous art to the claimed invention when analyzing the obviousness of the subject matter under examination. When more than one prior art reference is used as the basis of an obviousness rejection, it is not required that the references be analogous art to each other”. Here, Tootaghaj is directed to regression models, and such models are applied for the purpose of classification as well. As a result, the above arguments are not considered to be persuasive.
All the other arguments regarding claims 2 and 8 are related to newly amended limitations and are addressed in the rejection below.
Claim Interpretation
Claims 5 and 15 recite “the first hidden layer using a logistic sigmoid activation function; the second hidden layer using a Tanh activation function”. The claims do not recite functionality, but instead recite what the logistic sigmoid and Tanh activation functions are used for. Examiner suggests amending the claims to recite the functionality performed by the claimed method, instead of reciting what the claim elements are used for.
Claims 6 and 16 recite “a second predicted count of query rewrites that will use the second materialized view during said future time interval”. The claims do not recite functionality, but instead recite what the second materialized view is used for. Examiner suggests amending the claims to recite the functionality performed by the claimed method, instead of reciting what the claim elements are used for.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 9-11 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Folkert et al. (US 2005/0234945 A1, hereinafter “Folkert”) in view of Tootaghaj et al. (US 2021/0184942 A1, hereinafter “Tootaghaj”) further in view of Horvitz et al. (US 2005/0033711 A1, hereinafter “Horvitz”).
Regarding claim 1, Folkert teaches
A method comprising: (see Folkert, [0043] “A method and apparatus”).
tracking rewrite activity for a materialized view; (see Folkert, [0020] “The MV logs track changes made to the base table that are relevant to the MV. To find which rows to apply the computed changes, the changes that need to be applied to the MV are joined to the original MV”).
…. based on the rewrite activity,… (see Folkert, [0020] “The MV logs track changes made to the base table that are relevant to the MV. To find which rows to apply the computed changes, the changes that need to be applied to the MV are joined to the original MV”) for the materialized view; (see Folkert, [0020] “The MV logs track changes made to the base table that are relevant to the MV. To find which rows to apply the computed changes, the changes that need to be applied to the MV are joined to the original MV”).
… rewritten to reference the materialized view; (see Folkert, [0077] “where any query may be rewritten against any base table, any fresh MV, or any MV”).
estimating, based on summation that is based on cost of refresh… (see Folkert, [0076] “the cardinality of the MVs may be used estimating the cost of a refresh and therefore in determining which MV to use for refreshing other MVs, which may be of a higher level of aggregation… Without rewrite, all of the MVs of FIG. 3 would be using the same base data instead of a significantly smaller MV, and without scheduling, the benefits of rewrite could be much less”) an estimated benefit of (see Folkert, [0076] “the cardinality of the MVs may be used estimating the cost of a refresh and therefore in determining which MV to use for refreshing other MVs, which may be of a higher level of aggregation… Without rewrite, all of the MVs of FIG. 3 would be using the same base data instead of a significantly smaller MV, and without scheduling, the benefits of rewrite could be much less”) having the materialized view not be stale during the future time interval; and (see Folkert, [0083] “an MV will be considered fresh, available for rewrite, and in enforced mode if (1) the MV is currently fresh and allows query rewrites in enforced mode, or (2) the MV is in the Refresh List and is currently stale, but will allow rewrites in enforced mode after being refreshed”; [0061] “The ordering of the refresh of the MVs determines which MVs may be refreshed using other MVs, because the refresh of a given MV can be performed by a rewrite from all of the MVs that are fresh at the time that the given MV is being refreshed”).
scheduling the materialized view for a refresh based on the estimated benefit (see Folkert, [0072] “for each of a set of refresh schedules, the MV rewrite system is used for estimating the cost of using rewrites against other MVs. The cost of refreshing a particular MV is computed using one or more MVs selected from a set of MVs that are usable for refreshing that particular MV. The computation costs of refreshing a particular MV is repeated for each of the MVs in the set of MVs. Then a refresh schedule that has the lowest cost is chosen from the set of refresh schedules”).
Folkert does not explicitly teach training, based on the rewrite activity, a regression model for the materialized view; including the regression model accepting an input that contains multiple seasonality features; predicting, by the regression model, a predicted count of multiple queries during a future time interval that will be rewritten to reference the materialized view; the predicted count of multiple query rewrites.
However, Tootaghaj discloses a prediction model and teaches
training, based on historical observation… a regression model for queries… (see Tootaghaj, [0041] “support vector regression (SVR) models and deep learning models (e.g., deep neural networks (DNNs)) can be trained to make predictions based on historical observation”; [0064] “a number of transactions, queries or requests that has been predicted by the machine-learning prediction model will be received at the future time”) including the regression model accepting an input that contains multiple seasonality features; (see Tootaghaj, [0085] “Depending upon the particular implementation, the machine-learning prediction model may be based on an SVR model or a deep learning model”; [0090] “a window of time-series workload information 610 represents an input parameter to or is otherwise made available to the machine-learning prediction model training processing”; [0097]-[0098] “Algorithm #2 - SVR Training Procedure… For purposes of completeness, a non-limiting pseudo code example of a SVR training procedure is presented below: 1. Input: Time series workload information (X). 2. Output: SVR Model to predict future d-second workload at time t during a window size of W”).
predicting, by the regression model, a predicted workload information (see Tootaghaj, [0041] “support vector regression (SVR) models and deep learning models (e.g., deep neural networks (DNNs)) can be trained to make predictions based on historical observation”; [0064] “a number of transactions, queries or requests that has been predicted by the machine-learning prediction model will be received at the future time”; [0038] “the workload prediction process 233 may include one or more of a target performance metric, a previous number of replicas in use at prior times and past values of the target performance metric at the prior times”; [0075] “more accurate predictions can be achieved by performing the prediction process more frequently”) during a future time interval (see Tootaghaj, [0077] “a workload prediction at a future time t+d may be made by training an SVR model using the observed/monitored workload information during a window size of [t-W, t]”; [0076] “the default window size (W) is between approximately 10 and 90 seconds and the prediction time interval (d) is between approximately 1 and 15 seconds”; [0092] “time t is used as training data and the future workload (e.g., QPS), predicted at time t+d is used as the test data”).
the predicted workload information (see Tootaghaj, [0041] “support vector regression (SVR) models and deep learning models (e.g., deep neural networks (DNNs)) can be trained to make predictions based on historical observation”; [0064] “a number of transactions, queries or requests that has been predicted by the machine-learning prediction model will be received at the future time”; [0038] “the workload prediction process 233 may include one or more of a target performance metric, a previous number of replicas in use at prior times and past values of the target performance metric at the prior times”; [0075] “more accurate predictions can be achieved by performing the prediction process more frequently”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of training a regression model, predicting by a regression model during a future time interval, a support vector machine, an input into a regression model, training examples and a neural network as being disclosed and taught by Tootaghaj in the system taught by Folkert to yield the predictable results of improving response times and the efficiency of resource usage by leveraging predictions (see Tootaghaj, [0020] “provide a serverless autoscaling approach that improves response times and the efficiency of resource usage by leveraging workload prediction and the novel control-theoretic resource orchestration”).
The proposed combination of Folkert and Tootaghaj does not explicitly teach count of multiple queries that will be rewritten to reference the materialized view; the predicted count of multiple query rewrites.
However, Horvitz discloses query rewrites and teaches
count of multiple queries… that will be rewritten (see Horvitz, [0055] “Given a query, query rewrites are first sorted into a list by single-query models. Then, an ensemble of Bayesian models for different numbers of rewrites are employed in conjunction with a utility model to select the best number of rewrites to issue to a search engine (or engines)”).
count of multiple query rewrites, (see Horvitz, [0055] “Given a query, query rewrites are first sorted into a list by single-query models. Then, an ensemble of Bayesian models for different numbers of rewrites are employed in conjunction with a utility model to select the best number of rewrites to issue to a search engine (or engines)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of query rewrites, decision tree and training examples as being disclosed and taught by Horvitz in the system taught by the proposed combination of Folkert and Tootaghaj to yield accurate predictions (see Horvitz, [0014] “Beyond extending probabilistic models of accuracy and expected value analysis, question-answering systems in general can be refined in several ways. Refinements include introducing new variants of query rewrites and modifying methods for combining search results into candidate answers”).
Claim 11 incorporates substantively all the limitations of claim 1 in a computer-readable medium form (see Folkert, [0133] “Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 1204 for execution”) and is rejected under the same rationale.
Regarding claim 9, the proposed combination of Folkert, Tootaghaj and Horvitz teaches
further comprising: identifying an object in the materialized view that is not stale; refreshing the materialized view without refreshing, in the materialized view, the object (see Folkert, [0077] “The best refresh expression is the refresh expression with the lowest estimated cost of any of the refresh methods, where any query may be rewritten against any base table, any fresh MV, or any MV in the Refresh List excluding itself”; [0096] “The refresh graph does not include base tables and fresh MVs 402, because they are either base tables or else are fresh, and therefore do to need to be refreshed”). The motivation for the proposed combination is maintained.
Claim 19 incorporates substantively all the limitations of claim 9 in a computer-readable medium form and is rejected under the same rationale.
Regarding claim 10, the proposed combination of Folkert, Tootaghaj and Horvitz teaches
further comprising detecting a particular operation that makes the materialized view stale is an operation selected from a group consisting of dropping a partition of a table, adding a partition to a table, merging two partitions of a table into one partition, and splitting a partition of a table into two partitions (see Folkert, [0014] “whenever the base tables of an MV are updated, the MV is marked as stale until the MV is refreshed”; [0024] “if Partition Maintenance Operations (PMOPs) (e.g., adding, dropping, splitting, and merging partitions), were performed on the base tables”). The motivation for the proposed combination is maintained.
Claim 20 incorporates substantively all the limitations of claim 10 in a computer-readable medium form and is rejected under the same rationale.
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Folkert, Tootaghaj and Horvitz in view of Anderson (US 11,836,319 B2, hereinafter “Anderson”).
Regarding claim 2, the proposed combination of Folkert, Tootaghaj and Horvitz teaches
wherein the regression model is selected from a group consisting of (see Tootaghaj, [0041] “support vector regression (SVR) models and deep learning models (e.g., deep neural networks (DNNs)) can be trained to make predictions based on historical observations”).
The proposed combination of Folkert, Tootaghaj and Horvitz does not explicitly teach a three-nearest neighbors and a naïve Bayes.
However, Anderson discloses a classifier and teaches
a three-nearest neighbors and (see Anderson, [col 9 lines 33-34] “nearest neighbors, such as K nearest neighbors (KNN), is used for machine learning… a variety of values can be used for K or the number of neighbors used for comparison in order to make a classification. For example, the values for K or the number of neighbors can be at least 1, 2, 3”; [col 19 lines 29-31] “wherein a value of K is in an odd number in a range of 3 to 7 nearest neighbors”) a naive Bayes (see Anderson, [col 11 lines 13-14] “that can be used include logistic regression, naive Bayes”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of three-nearest neighbors and naïve Bayes as being disclosed and taught by Anderson in the system taught by the proposed combination of Folkert, Tootaghaj and Horvitz to yield the predictable results of accurately applying a classifier to classify information (see Anderson, [col 8 line 63 – col 9 line 4] “In example, classifiers or classifier models can include machine learning models, such as using artificial neural networks, nearest neighbors, or a combination thereof. In practice, classifiers or classifier models, such as nearest neighbors (for example, K nearest neighbors) or artificial networks (such as deep neural network), can be used to classify a waveform (such as an emission waveform or a transformed waveform) or features extracted therefrom”; [col 11 lines 13-14] “Examples of classifiers or classifier models that can be used include logistic regression, naive Bayes”).
Claims 4-5 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Folkert, Tootaghaj and Horvitz in view of Litvinova (US 2020/0268252 A1, hereinafter “Litvinova”).
Regarding claim 4, the proposed combination of Folkert, Tootaghaj and Horvitz teaches
wherein: the rewrite activity (see Folkert, [0020] “The MV logs track changes made to the base table that are relevant to the MV. To find which rows to apply the computed changes, the changes that need to be applied to the MV are joined to the original MV”) contains a plurality of training examples; (see Tootaghaj, [0068] “machine-learning prediction model is trained… an RBF SVR kernel is used to train the machine-learning prediction model based on time series workload information (e.g., historically observed workload information or past workload information for a window of time up to and including the current time)”).
the regression model is a neural network… (see Tootaghaj, [0041] “a number of machine-learning techniques including, but not limited to, support vector regression (SVR) models and deep learning models (e.g., deep neural networks (DNNs))”) a count of the training examples (see Horvitz, [0030] “may be provided that enables users to assess or select various parameters that influence the utility model 140”).
The proposed combination of Folkert, Tootaghaj and Horvitz does not explicitly teach a neural network that contains a hidden layer that contains an amount of neurons that is based on a count of the training examples.
However, Litvinova discloses a machine-learning model and teaches
neural network that contains a hidden layer that contains an amount of neurons that is based on specific layer requirements (see Litvinova, [0044] “facilitate efficient implementation of neural network functionality”; [0063] “Four layers are provided in this example: an input layer, two hidden layers, and an output layer… Layer 2, in this embodiment, is a dense layer, with 64 neurons, Sigmoid activation, and 0.4 dropout… Layer 3, in this embodiment, is a dense layer, with 128 neurons, Tanh activation, and 0.5 dropout”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of a hidden layer that contains neurons, two hidden layers, a first hidden layer using a logistic sigmoid activation function and a second hidden layer using a Tanh activation function as being disclosed and taught by Litvinova in the system taught by the proposed combination of Folkert, Tootaghaj and Horvitz to yield the predictable results of effectively inputting data to perform appropriate classification and characterization (see Litvinova, [0051] “Some combinations of data that may be acquired optically and noninvasively by fluorescence spectroscopy device 100 have been determined to be particularly effective as inputs to classification and characterization approaches relying upon machine learning techniques, for a wide variety of clinical, medical or industrial applications”).
Claim 14 incorporates substantively all the limitations of claim 4 in a computer-readable medium form and is rejected under the same rationale.
Regarding claim 5, the proposed combination of Folkert, Tootaghaj and Horvitz teaches
wherein: the regression model is a neural network… (see Tootaghaj, [0041] “a number of machine-learning techniques including, but not limited to, support vector regression (SVR) models and deep learning models (e.g., deep neural networks (DNNs))”)
the method further comprises: (see Folkert, [0043] “A method and apparatus”).
The proposed combination of Folkert, Tootaghaj and Horvitz does not explicitly teach a neural network that contains exactly two hidden layers that are a first hidden layer and a second hidden layer; the first hidden layer using a logistic sigmoid activation function; the second hidden layer using a Tanh activation function.
However, Litvinova discloses a machine-learning model and teaches
that contains exactly two hidden layers that are a first hidden layer and a second hidden layer; (see Litvinova, [0044] “facilitate efficient implementation of neural network functionality”; [0063] “Four layers are provided in this example: an input layer, two hidden layers, and an output layer… Layer 2, in this embodiment, is a dense layer, with 64 neurons, Sigmoid activation, and 0.4 dropout… Layer 3, in this embodiment, is a dense layer, with 128 neurons, Tanh activation, and 0.5 dropout”).
the first hidden layer using a logistic sigmoid activation function; the second hidden layer using a Tanh activation function (see Litvinova, [0044] “facilitate efficient implementation of neural network functionality”; [0063] “Four layers are provided in this example: an input layer, two hidden layers, and an output layer… Layer 2, in this embodiment, is a dense layer, with 64 neurons, Sigmoid activation, and 0.4 dropout… Layer 3, in this embodiment, is a dense layer, with 128 neurons, Tanh activation, and 0.5 dropout”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of a hidden layer that contains neurons, two hidden layers, a first hidden layer using a logistic sigmoid activation function and a second hidden layer using a Tanh activation function as being disclosed and taught by Litvinova in the system taught by the proposed combination of Folkert, Tootaghaj and Horvitz to yield the predictable results of effectively inputting data to perform appropriate classification and characterization (see Litvinova, [0051] “Some combinations of data that may be acquired optically and noninvasively by fluorescence spectroscopy device 100 have been determined to be particularly effective as inputs to classification and characterization approaches relying upon machine learning techniques, for a wide variety of clinical, medical or industrial applications”).
Claim 15 incorporates substantively all the limitations of claim 5 in a computer-readable medium form and is rejected under the same rationale.
Claims 6-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Folkert, Tootaghaj and Horvitz in view of Fuchie et al. (US 11,190,744 B2, hereinafter “Fuchie”).
Regarding claim 6, the proposed combination of Folkert, Tootaghaj and Horvitz teaches
wherein: the predicted workload information (see Tootaghaj, [0041] “support vector regression (SVR) models and deep learning models (e.g., deep neural networks (DNNs)) can be trained to make predictions based on historical observation”; [0064] “a number of transactions, queries or requests that has been predicted by the machine-learning prediction model will be received at the future time”; [0038] “the workload prediction process 233 may include one or more of a target performance metric, a previous number of replicas in use at prior times and past values of the target performance metric at the prior times”; [0075] “more accurate predictions can be achieved by performing the prediction process more frequently”) count of query rewrites (see Horvitz, [0055] “Given a query, query rewrites are first sorted into a list by single-query models. Then, an ensemble of Bayesian models for different numbers of rewrites are employed in conjunction with a utility model to select the best number of rewrites to issue to a search engine (or engines)”) is a first predicted (see Tootaghaj, [0075] “more accurate predictions can be achieved by performing the prediction process more frequently” – there are plurality of predictions) count of query rewrites; (see Horvitz, [0055] “Given a query, query rewrites are first sorted into a list by single-query models. Then, an ensemble of Bayesian models for different numbers of rewrites are employed in conjunction with a utility model to select the best number of rewrites to issue to a search engine (or engines)”).
the method further comprises: (see Folkert, [0043] “A method and apparatus”).
a) predicting, by a second regression model (see Tootaghaj, [0041] “support vector regression (SVR) models and deep learning models (e.g., deep neural networks (DNNs)) can be trained to make predictions based on historical observation” – there are plurality of regression models) for a second materialized view, (see Folkert, [0090] “Refresh List 404, includes MVs A, B, C, D, and E” – includes plurality of materialized views) a second predicted (see Tootaghaj, [0075] “more accurate predictions can be achieved by performing the prediction process more frequently” – there are plurality of predictions) count of query rewrites that will use (see Horvitz, [0055] “Given a query, query rewrites are first sorted into a list by single-query models. Then, an ensemble of Bayesian models for different numbers of rewrites are employed in conjunction with a utility model to select the best number of rewrites to issue to a search engine (or engines)”) the second materialized view (see Folkert, [0090] “Refresh List 404, includes MVs A, B, C, D, and E” – includes plurality of materialized views) during said future time interval; (see Tootaghaj, [0077] “a workload prediction at a future time t+d may be made by training an SVR model using the observed/monitored workload information during a window size of [t-W, t]”; [0076] “the default window size (W) is between approximately 10 and 90 seconds and the prediction time interval (d) is between approximately 1 and 15 seconds”; [0092] “time t is used as training data and the future workload (e.g., QPS), predicted at time t+d is used as the test data”).
b) detecting (see Folkert, [0062] “the schedule of the order in which to refresh the MVs is determined”) the first predicted (see Tootaghaj, [0075] “more accurate predictions can be achieved by performing the prediction process more frequently” – there are plurality of predictions) count of query rewrites… (see Horvitz, [0055] “Given a query, query rewrites are first sorted into a list by single-query models. Then, an ensemble of Bayesian models for different numbers of rewrites are employed in conjunction with a utility model to select the best number of rewrites to issue to a search engine (or engines)”) the second predicted (see Tootaghaj, [0075] “more accurate predictions can be achieved by performing the prediction process more frequently” – there are plurality of predictions) count of query rewrites; (see Horvitz, [0055] “Given a query, query rewrites are first sorted into a list by single-query models. Then, an ensemble of Bayesian models for different numbers of rewrites are employed in conjunction with a utility model to select the best number of rewrites to issue to a search engine (or engines)”).
said scheduling is based on said detecting (see Folkert, [0062] “the schedule of the order in which to refresh the MVs is determined”).
The proposed combination of Folkert, Tootaghaj and Horvitz does not explicitly teach the first predicted count of query rewrites exceeds the second predicted count of query rewrites.
However, Fuchie discloses prediction values and teaches
first value exceeds second value (see Fuchie, [claim 3] “set the prediction residual code amount of the corresponding candidate prediction mode to a first value… set the prediction residual code amount of the corresponding candidate prediction mode to a second value… the first value is greater than the second value”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of a specific value exceeding another value as disclosed and taught by Fuchie in the system taught by the proposed combination of Folkert, Tootaghaj and Horvitz to yield the predictable results of effectively comparing values between two prediction modes to select an appropriate prediction mode (see Fuchie, [col 27 lines 48-54] “However, the orthogonal direction prediction exhibits a smaller cost value than the DC prediction in the sum of the prediction error code amount and the mode code amount, that is, in comparison of the cost between two prediction modes. Therefore, the orthogonal direction prediction may be selected as a prediction mode of the intra-prediction in this case”).
Claim 16 incorporates substantively all the limitations of claim 6 in a computer-readable medium form and is rejected under the same rationale.
Regarding claim 7, the proposed combination of Folkert, Tootaghaj and Horvitz teaches
wherein: the estimated benefit is a first estimated benefit; (see Folkert, [0076] “the cardinality of the MVs may be used estimating the cost of a refresh and therefore in determining which MV to use for refreshing other MVs, which may be of a higher level of aggregation… Without rewrite, all of the MVs of FIG. 3 would be using the same base data instead of a significantly smaller MV, and without scheduling, the benefits of rewrite could be much less”).
the method further comprises: (see Folkert, [0043] “A method and apparatus”).
a) estimating, (see Folkert, [0076] “the cardinality of the MVs may be used estimating the cost of a refresh and therefore in determining which MV to use for refreshing other MVs, which may be of a higher level of aggregation… Without rewrite, all of the MVs of FIG. 3 would be using the same base data instead of a significantly smaller MV, and without scheduling, the benefits of rewrite could be much less”; [0046] “the estimated cost of refreshing each of a set of MVs” – estimated costs are determined for plurality of MVs) based on a prediction by a second regression model (see Tootaghaj, [0041] “support vector regression (SVR) models and deep learning models (e.g., deep neural networks (DNNs)) can be trained to make predictions based on historical observation”; [0038] “the workload prediction process 233 may include one or more of a target performance metric, a previous number of replicas in use at prior times and past values of the target performance metric at the prior times”; [0075] “more accurate predictions can be achieved by performing the prediction process more frequently” – there are plurality of predictions and models) for a second materialized view, (see Folkert, [0090] “Refresh List 404, includes MVs A, B, C, D, and E” – includes plurality of materialized views) a second estimated benefit of (see Folkert, [0076] “the cardinality of the MVs may be used estimating the cost of a refresh and therefore in determining which MV to use for refreshing other MVs, which may be of a higher level of aggregation… Without rewrite, all of the MVs of FIG. 
3 would be using the same base data instead of a significantly smaller MV, and without scheduling, the benefits of rewrite could be much less”; [0046] “the estimated cost of refreshing each of a set of MVs” – estimated costs are determined for plurality of MVs) having the second materialized view (see Folkert, [0090] “Refresh List 404, includes MVs A, B, C, D, and E” – includes plurality of materialized views) not be stale during said future time interval; (see Folkert, [0083] “an MV will be considered fresh, available for rewrite, and in enforced mode if (1) the MV is currently fresh and allows query rewrites in enforced mode, or (2) the MV is in the Refresh List and is currently stale, but will allow rewrites in enforced mode after being refreshed”; [0061] “The ordering of the refresh of the MVs determines which MVs may be refreshed using other MVs, because the refresh of a given MV can be performed by a rewrite from all of the MVs that are fresh at the time that the given MV is being refreshed”).
b) detecting (see Folkert, [0062] “the schedule of the order in which to refresh the MVs is determined”) the first estimated benefit… (see Folkert, [0076] “the cardinality of the MVs may be used estimating the cost of a refresh and therefore in determining which MV to use for refreshing other MVs, which may be of a higher level of aggregation… Without rewrite, all of the MVs of FIG. 3 would be using the same base data instead of a significantly smaller MV, and without scheduling, the benefits of rewrite could be much less”) the second estimated benefit; (see Folkert, [0076] “the cardinality of the MVs may be used estimating the cost of a refresh and therefore in determining which MV to use for refreshing other MVs, which may be of a higher level of aggregation… Without rewrite, all of the MVs of FIG. 3 would be using the same base data instead of a significantly smaller MV, and without scheduling, the benefits of rewrite could be much less”; [0046] “the estimated cost of refreshing each of a set of MVs” – estimated costs are determined for plurality of MVs).
said scheduling is based on said detecting (see Folkert, [0062] “the schedule of the order in which to refresh the MVs is determined”).
The proposed combination of Folkert, Tootaghaj and Horvitz does not explicitly teach the first estimated benefit exceeds the second estimated benefit.
However, Fuchie discloses prediction values and teaches
the first estimated benefit exceeds the second estimated benefit (see Fuchie, [claim 3] “set the prediction residual code amount of the corresponding candidate prediction mode to a first value… set the prediction residual code amount of the corresponding candidate prediction mode to a second value… the first value is greater than the second value”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of a specific value exceeding another value as disclosed and taught by Fuchie in the system taught by the proposed combination of Folkert, Tootaghaj and Horvitz to yield the predictable results of effectively comparing values between two prediction modes to select an appropriate prediction mode (see Fuchie, [col 27 lines 48-54] “However, the orthogonal direction prediction exhibits a smaller cost value than the DC prediction in the sum of the prediction error code amount and the mode code amount, that is, in comparison of the cost between two prediction modes. Therefore, the orthogonal direction prediction may be selected as a prediction mode of the intra-prediction in this case”).
Claim 17 incorporates substantively all the limitations of claim 7 in a computer-readable medium form and is rejected under the same rationale.
Regarding claim 8, the proposed combination of Folkert, Tootaghaj, Horvitz and Fuchie teaches
wherein: the second materialized view is stale; (see Folkert, [0090] “related to a refresh schedule for tables of schema 400. At the time of the refresh, the tables in a schema 400 may be divided into three groups: the base tables and fresh MVs 402, the Refresh List 404, and other stale materialized views 406, which are not being refreshed… includes MVs A, B, C, D, and E, base tables and fresh MVs 402 includes base tables and fresh MVs T, S, and V, and other stale MVs 106 includes stale MVs Y and Z”).
said scheduling comprises deciding not to refresh the second materialized view (see Folkert, [0090] “related to a refresh schedule for tables of schema 400. At the time of the refresh, the tables in a schema 400 may be divided into three groups: the base tables and fresh MVs 402, the Refresh List 404, and other stale materialized views 406, which are not being refreshed… includes MVs A, B, C, D, and E, base tables and fresh MVs 402 includes base tables and fresh MVs T, S, and V, and other stale MVs 106 includes stale MVs Y and Z”; [0096] “Since MVs Y and Z from the set of stale MVs 406 are not being refreshed the refresh graph also does not include MVs Y and Z. Additionally, since MVs Z and Y are stale and are not included in the refresh list”).
Claim 18 incorporates substantively all the limitations of claim 8 in a computer-readable medium form and is rejected under the same rationale.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISHALI SHAH whose telephone number is (571)272-8532. The examiner can normally be reached Monday - Friday (7:30 AM to 4:00 PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AJAY BHATIA can be reached at (571)272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VAISHALI SHAH/Primary Examiner, Art Unit 2156