Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The instant application, Application No. 18/913,737, filed on October 11, 2024, has claims 1-20 pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/14/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,117,980. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-20 of U.S. Patent No. 12,117,980 anticipate or render obvious claims 1-20 of the instant application as set forth in the table below.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify independent claims 1, 9, and 14 of the instant application to include: accessing, by the computing system, a set of rules, the set of rules being based at least in part on respective properties of one or more big data query engines, and the set of rules correlating at least one of the one or more characteristics associated with the big data query and the one or more user parameters with the respective properties of the one or more big data query engines; determining, by the computing system, a candidate list of big data query engines comprising a subset of the one or more big data query engines, the candidate list determined based at least in part on the set of rules; executing, by the computing system, the big data query using the particular big data query engine; identifying, by the computing system, a trigger indicating a performance issue with the particular big data query engine; and switching, by the computing system, the execution of the big data query to a second big data query engine of the candidate list of big data query engines. The motivation for doing so is to generate, by the computing system and using a machine learning model, respective probability scores for each query engine of a candidate list of query engines to execute a big data query.
Please see the comparison table below:
Instant Application 18913737
Patent No. 12117980
1. A method, comprising: determining, by a computing system, a candidate list of query engines to execute a query determined based in part on at least one of on a set of rules correlating respective properties of the query engines of the candidate list, one or more characteristics of the query, or one or more user parameters; generating, by the computing system and using a machine learning model, respective probability scores for each query engine of candidate list of query engines, the respective probability scores representing a likelihood of the query being successfully completed by each query engine of a subset of the candidate list of query engines; and selecting, by the computing system, a particular query engine of the one or more query engines of the candidate list, based at least in part on the respective probability score of the particular query engine.
9. A computing system, comprising: one or more processors; and a computer readable memory comprising instructions that, when executed by the one or more processors, cause the computing system to perform operations to: determine a candidate list of query engines to execute a query determined based in part on at least one of on a set of rules correlating respective properties of the query engines of the candidate list, one or more characteristics of the query, or one or more user parameters; generate, using a machine learning model, respective probability scores for each query engine of candidate list of query engines, the respective probability scores representing a likelihood of the query being successfully completed by each query engine of a subset of the candidate list; and select a particular query engine of the one or more query engines of the candidate list, based at least in part on the respective probability score of the particular query engine.
14. A non-transitory computer-readable medium comprising instructions that, when executed by a processor, cause the processor to perform operations comprising: determining, by a computing system, a candidate list of query engines to execute a query determined based in part on at least one of on a set of rules correlating respective properties of the query engines of the candidate list, one or more characteristics of the query, or one or more user parameters; generating, by the computing system and using a machine learning model, respective probability scores for each query engine of candidate list of query engines, the respective probability scores representing a likelihood of the query being successfully completed by each query engine of the subset; and selecting, by the computing system, a particular query engine of the one or more query engines of the candidate list, based at least in part on the respective probability score of the particular query engine.
1. A method, comprising: receiving, by a computing system, a request for a big data query comprising one or more characteristics and one or more user parameters; accessing, by the computing system, a set of rules, the set of rules being based at least in part on respective properties of one or more big data query engines, and the set of rules correlating at least one of the one or more characteristics associated with the big data query and the one or more user parameters with the respective properties of the one or more big data query engines; determining, by the computing system, a candidate list of big data query engines comprising a subset of the one or more big data query engines, the candidate list determined based at least in part on the set of rules; generating, by the computing system and using a machine learning model respective probability scores for each big data query engine of candidate list of big data query engines, the respective probability scores representing a likelihood of the big data query being successfully completed by each big data query engine of the subset; selecting, by the computing system, a particular big data query engine of the one or more big data query engines of the candidate list, based at least in part on the respective probability score of the particular big data query engine; executing, by the computing system, the big data query using the particular big data query engine; identifying, by the computing system, a trigger indicating a performance issue with the particular big data query engine; and switching, by the computing system, the execution the big data query to a second big data query engine of the candidate list of big data query engines.
9. A computing system, comprising: one or more processors; and a computer readable memory comprising instructions that, when executed by the one or more processors, cause the computing system to perform operations to: receive, by the computing system, a request for a big data query comprising one or more characteristics and one or more user parameters; access, by the computing system, a set of rules, the set of rules being based at least in part on respective properties of one or more big data query engines, and the set of rules correlating at least one of the one or more characteristics associated with the big data query and the one or more user parameters with the respective properties of the one or more big data query engines; determine, by the computing system, a candidate list of big data query engines comprising a subset of the one or more big data query engines, the candidate list determined based at least in part on the set of rules; generate, by the computing system and using a machine learning model respective probability scores for each big data query engine of candidate list of big data query engines, the respective probability scores representing a likelihood of the big data query being successfully completed by each big data query engine of the subset; select, by the computing system, a particular big data query engine of the one or more big data query engines of the candidate list, based at least in part on the respective probability score of the particular big data query engine; execute, by the computing system, the big data query using the particular big data query engine; identify, by the computing system, a trigger indicating a performance issue with the particular big data query engine; and switch, by the computing system, the execution the big data query to a second big data query engine of the candidate list of big data query engines.
15. A non-transitory computer-readable medium comprising instructions that, when executed by a processor, cause the processor to perform operations comprising: receiving, by a computing system, a request for a big data query comprising one or more characteristics and one or more user parameters; accessing, by the computing system, a set of rules, the set of rules being based at least in part on respective properties of one or more big data query engines, and the set of rules correlating at least one of the one or more characteristics associated with the big data query and the one or more user parameters with the respective properties of the one or more big data query engines; determining, by the computing system, a candidate list of big data query engines comprising a subset of the one or more big data query engines, the candidate list determined based at least in part on the set of rules; generating, by the computing system and using a machine learning model, respective probability scores for each big data query engine of candidate list of big data query engines, the respective probability scores representing a likelihood of the big data query being successfully completed by each big data query engine of the subset; selecting, by the computing system, a particular big data query engine of the one or more big data query engines of the candidate list, based at least in part on the respective probability score of the particular big data query engine; and executing, by the computing system, the big data query using the particular big data query engine; identifying, by the computing system, a trigger indicating a performance issue with the particular big data query engine; and switching, by the computing system, the execution the big data query to a second big data query engine of the candidate list of big data query engines.
"A later patent claim is not patentably distinct from an earlier patent claim if the later claim is obvious over, or anticipated by, the earlier claim. In re Longi, 759 F.2d at 896, 225 USPQ at 651 (affirming a holding of obviousness-type double patenting because the claims at issue were obvious over claims in four prior art patents); In re Berg, 140 F.3d at 1437, 46 USPQ2d at 1233 (Fed. Cir. 1998) (affirming a holding of obviousness-type double patenting where a patent application claim to a genus is anticipated by a patent claim to a species within that genus)." Eli Lilly and Company v. Barr Laboratories, Inc., United States Court of Appeals for the Federal Circuit, on petition for rehearing en banc (decided May 30, 2001).
The application claim 1 does not contain specific limitations as shown in the patent claim 1; however, according to In re Goodman, the application claim 1 is generic to the species of information covered by claim 1 of the patent. Thus, the generic invention is anticipated by the species of the patented invention.
The application claim 9 does not contain specific limitations as shown in the patent claim 9; however, according to In re Goodman, the application claim 9 is generic to the species of information covered by claim 9 of the patent. Thus, the generic invention is anticipated by the species of the patented invention.
The application claim 14 does not contain specific limitations as shown in the patent claim 14; however, according to In re Goodman, the application claim 14 is generic to the species of information covered by claim 14 of the patent. Thus, the generic invention is anticipated by the species of the patented invention.
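For illustration only, the engine-selection flow compared in the table above can be sketched as follows. All engine names, rules, properties, and scores below are hypothetical stand-ins and are not drawn from the application, the patent, or the cited references:

```python
# Hypothetical sketch: rule-based filtering of query engines into a
# candidate list, then model-based probability scoring and selection.
# Every value here is invented for illustration.

def candidate_list(engines, query_chars, user_params, rules):
    # Keep only engines whose properties satisfy every rule correlating
    # query characteristics and user parameters with engine properties.
    return [e for e in engines
            if all(rule(e, query_chars, user_params) for rule in rules)]

def select_engine(candidates, score_model, query_chars):
    # Score each candidate with a (hypothetical) trained model and pick
    # the engine with the highest probability of completing the query.
    scores = {e["name"]: score_model(e, query_chars) for e in candidates}
    best = max(scores, key=scores.get)
    return best, scores

# Hypothetical engines, rules, and a stand-in "model".
engines = [
    {"name": "engine_a", "max_table_gb": 100, "supports_joins": True},
    {"name": "engine_b", "max_table_gb": 10,  "supports_joins": False},
]
rules = [
    lambda e, q, u: q["table_gb"] <= e["max_table_gb"],
    lambda e, q, u: e["supports_joins"] or q["query_type"] != "join",
]
query = {"table_gb": 50, "query_type": "join"}

cands = candidate_list(engines, query, {"latency": "low"}, rules)
best, scores = select_engine(cands, lambda e, q: e["max_table_gb"] / 200.0, query)
```

In this sketch, only "engine_a" survives the rules (it can hold the table and supports joins), so it is selected; a real model would replace the stand-in scoring lambda.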
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 recites:
A method, comprising:
determining, by a computing system, a candidate list of query engines to execute a query determined based in part on at least one of on a set of rules correlating respective properties of the query engines of the candidate list, one or more characteristics of the query, or one or more user parameters (A person can mentally determine a candidate list of query engines);
generating, by the computing system and using a machine learning model, respective probability scores for each query engine of candidate list of query engines, the respective probability scores representing a likelihood of the query being successfully completed by each query engine of a subset of the candidate list of query engines (a mental process step of mathematical calculation that can be performed in the human mind); and
selecting, by the computing system, a particular query engine of the one or more query engines of the candidate list, based at least in part on the respective probability score of the particular query engine (a mental step that can be performed conceptually in the human mind).
This judicial exception is not integrated into a practical application because having “a computing system” perform the mental processes merely attempts to implement the abstract idea on a general-purpose computer. See MPEP § 2106.05(f). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no additional elements, beyond those discussed regarding integration into a practical application, that could amount to significantly more. Again, those elements merely attempt to apply the abstract idea on a general-purpose computer.
Accordingly, claim 1 recites an abstract idea without integrating it into a practical application or reciting significantly more.
As to claim 2, the claim is rejected for the same reasons as claim 1 above. In addition, the claim recites the additional elements of “executing, by the computing system, the query using the particular query engine; identifying, by the computing system, a trigger indicating a performance issue; monitoring, by the computing system, one or more performance metrics of the particular query engine during the execution of the query; and determining, by the computing system, that the particular query engine is not performing to an expected level based at least in part on the one or more performance metrics and the one or more user parameters; terminating, by the computing system, the execution of the query by the particular query engine; switching, by the computing system, the execution of the query to a second query engine of the candidate list of query engines; and executing, by the computing system, the query using the second query engine, the second query engine selected based at least in part on the respective probability score of the second query engine.” This judicial exception is not integrated into a practical application, nor does it amount to significantly more, because “executing, by the computing system, the query using the particular query engine” recites insignificant extra-solution activity of executing a query to identify the performance issue needed to implement the abstract idea (see MPEP § 2106.05(g)). The features of “computing system” and “query engine” merely recite generic computer components performing their generic functions, and thus merely further implement the abstract idea on a computer (see MPEP § 2106.05(f)). Furthermore, the steps of “identifying”, “monitoring”, “determining”, “terminating” and “switching” are also insignificant extra-solution activity of data gathering and outputting necessary to implement the abstract idea on a computer (see MPEP § 2106.05(g)).
Accordingly, the claim merely further describes the abstract idea of claim 1 without integrating into a practical application, or reciting significantly more.
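For illustration only, the execute-monitor-switch limitations of claim 2 can be sketched as follows. The metrics, threshold, latencies, and engine names are hypothetical and are not part of the record:

```python
# Hypothetical sketch of performance-triggered failover: execute the query
# on the highest-scored engine, and switch to the next candidate when a
# performance trigger fires. All numbers below are invented.

def run_with_failover(ranked_candidates, execute, metrics_ok):
    # ranked_candidates: engines ordered by descending probability score.
    for engine in ranked_candidates:
        result = execute(engine)
        if metrics_ok(engine, result):
            return engine, result
        # Trigger fired: terminate on this engine and switch to the next.
    return None, None

# Stand-in execution and monitoring.
latencies = {"engine_a": 12.0, "engine_b": 3.5}    # seconds (made up)
execute = lambda e: latencies[e]
metrics_ok = lambda e, latency: latency <= 5.0     # hypothetical user latency parameter

engine, latency = run_with_failover(["engine_a", "engine_b"], execute, metrics_ok)
```

Here "engine_a" misses the latency target, so execution switches to "engine_b", mirroring the terminate-and-switch sequence recited in the claim.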
As to claim 3, the claim is rejected for the same reasons as claim 1 above. In addition, the claim recites the mental processes of “determining, by the computing system, one or more performance metrics of the particular query engine during the execution of the query; and retraining, by the computing system, the machine learning model using the one or more the one or more performance metrics and at least one of the one or more characteristics of the query and the one or more user parameters,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 4, the claim is rejected for the same reasons as claim 1 above. In addition, the claim recites “wherein the machine learning model is retrained after a specific number of query executions,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 5, the claim is rejected for the same reasons as claim 1 above. In addition, the claim recites “wherein the one or more user parameters comprise at least one of a reliability parameter, a latency parameter, and an accuracy parameter,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 6, the claim is rejected for the same reasons as claim 1 above. In addition, the claim recites “wherein the one or more characteristics of the query include a number of partitions, a row count, a query-type, and a table size,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 7, the claim is rejected for the same reasons as claim 1 above. In addition, the claim recites “wherein determining the candidate list is based at least in part on a relational tree comprising the one or more characteristics associated with the query and the one or more user parameters,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 8, the claim is rejected for the same reasons as claim 1 above. In addition, the claim recites “wherein the machine learning model is trained using a training data set comprising a data set size, a row count, a number of partitions, a column count, a column type map, a number of files, a query-operator count map, a query result reliability weight, a query execution time, and a query execution time weight,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
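For illustration only, the training-record fields recited in claim 8 can be sketched as a single record flattened into a numeric feature vector. Every value below is invented; the actual training pipeline is not of record:

```python
# Hypothetical training record containing the fields listed in claim 8,
# with made-up values, flattened for a model to train on.

training_record = {
    "data_set_size_gb": 120.0,
    "row_count": 4_500_000,
    "num_partitions": 64,
    "column_count": 30,
    "column_type_map": {"id": "int", "ts": "timestamp", "val": "float"},
    "num_files": 512,
    "query_operator_count_map": {"scan": 3, "join": 1, "agg": 2},
    "query_result_reliability_weight": 0.9,
    "query_execution_time_s": 42.7,
    "query_execution_time_weight": 0.6,
}

def to_feature_vector(rec):
    # Reduce the two map fields to simple counts for this sketch; a real
    # pipeline might encode them differently.
    return [
        rec["data_set_size_gb"], rec["row_count"], rec["num_partitions"],
        rec["column_count"], len(rec["column_type_map"]), rec["num_files"],
        sum(rec["query_operator_count_map"].values()),
        rec["query_result_reliability_weight"],
        rec["query_execution_time_s"], rec["query_execution_time_weight"],
    ]

vec = to_feature_vector(training_record)
```

The sketch yields one numeric feature per claimed field (ten in total), illustrating how such a training data set could feed a probability-scoring model.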
Claim 9 is rejected under the same rationale as claim 1 above.
As to claim 10, the claim is rejected for the same reasons as claim 9 above. In addition, the claim recites “wherein the computing system is implemented to select a query engine in a Hadoop environment,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 11, the claim is rejected for the same reasons as claim 9 above. In addition, the claim recites “wherein the machine learning model is trained using a training data set comprising a data set size, a row count, a number of partitions, a column count, a column type map, a number of files, a query-operator count map, a query result reliability weight, a query execution time, and a query execution time weight,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 12, the claim is rejected for the same reasons as claim 9 above. In addition, the claim recites “wherein the one or more user parameters comprise at least one of a reliability parameter, a latency parameter, and an accuracy parameter,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 13, the claim is rejected for the same reasons as claim 9 above. In addition, the claim recites “wherein the one or more characteristics of the query include a number of partitions, a row count, a query-type, and a table size,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
Claim 14 is rejected under the same rationale as claim 1 above.
As to claim 15, the claim is rejected for the same reasons as claim 14 above. In addition, the claim recites “executing, by the computing system, the query using the particular query engine; identifying, by the computing system, a trigger indicating a performance issue; monitoring, by the computing system, one or more performance metrics of the particular query engine during the execution of the query; and determining, by the computing system, that the particular query engine is not performing to an expected level based at least in part on the one or more performance metrics and the one or more user parameters; terminating, by the computing system, the execution of the query by the particular query engine; switching, by the computing system, the execution of the query to a second query engine of the candidate list of query engines; and executing, by the computing system, the query using the second query engine, the second query engine selected based at least in part on the respective probability score of the second query engine.” This judicial exception is not integrated into a practical application, nor does it amount to significantly more, because “executing, by the computing system, the query using the particular query engine” recites insignificant extra-solution activity of executing a query to identify the performance issue needed to implement the abstract idea (see MPEP § 2106.05(g)). The features of “computing system” and “query engine” merely recite generic computer components performing their generic functions, and thus merely further implement the abstract idea on a computer (see MPEP § 2106.05(f)). Furthermore, the steps of “identifying”, “monitoring”, “determining”, “terminating” and “switching” are also insignificant extra-solution activity of data gathering and outputting necessary to implement the abstract idea on a computer (see MPEP § 2106.05(g)).
Accordingly, the claim merely further describes the abstract idea of claim 14 without integrating it into a practical application or reciting significantly more.
As to claim 16, the claim is rejected for the same reasons as claim 14 above. In addition, the claim recites the mental processes of “determining, by the computing system, one or more performance metrics of the particular query engine during the execution of the query; and retraining, by the computing system, the machine learning model using the one or more the one or more performance metrics and at least one of the one or more characteristics of the query and the one or more user parameters,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 17, the claim is rejected for the same reasons as claim 14 above. In addition, the claim recites “wherein the machine learning model is retrained after a specific number of query executions,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 18, the claim is rejected for the same reasons as claim 14 above. In addition, the claim recites “wherein the one or more user parameters comprise at least one of a reliability parameter, a latency parameter, and an accuracy parameter,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 19, the claim is rejected for the same reasons as claim 14 above. In addition, the claim recites “wherein the one or more characteristics of the query include a number of partitions, a row count, a query-type, and a table size,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
As to claim 20, the claim is rejected for the same reasons as claim 14 above. In addition, the claim recites “wherein determining the candidate list is based at least in part on a relational tree comprising the one or more characteristics associated with the query and the one or more user parameters,” which further elaborates on the abstract idea and therefore does not amount to significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gerweck et al. (US 10713248 B2) (hereinafter Gerweck) in view of Remis et al. (US 12008456 B2) (hereinafter Remis).
As per claims 1, 9 and 14, Gerweck discloses determining, by a computing system, a candidate list of query engines to execute a query determined based in part on at least one of on a set of rules correlating respective properties of the query engines of the candidate list, one or more characteristics of the query, or one or more user parameters [FIG. 3A, query engine selector 116 and physical plan generator 114 are implemented in a data analytics engine 310 to facilitate real-time selection and use of the query engines 142 in distributed data storage environment 140. In this embodiment, data analytics engine 310 serves to establish the data operations abstraction layer 110 earlier described. As can be observed, a query analyzer/planner 312 at data analytics engine 310 receives the data statements 122 from a data analysis application (e.g., business intelligence tool) managed by analyst 102. For example, data statements 122 might be issued to operate on a subject dataset 344 from the datasets 144 stored in distributed data storage environment 140. The query analyzer/planner 312 accesses the virtual data model 112 to generate the logical plan 124 for data statements 122. The logical data structure representation of the virtual data model 112 is based on at least a portion of a set of dataset metadata 338 associated with datasets 144, col. 9, line 17]. However, Gerweck does not disclose generating, by the computing system and using a machine learning model, respective probability scores for each query engine of candidate list of query engines, the respective probability scores representing a likelihood of the query being successfully completed by each query engine of a subset of the candidate list of query engines; and selecting, by the computing system, a particular query engine of the one or more query engines of the candidate list, based at least in part on the respective probability score of the particular query engine.
On the other hand, Remis discloses generating, by the computing system and using a machine learning model, respective probability scores for each query engine of candidate list of query engines, the respective probability scores representing a likelihood of the query being successfully completed by each query engine of a subset of the candidate list of query engines [To predict optimal query performance or execution of the query 115, the query selection system 100 of the illustrated example includes a contextual query classifier 112. The graph language query 110 of the illustrated example communicates the query 115 to the contextual query classifier 112. The contextual query classifier 112 evaluates and/or predicts which database (e.g., the graph database 104 or the relational database 106) is optimal for executing the query 115. As described in greater detail in connection with FIGS. 2A and 4-6, the contextual query classifier 112 employs artificial intelligence and/or machine learning to predict the optimal search engine for each query (e.g., the query 115) received by the query selection system, Col. 8, line 21]; and selecting, by the computing system, a particular query engine of the one or more query engines of the candidate list, based at least in part on the respective probability score of the particular query engine [The model classifier 210 provides a predicted output (ŷ) (e.g., a binary output) representative of optimal query engine selection. For example, a first value of a first binary output represents the graph query engine 114 and a second value of a second binary output represents the relational query engine 116. For example, a binary value “0” represents a “not selected” query engine and a binary value of “1” represents a “selected” query engine. In the above-noted example, the binary output {0, 1} is presentative of employing the relational query engine, col. 12, line 35]. 
Both references, Gerweck and Remis, are in the same field of endeavor of query processing and query engine selection. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the use of a machine-learning model (e.g., an LSTM) to predict which query engine will best execute a given query, assigning a score to each engine and selecting an engine accordingly, as taught by Remis, with determining the candidate list of query engines based on rules correlating query characteristics and engine properties, as disclosed in Gerweck, in order to supplement rule-based filtering with machine-learning predictive scoring so as to select the best query engine.
As per claims 3 and 16, Remis discloses determining, by the computing system, one or more performance metrics of the particular query engine during the execution of the query; and retraining, by the computing system, the machine learning model using the one or more performance metrics and at least one of the one or more characteristics of the query and the one or more user parameters [in the event a first data structure type is selected at a first time and is observed to exhibit relatively good performance characteristics in connection with a first type of input data, in the event the input data types and/or input data quantities change throughout the use of the coded application, performance characteristics may degrade. Because data structure selection is a laborious process requiring substantial expertise, numerous design factors, and/or possible dynamic operating conditions, applications written and/or otherwise developed by code development personnel suffer in one or more performance metrics when particular data structures are selected., col. 4, line 24].
As per claims 4 and 17, Remis discloses wherein the machine learning model is retrained after a specific number of query executions [the LSTM model (e.g., the model classifier 210) can learn (e.g., via the model trainer 212) different conditions that can lead to performance changes and predicts their evolution over time, col. 13, line 38].
As per claims 5, 12 and 18, Remis discloses wherein the one or more user parameters comprise at least one of a reliability parameter, a latency parameter, and an accuracy parameter [During training, the model verifier 216 of FIG. 2A compares the query results of the databases (y) and the predicted outputs (ŷ) (e.g., the predicted binary output ŷ) provided by the LSTM model to determine accuracy of the LSTM model, col. 13, line 60].
As per claims 6, 13 and 19, Gerweck discloses wherein the one or more characteristics of the query include a number of partitions, a row count, a query-type, and a table size [a query engine selection rule mapping table 364 illustrates a mapping of statement attributes to query engine selection actions. The representative rules in query engine selection rule mapping table 364 are identified by entries in a “ruleID” column. The rules are also assigned a priority level in a “priority” column. An “ownerID” column indicates the entity, col. 10, line 65, Fig. 3B].
As per claims 7 and 20, Remis discloses wherein determining the candidate list is based at least in part on a relational tree comprising the one or more characteristics associated with the query and the one or more user parameters [Query execution performance improvement could be obtained by concurrently combining the relational and graph models. In these examples, depending on the query characteristics, the most suitable model (e.g., either graph or relational) can be chosen to execute the query, col. 3, line 66].
As per claims 8 and 11, Remis discloses wherein the machine learning model is trained using a training data set comprising a data set size, a row count, a number of partitions, a column count, a column type map, a number of files, a query-operator count map, a query result reliability weight, a query execution time, and a query execution time weight [In the learning/training phase, a training algorithm or procedure is used to train a model to operate in accordance with patterns and/or associations based on, for example, training data, col. 5, line 36].
As per claim 10, Gerweck discloses wherein the computing system is implemented to select a query engine in a Hadoop environment [an enterprise might desire to have access to 100 TB or more of data that comprises some datasets stored in a variety of modern heterogeneous data storage environments (e.g., Hadoop distributed file system or HDFS), as well as some other datasets stored in a variety of legacy data storage environments, col. 1, line 15].
Allowable Subject Matter
Claims 2 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
The primary reason for objecting to claims 2 and 15 is that the prior art of record does not teach or suggest “executing, by the computing system, the query using the particular query engine; identifying, by the computing system, a trigger indicating a performance issue; monitoring, by the computing system, one or more performance metrics of the particular query engine during the execution of the query; and determining, by the computing system, that the particular query engine is not performing to an expected level based at least in part on the one or more performance metrics and the one or more user parameters; terminating, by the computing system, the execution of the query by the particular query engine; switching, by the computing system, the execution of the query to a second query engine of the candidate list of query engines; and executing, by the computing system, the query using the second query engine, the second query engine selected based at least in part on the respective probability score of the second query engine.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOOSHA ARJOMANDI whose telephone number is (571)272-9784. The examiner can normally be reached on (571)272-9784.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Beausoliel can be reached on (571)272-3645. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
December 10, 2025
/NOOSHA ARJOMANDI/Primary Examiner, Art Unit 2167