Prosecution Insights
Last updated: April 19, 2026
Application No. 18/307,509

METHOD AND APPARATUS FOR PROCESSING APPROXIMATE QUERY BASED ON MACHINE LEARNING MODEL

Status: Final Rejection (§101, §103)
Filed: Apr 26, 2023
Examiner: MAHMOOD, REZWANUL
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 2 (Final)
Grant Probability: 46% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 4y 5m
Grant Probability with Interview: 81%

Examiner Intelligence

Career Allowance Rate: 46% (186 granted / 402 resolved), -8.7% vs TC avg
Interview Lift: +34.7% (resolved cases with interview)
Typical Timeline: 4y 5m average prosecution; 31 applications currently pending
Career History: 433 total applications across all art units

Statute-Specific Performance

§101: 18.9% (-21.1% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 9.0% (-31.0% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
Tech Center averages are estimates; based on career data from 402 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

This Office action is in response to the communication filed on September 09, 2025. Claims 1-5, 8-14, and 17-20 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on September 09, 2025 have been fully considered, but they are not persuasive for the following reasons.

Applicant in Pages 7-10 of the Remarks argues that the amended independent claims 1 and 11 are patent-eligible under each prong of the eligibility framework: (1) the claims are not directed to a judicial exception because they recite a practical application of machine learning in query optimization; (2) even if the claims were considered to involve an abstract idea, they are integrated into a specific and structured query processing architecture that materially improves system functionality; (3) the claims recite an inventive concept through a non-conventional combination of trained machine learning models, synopsis reuse, and grammar-based constraint handling; and (4) each of these grounds independently supports eligibility under 35 U.S.C. § 101.

Examiner respectfully disagrees. It is important to note that the judicial exception alone cannot provide the improvement; the improvement can be provided by one or more additional elements (MPEP 2106.05(a)). Independent claims 1 and 11 cover several steps, such as the parsing, selecting, and generating steps, that recite an abstract idea within the “Mental Processes” grouping of abstract ideas, because a person can perform the limitations recited in those steps mentally or using pen and paper, as discussed in detail in the current § 101 rejection below.
The claims do not provide any limitations directed to a specific improvement in computer technology, because the steps the applicant argues provide such an improvement are all recited in the claims as limitations that have been identified as abstract ideas. The remaining steps identified as reciting additional elements, such as the performing step, only add insignificant extra-solution activity to the judicial exception and are recognized as well-understood, routine, and conventional activity within the field of computer functions, which is not sufficient to amount to significantly more than the judicial exception and is not directed to any specific improvement in computer technology. Accordingly, the additional elements, individually or in combination, do not integrate the abstract idea into a practical application, even viewing the claims as a whole, because they do not impose any meaningful limits on practicing the abstract idea.

Applicant in Pages 10-14 of the Remarks argues that Fan, Kapp, Kandukuri, and Jain do not teach or even suggest the features "generating, by the processing device, a basic execution plan based on a result of the parsing, and generating a plurality of executable candidate execution plans based on the basic execution plan; wherein the generating of the plurality of executable candidate execution plans comprises: generating a first candidate execution plan using a first machine learning model that infers a query prediction result of the user query; generating a second candidate execution plan using a second machine learning model that generates a synopsis, which is synthesized data, for query processing; and generating a third candidate execution plan by reusing a previously generated synopsis", as recited in amended independent claim 1 and similarly in amended independent claim 11. Examiner respectfully disagrees.
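For readers unfamiliar with the claim language, the disputed three-branch limitation can be made concrete with a short sketch. This is purely illustrative: the names (`CandidatePlan`, `predict_result`, `generate_synopsis`, `synopsis_cache`) are invented here and are not taken from the application or the cited references.

```python
# Hypothetical sketch of the disputed limitation: deriving three candidate
# execution plans from one basic plan. Names are illustrative only.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class CandidatePlan:
    strategy: str   # how this plan would answer the approximate query
    payload: Any    # an inferred result, a fresh synopsis, or a reused one


def candidate_plans(basic_plan: str,
                    predict_result: Callable[[str], Any],     # "first ML model"
                    generate_synopsis: Callable[[str], Any],  # "second ML model"
                    synopsis_cache: dict) -> list:
    plans = [
        # First candidate: a model infers the query's predicted result directly.
        CandidatePlan("model-inference", predict_result(basic_plan)),
        # Second candidate: a model synthesizes a synopsis (summary data).
        CandidatePlan("new-synopsis", generate_synopsis(basic_plan)),
    ]
    # Third candidate: reuse a previously generated synopsis, if one exists.
    if basic_plan in synopsis_cache:
        plans.append(CandidatePlan("reused-synopsis", synopsis_cache[basic_plan]))
    return plans
```

The point of contention in the rejection is whether this combination of branches is taught by the cited references or amounts to a mental process, not how it would be coded.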
The cited prior art, alone and/or in combination, discloses the argued features. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Fan in [0003], [0004], [0023], and [0184] discloses determining a query to retrieve exact answers, computing approximate query answers to a query, determining a cost associated with a rewritten query based on the query and access constraints, generating a plan or rewriting one or more queries to retrieve information, and a processor executing instructions; Fan in [0028] discloses executing a query plan to retrieve information. Therefore, Fan discloses processing a user query when the user query is input through an approximate query language extension interface, the user query being an extended query form including information according to a user requirement. Fan does not explicitly disclose parsing a user query, but the Kapp reference discloses that feature.

On the basis of the same disclosures ([0003], [0004], [0023], [0184], and [0028]), Fan also discloses generating a basic execution plan based on a result, and those disclosures likewise extend to the selecting and performing steps.
Therefore, Fan discloses selecting an optimal final execution plan reflecting the user requirement from among the plurality of executable candidate execution plans and, for the same reasons, performing query processing on the user query based on the final execution plan.

Fan discloses processing a user query and returning answers or results; however, Fan does not explicitly disclose: "parsing…a user query…; generating…a basic execution plan based on a result of the parsing, and generating a plurality of executable candidate execution plans based on the basic execution plan;… wherein the generating of the plurality of executable candidate execution plans comprises: generating a first candidate execution plan…; generating a second candidate execution plan…for query processing; and generating a third candidate execution plan…."

Kapp in [0007] and [0047] discloses a user specifying a query. Kapp in [0191] and [0198] discloses a parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, and using a language grammar to identify tokens in the input string. Kapp in [0226] and [0231] discloses that query optimization generates one or more different candidate execution plans for a query, which are evaluated by a query optimizer to determine which execution plan should be used to compute the query, and that the query optimizer optimizes a query by rewriting it as another semantically equivalent query that produces the same result, can be executed more efficiently, and for which a more efficient and less costly execution plan can be generated.
Therefore, Kapp discloses parsing a user query, generating a basic execution plan based on a result of the parsing, and generating a plurality of executable candidate execution plans based on the basic execution plan, wherein the generating of the plurality of executable candidate execution plans comprises generating a first candidate execution plan, generating a second candidate execution plan for query processing, and generating a third candidate execution plan. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan and Kapp, to have combined Fan and Kapp. The motivation to combine Fan and Kapp would be to identify tokens from an input string to construct an intermediate representation by using a parser.

Fan discloses generating an execution plan, and Kapp discloses generating a plurality of executable candidate execution plans and reusing previously generated tables for executions; however, Fan and Kapp do not explicitly disclose: "generating a first candidate execution plan using a first machine learning model that infers a query prediction result of the user query; generating a second candidate execution plan using a…machine learning model…for query processing; and generating a third candidate execution plan by reusing a previously generated…."
Kandukuri in Column 9 line 37 – Column 10 line 17 and in Column 12 line 33 – Column 13 line 12 discloses identifying an execution plan for a query; predicting, based on the query and the execution plan, a processing complexity of the query, the processing complexity corresponding to a time to complete the query and the resources required to execute it; using a trained machine learning model as part of predicting the processing complexity of a query, the machine learning model trained on a history of executed queries comprising results from those queries in terms of processing time, bandwidth used, and size of the results, such that the machine learning model may learn how various aspects of queries affect their processing complexity; and providing the determined execution plan as input to the trained machine learning model and receiving a prediction of the processing complexity of the received query.

Therefore, Kandukuri discloses generating a first candidate execution plan using a first machine learning model that infers a query prediction result of the user query, generating a second candidate execution plan using a machine learning model for query processing, and generating a third candidate execution plan by reusing a previously generated. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan, Kapp, and Kandukuri, to have combined Fan, Kapp, and Kandukuri. The motivation to combine Fan, Kapp, and Kandukuri would be to receive a prediction of the processing complexity of a received query by training a machine learning model with various aspects of a plurality of queries.
Fan discloses generating an execution plan, Kapp discloses generating a plurality of executable candidate execution plans and reusing previously generated tables for executions, and Kandukuri discloses generating candidate execution plans using a machine learning model; however, Fan, Kapp, and Kandukuri do not explicitly disclose: "…using a first machine learning model that infers a query prediction result of the user query; …using a second machine learning model that generates a synopsis, which is synthesized data, for query processing; and generating…by reusing a previously generated synopsis."

Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, and a request to perform a machine learning task comprising a request to train a machine learning model based on a data set. Jain in Column 5 lines 16-47 discloses performing a machine learning task by accessing a machine learning model configured to predict an outcome specified in a request, applying the machine learning model to one or more records included in a data set, and generating output data corresponding to the outcome predicted by the machine learning model based on applying the model to the one or more records included in the data set. Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 9 discloses a machine learning task including accessing a summary of aggregated data for a data set. Jain in Column 24 lines 41-55, Column 38 lines 13-36, Column 40 lines 9-31, and Column 42 lines 24-50 discloses training the model using aggregate data; machine learning task results including a trained machine learning model for downloading a summarized or aggregated data set; and predictions or inferences determined using a machine learning model.
Therefore, Jain discloses using a first machine learning model that infers a query prediction result of the user query, using a second machine learning model that generates a synopsis, which is synthesized data, for query processing, and generating by reusing a previously generated synopsis. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan, Kapp, Kandukuri, and Jain, to have combined Fan, Kapp, Kandukuri, and Jain. The motivation to combine Fan, Kapp, Kandukuri, and Jain would be to provide differentiated access to collected data based on applied data policies using machine learning models. For the above reasons, Examiner states that the rejection of the current Office action is proper.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 8-14, and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

At step 1: Independent claims 1 and 11 respectively recite a method and an apparatus, which are directed to a statutory category such as a process, a machine, or an article of manufacture.

At step 2A, prong one: Independent claim 1 and similarly independent claim 11 recite the limitations:

“parsing…a user query when the user query is input…the user query being an extended query form including information according to a user requirement”; a person can, mentally or using pen and paper, parse a user query that has been input.
“generating…a basic execution plan based on a result of the parsing, and generating a plurality of executable candidate execution plans based on the basic execution plan”; a person can, mentally or using pen and paper, generate a basic execution plan based on a result of a parsing and generate a plurality of executable candidate execution plans based on the basic execution plan.

“selecting…an optimal final execution plan reflecting the user requirement from among the plurality of executable candidate execution plans”; a person can, mentally or using pen and paper, select an optimal final execution plan reflecting a user requirement from among a plurality of executable candidate execution plans.

“wherein the generating of the plurality of executable candidate execution plans comprises: generating a first candidate execution plan…that infers a query prediction result of the user query; generating a second candidate execution plan…that generates a synopsis, which is synthesized data, for query processing; and generating a third candidate execution plan by reusing a previously generated synopsis”; a person can, mentally or using pen and paper, generate each of these candidate execution plans: a first candidate execution plan that infers a query prediction result of the user query, a second candidate execution plan that generates a synopsis, which is synthesized data, for query processing, and a third candidate execution plan that reuses a previously generated synopsis.

The limitations, as recited above in claim 1 and similarly in claim 11, are processes that, under their broadest reasonable interpretation, cover steps that can be performed in the human mind or by a human using pen and paper, but for the recitation of generic computer components.
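The "selecting" step above, spelled out in dependent claim 5 as filtering candidates by the user requirement and then taking the minimum-cost survivor, can be sketched as a filter-then-minimize. The cost model and the attribute names (`expected_error`, `expected_time`, `cost`) are assumptions made for this sketch, not details from the claims:

```python
# Illustrative filter-then-minimize selection of a final execution plan.
# Attribute names and the cost model are hypothetical.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    expected_error: float   # estimated result error of this plan
    expected_time: float    # estimated processing time (seconds)
    cost: float             # estimated query processing cost


def select_final_plan(candidates, max_error, max_time):
    # Keep only candidates that satisfy the user requirement
    # (error tolerance and allowable processing time)...
    feasible = [c for c in candidates
                if c.expected_error <= max_error and c.expected_time <= max_time]
    if not feasible:
        return None
    # ...then, if several remain, pick the one with minimum processing cost.
    return min(feasible, key=lambda c: c.cost)
```

Whether such a procedure is a "mental process" or a technological improvement is exactly what the eligibility dispute above turns on; the sketch only shows what the limitation describes.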
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

At step 2A, prong two: This judicial exception is not integrated into a practical application. Independent claim 1 and similarly independent claim 11 recite the limitation “performing, by the processing device, query processing on the user query based on the final execution plan”, which is a step of performing processing based on a plan and amounts to no more than mere instructions to apply an exception using generic computer components. The additional elements “by a processing device”, “through an approximate query language extension interface”, “by the processing device”, “using a first machine learning model”, and “using a second machine learning model” in the steps of claim 1 are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components.
The additional elements “an apparatus for processing an approximate query, comprising: an interface device configured to provide an approximate query language extension interface; and a processor configured to perform query processing according to a user query input through the approximate query language extension interface, the user query being in a form of an extended query including information according to a user requirement”, “wherein the processor includes: a query parser configured to parse…a query transformer configured to generate…a query optimizer configured to select…and a query executor configured to perform…”, “wherein the query transformer is configured to: generate…”, “using a first machine learning model”, and “using a second machine learning model” in the steps of claim 11 are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, the additional elements, individually or in combination, do not integrate the abstract idea into a practical application, even viewing the claims as a whole, because they do not impose any meaningful limits on practicing the abstract idea.

At step 2B: Independent claims 1 and 11 recite the same additional elements identified in step 2A, prong two, above. These additional elements are not sufficient to amount to significantly more than the judicial exception. Independent claim 1 and similarly independent claim 11 recite the limitation “performing, by the processing device, query processing on the user query based on the final execution plan”, which is a step of performing processing based on a plan and amounts to no more than mere instructions to apply an exception using generic computer components. Accordingly, the additional limitations are not sufficient to amount to significantly more than the judicial exception. Therefore, the claims are directed to an abstract idea and are not patent eligible.
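The dependent claims discussed next concern a query grammar extension that lets a user attach accuracy and timeliness requirements to an otherwise standard SQL query. As a purely hypothetical illustration of what such an extension could look like (the `WITHIN ERROR … DEADLINE …` keywords are invented for this sketch and are not quoted from the application):

```python
# Hypothetical "approximate query language extension": standard SQL plus an
# invented trailing clause carrying the user requirement.
import re

# Matches an invented extension clause such as "WITHIN ERROR 0.05 DEADLINE 200ms".
EXT = re.compile(
    r"\s+WITHIN\s+ERROR\s+(?P<err>[\d.]+)\s+DEADLINE\s+(?P<ms>\d+)ms\s*$",
    re.IGNORECASE,
)


def parse_extended_query(text: str):
    """Split an extended query into plain SQL plus a user-requirement dict."""
    m = EXT.search(text)
    if m is None:
        return text, None                      # exact query: no requirement attached
    requirement = {"error_tolerance": float(m.group("err")),
                   "allowable_time_ms": int(m.group("ms"))}
    return text[:m.start()], requirement       # approximate query with requirement
```

A query without the clause parses as an exact query, which mirrors the exact-versus-approximate branching that claims 8 and 17 describe.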
Dependent claim 2 and similarly dependent claim 12 recite additional limitations, such as: “wherein the approximate query language extension interface provides a query grammar extension function that allows a user to select desired accuracy and timeliness”. This limitation is directed to the same abstract idea under the mental processes grouping as independent claims 1 and 11, because a person can mentally or using pen and paper select desired accuracy and timeliness, and because the limitation does not recite any additional elements that are sufficient to amount to significantly more. The additional elements “wherein the approximate query language extension interface provides a query grammar extension function that allows a user to select” are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, the additional elements, individually or in combination, do not integrate the abstract idea into a practical application, even viewing the claims as a whole, because they do not impose any meaningful limits on practicing the abstract idea.

Dependent claim 3 and similarly dependent claim 13 recite additional limitations, such as: “wherein the user requirement includes information on an error tolerance range corresponding to the accuracy and information on a query processing allowable time corresponding to the timeliness”. This limitation is directed to the same abstract idea under the mental processes grouping as independent claims 1 and 11, because a person can mentally or using pen and paper provide a requirement that includes information on an error tolerance range corresponding to an accuracy and information on a query processing allowable time corresponding to a timeliness, and because the limitation does not recite any additional elements that are sufficient to amount to significantly more.
Accordingly, the additional elements, individually or in combination, do not integrate the abstract idea into a practical application, even viewing the claims as a whole, because they do not impose any meaningful limits on practicing the abstract idea.

Dependent claim 4 recites additional limitations, such as: “wherein the approximate query language extension interface provides a query grammar extension function based on a structured query language (SQL) grammar”. These additional elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, the additional elements, individually or in combination, do not integrate the abstract idea into a practical application, even viewing the claims as a whole, because they do not impose any meaningful limits on practicing the abstract idea.

Dependent claim 5 and similarly dependent claim 14 recite additional limitations, such as, wherein the selecting of the final execution plan includes: “selecting a candidate execution plan that satisfies the user requirement from among the plurality of executable candidate execution plans”. This limitation is directed to the same abstract idea under the mental processes grouping as independent claims 1 and 11, because a person can mentally or using pen and paper select a candidate execution plan that satisfies a user requirement from among the plurality of executable candidate execution plans, and because the limitation does not recite any additional elements that are sufficient to amount to significantly more.
“when there are a plurality of selected candidate execution plans, calculating query processing costs for each candidate execution plan and selecting a candidate execution plan having a minimum query processing cost as the final execution plan”. This limitation is directed to the same abstract idea under the mental processes grouping as independent claims 1 and 11, because a person can mentally or using pen and paper calculate query processing costs for each candidate execution plan and select the candidate execution plan having the minimum query processing cost as the final execution plan when there are a plurality of selected candidate execution plans, and because the limitation does not recite any additional elements that are sufficient to amount to significantly more. Accordingly, the additional elements, individually or in combination, do not integrate the abstract idea into a practical application, even viewing the claims as a whole, because they do not impose any meaningful limits on practicing the abstract idea.

Dependent claim 8 and similarly dependent claim 17 recite additional limitations, such as: “accessing raw data to perform the query processing according to the final execution plan when it is determined that the user query is an exact query based on a parsing result”, which is a step of accessing data. At step 2A, prong two, the step is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. At step 2B, the step is recognized as a well-understood, routine, and conventional activity within the field of computer functions as an element of receiving or transmitting data over a network (MPEP 2106.05(d)(II)(i)).
“accessing synopsis data, which is synthesized data acquired from the raw data, or prediction result generated by inferring a prediction result of the query to perform the query processing according to the optimal execution plan when it is determined that the user query is an approximate query based on the parsing result”, which is a step of accessing data. At step 2A, prong two, the step is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. At step 2B, the step is recognized as a well-understood, routine, and conventional activity within the field of computer functions as an element of receiving or transmitting data over a network (MPEP 2106.05(d)(II)(i)). Accordingly, the additional elements, individually or in combination, do not integrate the abstract idea into a practical application, even viewing the claims as a whole, because they do not impose any meaningful limits on practicing the abstract idea.

Dependent claim 9 and similarly dependent claim 18 recite additional limitations, such as, wherein the accessing of the synopsis data to perform the query processing according to the optimal execution plan includes: “generating synopsis data based on a machine learning model…”. This limitation is directed to the same abstract idea under the mental processes grouping as independent claims 1 and 11, because a person can mentally or using pen and paper generate synopsis data based on a machine learning model, and because the limitation does not recite any additional elements that are sufficient to amount to significantly more. The additional elements “based on a machine learning model” are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components.
“…performing the query processing using the generated synopsis data”, which is a step of performing processing using generated data, and amounts to no more than mere instructions to apply an exception using generic computer components. “performing the query processing using pre-generated synopsis data according to a syntax in a previous query form”, which is likewise a step of performing processing using generated data, and amounts to no more than mere instructions to apply an exception using generic computer components. Accordingly, the additional elements, individually or in combination, do not integrate the abstract idea into a practical application, even viewing the claims as a whole, because they do not impose any meaningful limits on practicing the abstract idea.

Dependent claim 10 and similarly dependent claim 20 recite additional limitations, such as, wherein the accessing of the prediction result to perform the query processing according to the optimal execution plan includes: “predicting a query prediction result through a result inference type model”. This limitation is directed to the same abstract idea under the mental processes grouping as independent claims 1 and 11, because a person can mentally or using pen and paper predict a query prediction result through a result inference type model, and because the limitation does not recite any additional elements that are sufficient to amount to significantly more. The additional elements “through a result inference type model” are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. “performing the query processing using the query prediction result”, which is a step of performing processing using a result, amounts to no more than mere instructions to apply an exception using generic computer components.
Accordingly, the additional elements, individually or in combination, do not integrate the abstract idea into a practical application, even viewing the claims as a whole, because they do not impose any meaningful limits on practicing the abstract idea.

Dependent claim 19 recites additional limitations, such as: “a metadata storage unit configured to store and manage table and column information of raw data for accessing the raw data, an ML model, and a model instance”, which is a step of storing data. At step 2A, prong two, the step is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. At step 2B, the step is recognized as a well-understood, routine, and conventional activity within the field of computer functions as an element of storing and retrieving information in memory (MPEP 2106.05(d)(II)(iv)). The additional elements “a metadata storage unit configured to store” are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, the additional elements, individually or in combination, do not integrate the abstract idea into a practical application, even viewing the claims as a whole, because they do not impose any meaningful limits on practicing the abstract idea.

Accordingly, dependent claims 2-10 and 12-20 are also directed to an abstract idea without significantly more and are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 5, 8-11, 14, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US Pub 2017/0277750) in view of Kapp (US Pub 2023/0267120), in view of Kandukuri (US Pat 11,494,413), and in further view of Jain (US Pat 11,315,041).

With respect to claim 1, Fan discloses a method of processing an approximate query, comprising: …by a processing device, a user query when the user query is input through an approximate query language extension interface, the user query being an extended query form including information according to a user requirement (Fan in [0003], [0004], [0023], and [0184] discloses determine query to retrieve exact answers, compute approximate query answers to query, determine cost associated with rewritten query based on query and access constraints, generate plan or rewritten one or more queries to retrieve information, processor executing instructions; Fan in [0028] discloses executing query plan to retrieve information; here Fan does not explicitly disclose parsing a user query, but the Kapp reference discloses the feature, as discussed below); generating, by the processing device, a basic execution plan based on a result… (Fan in [0003], [0004], [0023], and [0184] discloses determine query to retrieve exact answers, compute approximate query answers to query, determine cost associated with rewritten query based on query
and access constraints, generate plan or rewritten one or more queries to retrieve information; Fan in [0028] discloses executing query plan to retrieve information); selecting, by the processing device, an optimal final execution plan reflecting the user requirement from among the plurality of executable candidate execution plans (Fan in [0003], [0004], [0023], and [0184] discloses determine query to retrieve exact answers, compute approximate query answers to query, determine cost associated with rewritten query based on query and access constraints, generate plan or rewritten one or more queries to retrieve information; Fan in [0028] discloses executing query plan to retrieve information); and performing, by the processing device, query processing on the user query based on the final execution plan (Fan in [0003], [0004], [0023], and [0184] discloses determine query to retrieve exact answers, compute approximate query answers to query, determine cost associated with rewritten query based on query and access constraints, generate plan or rewritten one or more queries to retrieve information; Fan in [0028] discloses executing query plan to retrieve information). Fan discloses processing a user query and returning answers or results, however, Fan does not explicitly disclose: parsing…a user query…; generating…a basic execution plan based on a result of the parsing, and generating a plurality of executable candidate execution plans based on the basic execution plan;… wherein the generating of the plurality of executable candidate execution plans comprises: generating a first candidate execution plan…; generating a second candidate execution plan…for query processing; and generating a third candidate execution plan…. 
The Kapp reference discloses parsing a user query and generating a basic execution plan based on a result of the parsing, and generating a plurality of executable candidate execution plans based on the basic execution plan, wherein the generating of the plurality of executable candidate execution plans comprises: generating a first candidate execution plan, generating a second candidate execution plan for query processing, and generating a third candidate execution plan (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string; Kapp in [0226] and [0231] discloses query optimization generates one or more different candidate execution plans for a query, which are evaluated by a query optimizer to determine which execution plan should be used to compute the query, query optimizer optimizes a query by transforming the query by rewriting another semantically equivalent query that produces the same result and can be executed more efficiently and for which a more efficient and less costly execution plan can be generated). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan and Kapp, to have combined Fan and Kapp. The motivation to combine Fan and Kapp would be to identify tokens from an input string to construct an intermediate representation by using a parser (Kapp: [0191]). 
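As a non-authoritative sketch of the claimed parse-then-plan flow mapped above (every identifier here is hypothetical and appears in no cited reference; it forms no part of the record), claim 1's steps of parsing an extended query, building a basic execution plan, and deriving three candidate plans can be outlined as:

```python
# Illustrative only: parse an extended user query, build a basic plan,
# then derive the three claimed candidate execution plans from it.
# All names are hypothetical; this is not code from Fan, Kapp, or any reference.

def parse_query(user_query: str) -> dict:
    """Toy parser: split an extended query into its base SQL and a
    user-requirement hint (here assumed to follow an '@' separator)."""
    base, _, requirement = user_query.partition("@")
    return {"sql": base.strip(), "requirement": requirement.strip() or None}

def generate_candidate_plans(parsed: dict) -> list:
    """Derive a basic plan, then the three claimed candidate plans."""
    basic = {"source": "raw_data", "sql": parsed["sql"]}
    return [
        dict(basic, strategy="ml_inference"),    # 1st: model infers the query result
        dict(basic, strategy="ml_synopsis"),     # 2nd: model generates a synopsis
        dict(basic, strategy="reuse_synopsis"),  # 3rd: reuse a prior synopsis
    ]

parsed = parse_query("SELECT AVG(price) FROM sales @ error<=0.05")
plans = generate_candidate_plans(parsed)
print(len(plans))  # 3
```

The point of the sketch is only that the claim recites one basic plan fanned out into three differently sourced candidates, which is the limitation the rejection distributes across Kapp (candidate plans), Kandukuri (ML models), and Jain (synopsis generation/reuse).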
Fan discloses generating an execution plan and Kapp discloses generating a plurality of executable candidate execution plans and reusing previously generated tables for executions, however, Fan and Kapp do not explicitly disclose: generating a first candidate execution plan using a first machine learning model that infers a query prediction result of the user query; generating a second candidate execution plan using a…machine learning model…for query processing; and generating a third candidate execution plan by reusing a previously generated…. The Kandukuri reference discloses generating a first candidate execution plan using a first machine learning model that infers a query prediction result of the user query and generating a second candidate execution plan using a machine learning model for query processing and generating a third candidate execution plan by reusing a previously generated (Kandukuri in Column 9 line 37 – Column 10 line 17 and in Column 12 line 33 – Column 13 line 12 discloses identify execution plan for a query, based on the query and the execution plan predicting a processing complexity of the query, processing complexity corresponding to a time to complete the query and resources required to execute the query, using a trained machine learning model as part of predicting the processing complexity of a query, a machine learning model trained based on a history of queries executed comprising results from those queries in terms of processing time, bandwidth used, size of the results, machine learning model may learn how various aspects of queries affect the processing complexity of those queries, providing as input to the trained machine learning model the determined execution plan and receive a prediction of the processing complexity of the received query). 
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan, Kapp, and Kandukuri, to have combined Fan, Kapp, and Kandukuri. The motivation to combine Fan, Kapp, and Kandukuri would be to receive a prediction of a processing complexity of a received query by training a machine learning model with various aspects of a plurality of queries (Kandukuri: Column 12 line 63 – Column 13 line 12). Fan discloses generating an execution plan, Kapp discloses generating a plurality of executable candidate execution plans and reusing previously generated tables for executions, and Kandukuri discloses generating candidate execution plans using a machine learning model, however, Fan, Kapp, and Kandukuri do not explicitly disclose: …using a first machine learning model that infers a query prediction result of the user query; …using a second machine learning model that generates a synopsis, which is synthesized data, for query processing; and generating…by reusing a previously generated synopsis. 
The Jain reference discloses using a first machine learning model that infers a query prediction result of the user query, using a second machine learning model that generates a synopsis, which is synthesized data, for query processing, and generating by reusing a previously generated synopsis (Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, request to perform machine learning task comprises request to train a machine learning model based on data set; Jain in Column 5 lines 16-47 discloses performing a machine learning task by accessing a machine learning model configured to predict an outcome specified in a request, applying the machine learning model to one or more records included in a data set, and generating output data corresponding to the outcome predicted by the machine learning model based on applying the model to one or more records included in the data set; Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 9 discloses machine learning task including accessing summary of aggregated data for a data set; Jain in Column 24 lines 41-55, in Column 38 lines 13-36, in Column 40 lines 9-31, and in Column 42 lines 24-50 discloses training the model using aggregate data, machine learning task results include a trained machine learning model for downloading summarized or aggregated data set, and predictions or inferences determined using a machine learning model). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan, Kapp, Kandukuri, and Jain, to have combined Fan, Kapp, Kandukuri, and Jain. 
The motivation to combine Fan, Kapp, Kandukuri, and Jain would be to provide differentiated access to collected data based on applied data policies using machine learning models (Jain: Column 1 lines 7-10 and Column 1 line 55 – Column 2 line 3). With respect to claim 4, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the method of claim 1, wherein the approximate query language extension interface provides a query grammar extension function based on a structured query language (SQL) grammar (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string; Kapp in [0226] and [0231] discloses query optimization generates one or more different candidate execution plans for a query, query optimizer optimizes a query by transforming the query by rewriting another semantically equivalent query that produces the same result and can be executed more efficiently and for which a more efficient and less costly execution plan can be generated). 
With respect to claim 5, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the method of claim 1, wherein the selecting of the final execution plan includes: selecting a candidate execution plan that satisfies the user requirement from among the plurality of executable candidate execution plans (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string; Kapp in [0226] and [0231] discloses query optimization generates one or more different candidate execution plans for a query, query optimizer optimizes a query by transforming the query by rewriting another semantically equivalent query that produces the same result and can be executed more efficiently and for which a more efficient and less costly execution plan can be generated); and when there are a plurality of selected candidate execution plans, calculating query processing costs for each candidate execution plan and selecting a candidate execution plan having a minimum query processing cost as the final execution plan (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string; Kapp in [0226] and [0231] discloses query optimization generates one or more different candidate execution plans for a query, query optimizer optimizes a query by transforming the query by rewriting another semantically equivalent query that produces the same result and can be executed more efficiently and for which a more efficient and less costly execution plan can be generated). 
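A minimal sketch of the cost-based selection recited in claim 5, assuming a hypothetical error-bound requirement and per-plan cost estimates (none of these names or values come from the cited references; the sketch forms no part of the record):

```python
# Illustrative only: filter candidates to those satisfying the user
# requirement, then pick the one with the minimum estimated processing cost,
# as claim 5 recites. All plan names and numbers are hypothetical.

def select_final_plan(candidates, meets_requirement, cost):
    """Return the cheapest candidate plan that satisfies the requirement."""
    eligible = [p for p in candidates if meets_requirement(p)]
    if not eligible:
        raise ValueError("no candidate satisfies the user requirement")
    return min(eligible, key=cost)

candidates = [
    {"name": "ml_inference",   "est_error": 0.08, "est_cost": 1.0},
    {"name": "ml_synopsis",    "est_error": 0.03, "est_cost": 4.0},
    {"name": "reuse_synopsis", "est_error": 0.04, "est_cost": 2.5},
]
best = select_final_plan(
    candidates,
    meets_requirement=lambda p: p["est_error"] <= 0.05,  # user's error bound
    cost=lambda p: p["est_cost"],
)
print(best["name"])  # reuse_synopsis: cheapest plan within the error bound
```

Note the two-stage structure the claim recites: requirement filtering happens first, and cost comparison only breaks ties among the plans that survive it.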
With respect to claim 8, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the method of claim 1, wherein the performing of the query processing includes: accessing raw data to perform the query processing according to the final execution plan when it is determined that the user query is an exact query based on a parsing result (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string; Kapp in [0226] and [0231] discloses query optimization generates one or more different candidate execution plans for a query, query optimizer optimizes a query by transforming the query by rewriting another semantically equivalent query that produces the same result and can be executed more efficiently and for which a more efficient and less costly execution plan can be generated); and accessing synopsis data, which is synthesized data acquired from the raw data, or prediction result generated by inferring a prediction result of the query to perform the query processing according to the optimal execution plan when it is determined that the user query is an approximate query based on the parsing result (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string; Kapp in [0226] and [0231] discloses query optimization generates one or more different candidate execution plans for a query, query optimizer optimizes a query by transforming the query by rewriting another semantically equivalent query that produces the same result and can be executed more efficiently and for which a more efficient and less costly execution 
plan can be generated). With respect to claim 9, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the method of claim 8, wherein the accessing of the synopsis data to perform the query processing according to the optimal execution plan includes: generating synopsis data based on a machine learning model and performing the query processing using the generated synopsis data (Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, request to perform machine learning task comprises request to train a machine learning model based on data set; Jain in Column 5 lines 16-47 discloses performing a machine learning task comprises a request to predict an outcome based on a plurality of data sets; Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 55 discloses machine learning task including accessing summary of aggregated data for a data set, using previously collected or generated data; Jain in Column 35 discloses data access requests include request to view, search, or download data, perform machine learning task based on the data; Jain in Column 38 lines 13-36, in Column 40 lines 9-31, and in Column 42 lines 24-50 discloses machine learning task results include a trained machine learning model for downloading summarized or aggregated data set, and predictions or inferences determined using a machine learning model); and performing the query processing using pre-generated synopsis data according to a syntax in a previous query form (Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, request to perform machine learning task comprises request to train a machine learning model based on data set; Jain in Column 5 lines 16-47 discloses performing a machine 
learning task comprises a request to predict an outcome based on a plurality of data sets; Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 55 discloses machine learning task including accessing summary of aggregated data for a data set, using previously collected or generated data; Jain in Column 35 discloses data access requests include request to view, search, or download data, perform machine learning task based on the data; Jain in Column 38 lines 13-36, in Column 40 lines 9-31, and in Column 42 lines 24-50 discloses machine learning task results include a trained machine learning model for downloading summarized or aggregated data set, and predictions or inferences determined using a machine learning model). With respect to claim 10, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the method of claim 8, wherein the accessing the prediction result to perform the query processing according to the optimal execution plan includes: predicting a query prediction result through a result inference type model (Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, request to perform machine learning task comprises request to train a machine learning model based on data set; Jain in Column 5 lines 16-47 discloses performing a machine learning task comprises a request to predict an outcome based on a plurality of data sets; Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 55 discloses machine learning task including accessing summary of aggregated data for a data set, using previously collected or generated data; Jain in Column 35 discloses data access requests include request to view, search, or download data, perform machine learning task based on the data; Jain in Column 38 lines 13-36, in Column 40 lines 9-31, and in Column 42 lines 24-50 discloses machine 
learning task results include a trained machine learning model for downloading summarized or aggregated data set, and predictions or inferences determined using a machine learning model); and performing the query processing using the query prediction result (Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, request to perform machine learning task comprises request to train a machine learning model based on data set; Jain in Column 5 lines 16-47 discloses performing a machine learning task comprises a request to predict an outcome based on a plurality of data sets; Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 55 discloses machine learning task including accessing summary of aggregated data for a data set, using previously collected or generated data; Jain in Column 35 discloses data access requests include request to view, search, or download data, perform machine learning task based on the data; Jain in Column 38 lines 13-36, in Column 40 lines 9-31, and in Column 42 lines 24-50 discloses machine learning task results include a trained machine learning model for downloading summarized or aggregated data set, and predictions or inferences determined using a machine learning model). 
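A minimal sketch of the "result inference type model" recited in claim 10, assuming the model is reduced to a single stored statistic (a deliberate stand-in; a real system would use a trained model, and nothing here is drawn from Jain or any other cited reference):

```python
# Illustrative only: a "result inference type" model answers an aggregate
# query directly from learned parameters instead of scanning raw data.
# The stored mean below stands in for trained model parameters.

class ResultInferenceModel:
    def __init__(self, learned_mean: float):
        self.learned_mean = learned_mean  # stand-in for trained parameters

    def predict(self, query: str) -> float:
        # A real model would condition on the query's predicates/features;
        # this sketch returns the learned statistic unconditionally.
        return self.learned_mean

model = ResultInferenceModel(learned_mean=42.0)
approx = model.predict("SELECT AVG(price) FROM sales")
print(approx)  # 42.0 -- an approximate answer produced without raw-data access
```

The sketch is only meant to make concrete the two steps of claim 10: predicting a query result through an inference model, then performing the query processing using that prediction.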
With respect to claim 11, Fan in view of Kapp discloses an apparatus for processing an approximate query, comprising: an interface device configured to provide an approximate query language extension interface (Fan in [0003], [0004], [0023], and [0184] discloses determine query to retrieve exact answers, compute approximate query answers to query, determine cost associated with rewritten query based on query and access constraints, generate plan or rewritten one or more queries to retrieve information, processor executing instructions; Fan in [0028] discloses executing query plan to retrieve information); and a processor configured to perform query processing according to a user query input through the approximate query language extension interface, the user query being in a form of an extended query including information according to a user requirement (Fan in [0003], [0004], [0023], and [0184] discloses determine query to retrieve exact answers, compute approximate query answers to query, determine cost associated with rewritten query based on query and access constraints, generate plan or rewritten one or more queries to retrieve information, processor executing instructions; Fan in [0028] discloses executing query plan to retrieve information), wherein the processor includes:… a query transformer configured to generate a basic execution plan based on the…result and generate a plurality of executable candidate execution plans based on the basic execution plan (Fan in [0003], [0004], [0023], and [0184] discloses determine query to retrieve exact answers, compute approximate query answers to query, determine cost associated with rewritten query based on query and access constraints, generate plan or rewritten one or more queries to retrieve information, processor executing instructions; Fan in [0028] discloses executing query plan to retrieve information; here Fan does not explicitly disclose parsing a user query, but the Kapp reference discloses the feature, as 
discussed below); a query optimizer configured to select an optimal final execution plan reflecting the user requirement from among the plurality of executable candidate execution plans (Fan in [0003], [0004], [0023], and [0184] discloses determine query to retrieve exact answers, compute approximate query answers to query, determine cost associated with rewritten query based on query and access constraints, generate plan or rewritten one or more queries to retrieve information, processor executing instructions; Fan in [0028] discloses executing query plan to retrieve information); and a query executor configured to perform the query processing on the user query based on the final execution plan (Fan in [0003], [0004], [0023], and [0184] discloses determine query to retrieve exact answers, compute approximate query answers to query, determine cost associated with rewritten query based on query and access constraints, generate plan or rewritten one or more queries to retrieve information, processor executing instructions; Fan in [0028] discloses executing query plan to retrieve information). Fan discloses processing a user query and returning answers or results, however, Fan does not explicitly disclose: a query parser configured to parse the user query; …generate a basic execution plan based on the parsing result and generate a plurality of executable candidate execution plans based on the basic execution plan;… wherein the generating of the plurality of executable candidate execution plans comprises: generating a first candidate execution plan…; generating a second candidate execution plan…for query processing; and generating a third candidate execution plan…. 
The Kapp reference discloses a query parser configured to parse the user query, generating a basic execution plan based on the parsing result and generating a plurality of executable candidate execution plans based on the basic execution plan, wherein the generating of the plurality of executable candidate execution plans comprises: generating a first candidate execution plan, generating a second candidate execution plan for query processing, and generating a third candidate execution plan (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string; Kapp in [0226] and [0231] discloses query optimization generates one or more different candidate execution plans for a query, which are evaluated by a query optimizer to determine which execution plan should be used to compute the query, query optimizer optimizes a query by transforming the query by rewriting another semantically equivalent query that produces the same result and can be executed more efficiently and for which a more efficient and less costly execution plan can be generated). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan and Kapp, to have combined Fan and Kapp. The motivation to combine Fan and Kapp would be to identify tokens from an input string to construct an intermediate representation by using a parser (Kapp: [0191]). 
Fan discloses generating an execution plan and Kapp discloses generating a plurality of executable candidate execution plans and reusing previously generated tables for executions, however, Fan and Kapp do not explicitly disclose: generating a first candidate execution plan using a first machine learning model that infers a query prediction result of the user query; generating a second candidate execution plan using a…machine learning model…for query processing; and generating a third candidate execution plan by reusing a previously generated…. The Kandukuri reference discloses generating a first candidate execution plan using a first machine learning model that infers a query prediction result of the user query and generating a second candidate execution plan using a machine learning model for query processing and generating a third candidate execution plan by reusing a previously generated (Kandukuri in Column 9 line 37 – Column 10 line 17 and in Column 12 line 33 – Column 13 line 12 discloses identify execution plan for a query, based on the query and the execution plan predicting a processing complexity of the query, processing complexity corresponding to a time to complete the query and resources required to execute the query, using a trained machine learning model as part of predicting the processing complexity of a query, a machine learning model trained based on a history of queries executed comprising results from those queries in terms of processing time, bandwidth used, size of the results, machine learning model may learn how various aspects of queries affect the processing complexity of those queries, providing as input to the trained machine learning model the determined execution plan and receive a prediction of the processing complexity of the received query). 
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan, Kapp, and Kandukuri, to have combined Fan, Kapp, and Kandukuri. The motivation to combine Fan, Kapp, and Kandukuri would be to receive a prediction of a processing complexity of a received query by training a machine learning model with various aspects of a plurality of queries (Kandukuri: Column 12 line 63 – Column 13 line 12). Fan discloses generating an execution plan, Kapp discloses generating a plurality of executable candidate execution plans and reusing previously generated tables for executions, and Kandukuri discloses generating candidate execution plans using a machine learning model, however, Fan, Kapp, and Kandukuri do not explicitly disclose: …using a first machine learning model that infers a query prediction result of the user query; …using a second machine learning model that generates a synopsis, which is synthesized data, for query processing; and generating…by reusing a previously generated synopsis. 
The Jain reference discloses using a first machine learning model that infers a query prediction result of the user query, using a second machine learning model that generates a synopsis, which is synthesized data, for query processing, and generating by reusing a previously generated synopsis (Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, request to perform machine learning task comprises request to train a machine learning model based on data set; Jain in Column 5 lines 16-47 discloses performing a machine learning task by accessing a machine learning model configured to predict an outcome specified in a request, applying the machine learning model to one or more records included in a data set, and generating output data corresponding to the outcome predicted by the machine learning model based on applying the model to one or more records included in the data set; Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 9 discloses machine learning task including accessing summary of aggregated data for a data set; Jain in Column 24 lines 41-55, in Column 38 lines 13-36, in Column 40 lines 9-31, and in Column 42 lines 24-50 discloses training the model using aggregate data, machine learning task results include a trained machine learning model for downloading summarized or aggregated data set, and predictions or inferences determined using a machine learning model). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan, Kapp, Kandukuri, and Jain, to have combined Fan, Kapp, Kandukuri, and Jain. 
The motivation to combine Fan, Kapp, Kandukuri, and Jain would be to provide differentiated access to collected data based on applied data policies using machine learning models (Jain: Column 1 lines 7-10 and Column 1 line 55 – Column 2 line 3). With respect to claim 14, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the apparatus of claim 11, wherein the query optimizer is configured to select a candidate execution plan that satisfies the user requirement from among the plurality of executable candidate execution plans, and calculate query processing costs for each candidate execution plan and select a candidate execution plan having a minimum query processing cost as a final execution plan when the number of selected candidate execution plans is plural (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string; Kapp in [0226] and [0231] discloses query optimization generates one or more different candidate execution plans for a query, query optimizer optimizes a query by transforming the query by rewriting another semantically equivalent query that produces the same result and can be executed more efficiently and for which a more efficient and less costly execution plan can be generated). 
With respect to claim 17, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the apparatus of claim 11, wherein the query executor is configured to perform: an operation of accessing raw data to perform the query processing according to the final execution plan, when it is determined that the user query is an exact query based on the parsing result (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string; Kapp in [0226] and [0231] discloses query optimization generates one or more different candidate execution plans for a query, query optimizer optimizes a query by transforming the query by rewriting another semantically equivalent query that produces the same result and can be executed more efficiently and for which a more efficient and less costly execution plan can be generated); and an operation of accessing synopsis data, which is synthesized data acquired from the raw data, or prediction result generated by inferring a prediction result of the query to perform the query processing according to the optimal execution plan, when it is determined that the user query is an approximate query based on the parsing result (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string; Kapp in [0226] and [0231] discloses query optimization generates one or more different candidate execution plans for a query, query optimizer optimizes a query by transforming the query by rewriting another semantically equivalent query that produces the same result and can be executed more efficiently and for which a more efficient and less costly execution plan can be generated).

With respect to claim 18, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the apparatus of claim 17, wherein, in the case of the operation of accessing the synopsis data to perform the query processing according to the optimal execution plan, the query executor is configured to perform: an operation of generating synopsis data based on a machine learning model and performing the query processing using the generated synopsis data (Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, request to perform machine learning task comprises request to train a machine learning model based on data set; Jain in Column 5 lines 16-47 discloses performing a machine learning task comprises a request to predict an outcome based on a plurality of data sets; Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 55 discloses machine learning task including accessing summary of aggregated data for a data set, using previously collected or generated data; Jain in Column 35 discloses data access requests include request to view, search, or download data, perform machine learning task based on the data; Jain in Column 38 lines 13-36, in Column 40 lines 9-31, and in Column 42 lines 24-50 discloses machine learning task results include a trained machine learning model for downloading summarized or aggregated data set, and predictions or inferences determined using a machine learning model); and an operation of performing the query processing using pre-generated synopsis data according to a syntax in a previous query form (Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, request to perform machine learning task comprises request to train a machine learning model based on data set; Jain in Column 5 lines 16-47 discloses performing a machine learning task comprises a request to predict an outcome based on a plurality of data sets; Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 55 discloses machine learning task including accessing summary of aggregated data for a data set, using previously collected or generated data; Jain in Column 35 discloses data access requests include request to view, search, or download data, perform machine learning task based on the data; Jain in Column 38 lines 13-36, in Column 40 lines 9-31, and in Column 42 lines 24-50 discloses machine learning task results include a trained machine learning model for downloading summarized or aggregated data set, and predictions or inferences determined using a machine learning model).

With respect to claim 20, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the apparatus of claim 17, wherein, in the case of the operation of accessing the prediction result to perform the query processing according to the optimal execution plan, the query executor is configured to perform: an operation of predicting a query prediction result through a result inference type model (Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, request to perform machine learning task comprises request to train a machine learning model based on data set; Jain in Column 5 lines 16-47 discloses performing a machine learning task comprises a request to predict an outcome based on a plurality of data sets; Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 55 discloses machine learning task including accessing summary of aggregated data for a data set, using previously collected or generated data; Jain in Column 35 discloses data access requests include request to view, search, or download data, perform machine learning task based on the data; Jain in Column 38 lines 13-36, in Column 40 lines 9-31, and in Column 42 lines 24-50 discloses machine learning task results include a trained machine learning model for downloading summarized or aggregated data set, and predictions or inferences determined using a machine learning model); and an operation of performing the query processing using the query prediction result (Jain in Column 1 line 55 – Column 2 line 3 and in Column 4 lines 29-32 discloses machine learning models trained with data sets, performing machine learning tasks and data analysis for users, request to perform machine learning task comprises request to train a machine learning model based on data set; Jain in Column 5 lines 16-47 discloses performing a machine learning task comprises a request to predict an outcome based on a plurality of data sets; Jain in Column 14 lines 16-30 and Column 15 line 45 – Column 16 line 55 discloses machine learning task including accessing summary of aggregated data for a data set, using previously collected or generated data; Jain in Column 35 discloses data access requests include request to view, search, or download data, perform machine learning task based on the data; Jain in Column 38 lines 13-36, in Column 40 lines 9-31, and in Column 42 lines 24-50 discloses machine learning task results include a trained machine learning model for downloading summarized or aggregated data set, and predictions or inferences determined using a machine learning model).

Claim(s) 2, 3, 12, and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US Pub 2017/0277750) in view of Kapp (US Pub 2023/0267120) in view of Kandukuri (US Pat 11,494,413) and in further view of Jain (US Pat 11,315,041) and in further view of Jezewski (US Pub 2021/0374563).
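The exact-versus-approximate dispatch recited for claims 17, 18, and 20 (exact query: raw data under the final execution plan; approximate query: ML-generated synopsis data, reuse of a synopsis for a previously seen query form, or an inferred prediction result) can be sketched as follows. This is an illustrative reconstruction, not code from the application or any cited reference; all names (`ParseResult`, `QueryExecutor`, the callables passed in) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ParseResult:
    sql: str
    is_approximate: bool   # flagged by the parser from the (extended) query grammar
    signature: str         # normalized query form, keyed for synopsis reuse

class QueryExecutor:
    """Illustrative executor mirroring the dispatch of claims 17, 18, and 20."""

    def __init__(self, run_exact, build_synopsis, infer_result):
        self.run_exact = run_exact            # exact path: raw data, final plan
        self.build_synopsis = build_synopsis  # ML-model synopsis generation
        self.infer_result = infer_result      # result-inference-type model
        self.synopses = {}                    # pre-generated synopsis cache

    def execute(self, parsed, use_inference=False):
        if not parsed.is_approximate:
            # Claim 17, first operation: exact query -> access raw data
            # according to the final execution plan.
            return self.run_exact(parsed.sql)
        if use_inference:
            # Claim 20: predict the query result through the inference model.
            return self.infer_result(parsed.sql)
        # Claim 18: reuse synopsis data pre-generated for a previous query of
        # the same syntactic form; otherwise generate it via the ML model.
        if parsed.signature not in self.synopses:
            self.synopses[parsed.signature] = self.build_synopsis(parsed.sql)
        return ("approx", self.synopses[parsed.signature])
```

The synopsis cache keyed by a normalized query signature is one plausible reading of "pre-generated synopsis data according to a syntax in a previous query form"; the application may define the matching differently.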
With respect to claim 2, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the method of claim 1, wherein the approximate query language extension interface provides a query grammar extension function that allows a user… (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string)…; however, Fan, Kapp, Kandukuri, and Jain do not explicitly disclose: …allows a user to select desired accuracy and timeliness.

The Jezewski reference discloses allowing a user to select desired accuracy and timeliness (Jezewski in [0004] and [0232] discloses obtaining a problem statement from a user including required solution metrics, user interaction module presents an input to enter the problem statement and a set of filters to refine the problem statement, and an input for common and other solution metrics, such as solution finding time/cost, solution implementing time/cost, accuracy, using data sources generated from initial or previous queries etc. to a user; Jezewski in [0308] and [0309] discloses solution metrics can include optimization of error metrics). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan, Kapp, Kandukuri, Jain, and Jezewski, to have combined Fan, Kapp, Kandukuri, Jain, and Jezewski. The motivation to combine Fan, Kapp, Kandukuri, Jain, and Jezewski would be to automate problem solving for clearly defined problem statements by obtaining a problem statement from a user including required solution metrics (Jezewski: [0001] and [0004]).

With respect to claim 3, Fan in view of Kapp in view of Kandukuri in view of Jain and in further view of Jezewski discloses the method of claim 2, wherein the user requirement includes information on an error tolerance range corresponding to the accuracy and information on a query processing allowable time corresponding to the timeliness (Jezewski in [0004] and [0232] discloses obtaining a problem statement from a user including required solution metrics, user interaction module presents an input to enter the problem statement and a set of filters to refine the problem statement, and an input for common and other solution metrics, such as solution finding time/cost, solution implementing time/cost, accuracy, using data sources generated from initial or previous queries etc. to a user; Jezewski in [0308] and [0309] discloses solution metrics can include optimization of error metrics).

With respect to claim 12, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the apparatus of claim 11, wherein the approximate query language extension interface provides a query grammar extension function that allows a user… (Kapp in [0007] and [0047] discloses user specifying a query; Kapp in [0191] and [0198] discloses parser identifying tokens from an input query string to construct an intermediate representation, generating an SQL query, using language grammar to identify tokens in the input string)…; however, Fan, Kapp, Kandukuri, and Jain do not explicitly disclose: …allows a user to select desired accuracy and timeliness.

The Jezewski reference discloses allowing a user to select desired accuracy and timeliness (Jezewski in [0004] and [0232] discloses obtaining a problem statement from a user including required solution metrics, user interaction module presents an input to enter the problem statement and a set of filters to refine the problem statement, and an input for common and other solution metrics, such as solution finding time/cost, solution implementing time/cost, accuracy, using data sources generated from initial or previous queries etc. to a user; Jezewski in [0308] and [0309] discloses solution metrics can include optimization of error metrics). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan, Kapp, Kandukuri, Jain, and Jezewski, to have combined Fan, Kapp, Kandukuri, Jain, and Jezewski. The motivation to combine Fan, Kapp, Kandukuri, Jain, and Jezewski would be to automate problem solving for clearly defined problem statements by obtaining a problem statement from a user including required solution metrics (Jezewski: [0001] and [0004]).

With respect to claim 13, Fan in view of Kapp in view of Kandukuri in view of Jain and in further view of Jezewski discloses the apparatus of claim 12, wherein the user requirement includes information on an error tolerance range corresponding to the accuracy and information on a query processing allowable time corresponding to the timeliness (Jezewski in [0004] and [0232] discloses obtaining a problem statement from a user including required solution metrics, user interaction module presents an input to enter the problem statement and a set of filters to refine the problem statement, and an input for common and other solution metrics, such as solution finding time/cost, solution implementing time/cost, accuracy, using data sources generated from initial or previous queries etc. to a user; Jezewski in [0308] and [0309] discloses solution metrics can include optimization of error metrics).

Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US Pub 2017/0277750) in view of Kapp (US Pub 2023/0267120) in view of Kandukuri (US Pat 11,494,413) and in further view of Jain (US Pat 11,315,041) and in further view of Field (US Pat 11,461,351).

With respect to claim 19, Fan in view of Kapp in view of Kandukuri and in further view of Jain discloses the apparatus of claim 11; Kapp discloses a metadata storage unit configured to store and manage table and column information of data; however, Fan, Kapp, Kandukuri, and Jain do not explicitly disclose: a metadata storage unit configured to store and manage table and column information of raw data for accessing the raw data, an ML model, and a model instance. The Field reference discloses a metadata storage unit configured to store and manage table and column information of raw data for accessing the raw data, an ML model, and a model instance (Field in Column 1 lines 17-31 discloses processing and storing data for machine learning models within a database system; Field in Claim 1 discloses raw input data from source table provided by a machine learning development environment, source table comprising multiple rows and columns, datasets included in the raw data provided for machine learning models, accessing the machine learning development environment by plurality of users, generating table metadata corresponding to the source table based on the received input data). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Fan, Kapp, Kandukuri, Jain, and Field, to have combined Fan, Kapp, Kandukuri, Jain, and Field.
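For claims 2, 3, 12, and 13, the user requirement pairs an error tolerance range (accuracy) with an allowable processing time (timeliness), selected through a query grammar extension. A minimal sketch of such an extension follows; the clause syntax (`ERROR WITHIN …% TIME WITHIN …ms`), the function `parse_requirement`, and the `UserRequirement` fields are all hypothetical illustrations, not the grammar actually claimed.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserRequirement:
    error_tolerance: Optional[float]  # accuracy: acceptable error range (fraction)
    max_time_ms: Optional[int]        # timeliness: allowable processing time

# Hypothetical grammar extension clauses appended to an ordinary SQL query:
#   SELECT ... ERROR WITHIN 5% TIME WITHIN 2000ms
_ERROR = re.compile(r"\bERROR\s+WITHIN\s+([\d.]+)%", re.IGNORECASE)
_TIME = re.compile(r"\bTIME\s+WITHIN\s+(\d+)ms", re.IGNORECASE)

def parse_requirement(query: str):
    """Strip the extension clauses and return (core_sql, requirement)."""
    err = _ERROR.search(query)
    tim = _TIME.search(query)
    req = UserRequirement(
        error_tolerance=float(err.group(1)) / 100 if err else None,
        max_time_ms=int(tim.group(1)) if tim else None,
    )
    # Remove the extension clauses so the core SQL can be planned as usual;
    # absent clauses leave both requirement fields unset (an exact query).
    core = " ".join(_TIME.sub("", _ERROR.sub("", query)).split())
    return core, req
```

Under this sketch, the downstream planner would treat a query carrying either clause as an approximate query and pick an execution plan satisfying both bounds.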
The motivation to combine Fan, Kapp, Kandukuri, Jain, and Field would be to efficiently process datasets for machine learning by processing and storing data for machine learning models within a database system (Field: Column 1, lines 17-31).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to REZWANUL MAHMOOD, whose telephone number is (571) 272-5625. The examiner can normally be reached M-F 9-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann J. Lo, can be reached at 571-272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/R.M/
Examiner, Art Unit 2159

/ANN J LO/
Supervisory Patent Examiner, Art Unit 2159

Prosecution Timeline

Apr 26, 2023
Application Filed
May 31, 2025
Non-Final Rejection — §101, §103
Sep 09, 2025
Response Filed
Dec 13, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579192
PROMISE KEYS FOR RESULT CACHES OF DATABASE SYSTEMS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12548309
LABEL INHERITANCE FOR SOFT LABEL GENERATION IN INFORMATION PROCESSING SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12541537
DEVICE DISCOVERY SYSTEM
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12524465
SYSTEMS AND METHODS FOR BROWSER EXTENSIONS AND LARGE LANGUAGE MODELS FOR INTERACTING WITH VIDEO STREAMS
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12450226
EFFICIENTLY ANALYZING TRACE DATA
Granted Oct 21, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
46%
Grant Probability
81%
With Interview (+34.7%)
4y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 402 resolved cases by this examiner. Grant probability derived from career allow rate.
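The projection figures above appear to combine the career allow rate with the interview lift additively (46% base, +34.7% lift, 81% with interview). A minimal sketch of that arithmetic, assumed since the page does not state its exact methodology:

```python
# Illustrative arithmetic only; the page's precise model is not disclosed.
career_allow_rate = 186 / 402          # granted / resolved cases ≈ 0.463
interview_lift = 0.347                 # observed lift when an interview occurs
with_interview = career_allow_rate + interview_lift

print(round(career_allow_rate * 100))  # matches the displayed 46%
print(round(with_interview * 100))     # matches the displayed 81%
```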
