Prosecution Insights
Last updated: April 18, 2026
Application No. 18/812,799

METHOD FOR EVALUATING LANGUAGE MODEL AND ELECTRONIC DEVICE

Non-Final OA (§101, §103)
Filed: Aug 22, 2024
Examiner: HASSAN, ALI MOHAMAD
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: Pegatron Corporation
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (7 granted / 10 resolved), +8.0% vs TC average (above average)
Interview Lift: +33.3% in resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 19 applications currently pending
Career History: 29 total applications across all art units

Statute-Specific Performance

§101: 30.8% (-9.2% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 10 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority

Receipt is acknowledged that this application claims priority to foreign application No. TW112148170 dated 12/11/2023. Copies of certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDSs dated 8/22/2024 and 10/14/2024 have been considered and placed in the application file.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claims 1 and 6: claim 1 recites A method for evaluating a language model, executed by an electronic device, comprising: obtaining a prompt and a reference query syntax corresponding to the prompt; obtaining a first reference query result by querying a database with the reference query syntax and organizing the first reference query result into a second reference query result presented in a preset format based on the preset format; obtaining a first query syntax generated by a first language model in response to the prompt; obtaining a first query result by querying the database with the first query syntax and organizing the first query result into a second query result presented in the preset format based on the preset format; evaluating a first validity of the first query syntax provided by the first language model based on whether the second query result completely comprises the second reference query result.
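For orientation, the evaluation flow recited in claim 1 can be sketched as follows. This is a minimal illustration only, not the applicant's disclosed implementation; `ToyDatabase`, `run_query`, and `to_preset_format` are hypothetical names introduced for the sketch.

```python
# Illustrative sketch of the claim 1 evaluation flow (hypothetical names;
# not the applicant's disclosed implementation).

class ToyDatabase:
    """Minimal stand-in for the queried database."""
    def __init__(self, tables):
        self.tables = tables

    def run_query(self, syntax):
        # For this sketch, a "query syntax" is simply a key into the tables.
        return self.tables[syntax]

def to_preset_format(rows):
    # Organize a raw query result into a preset (query target -> data) format.
    return {target: data for target, data in rows}

def evaluate_first_validity(db, reference_syntax, model_syntax):
    # First/second reference query result, per the claim wording.
    reference = to_preset_format(db.run_query(reference_syntax))
    # First/second query result from the model-generated syntax.
    result = to_preset_format(db.run_query(model_syntax))
    # Valid only if the model's result "completely comprises" the reference
    # result, i.e., every reference (target, data) pair is present.
    return all(result.get(target) == data for target, data in reference.items())
```

On this reading of "completely comprises," extra rows in the model's result do not defeat validity; only a missing or differing reference pair does.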
Claim 6 similarly recites a storage circuit storing a program code; and a processor coupled to the storage circuit and accessing the program code to execute the same operations.

The "obtaining" and "evaluating" limitations, as drafted, describe a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, consider a librarian receiving a prompt asking whether a book is available. The librarian would enter the book information a certain way to retrieve the information from the database, entering the author, title, and publication date in a certain order for the database to recognize it. The librarian can have an example of the format and syntax when she searches for something. Further, she can search the title of the book and see if it is available. When she receives these results, she would then see if they both gave a similar answer. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements that are computer components, a "processor" (paragraph 48) and "memory" (paragraph 47), recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using the computer components amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

Claim 2 additionally recites The method of claim 1, wherein the step of evaluating the first validity of the first query syntax provided by the first language model based on whether the second query result completely comprises the second reference query result comprises: determining that the first query syntax provided by the first language model is valid in response to determining that the second query result completely comprises the second reference query result; and determining that the first query syntax provided by the first language model is invalid in response to determining that the second query result does not completely comprise the second reference query result. However, this limitation does not prevent a human from performing the steps mentally as described above: the librarian would see whether the results of searching the title and of the syntax search (author, title, publication date) give a similar answer. Thus, these claims are directed to a mental process. As above, no additional limitations are provided that supply a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claim 3 additionally recites The method of claim 2, wherein the second reference query result comprises at least one reference data combination, each of the reference data combinations comprises a first reference query target and a first reference data corresponding to the first reference query target, the second query result comprises at least one first data combination, and each of the first data combinations comprises a first query target and a first data corresponding to the first query target. However, these limitations encompass the librarian having the query and the result of whether the book is available as a pair; when the search is complete, she could write down the query (the syntax query or the title of the book) together with the result of where the book is, in order to retrieve it. Thus, these claims are directed to a mental process. As above, no additional limitations are provided that supply a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.

Claim 4 additionally recites The method of claim 1, wherein the reference query syntax is a correct database query syntax designed to query the database in response to the prompt. However, these limitations encompass the librarian inputting the book information into the system in a certain way (a card catalog). Thus, these claims are directed to a mental process. As above, no additional limitations are provided that supply a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.
Claim 5 additionally recites The method of claim 1, further comprising: obtaining a second query syntax generated by a second language model in response to the prompt; obtaining a third query result by querying the database with the second query syntax and organizing the third query result into a fourth query result presented in the preset format based on the preset format; evaluating a second validity of the second query syntax provided by the second language model based on whether the fourth query result completely comprises the second reference query result; determining a comparison result of the first language model and the second language model by comparing the first validity and the second validity. However, these limitations encompass the same librarian example described above: she would enter the book information a certain way to retrieve it from the database, could have an example of the format and syntax when she searches for something, could also search the title of the book and see if it is available, and, when she receives these results, would then see if they both gave a similar answer. Thus, these claims are directed to a mental process. As above, no additional limitations are provided that supply a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not patent eligible.

Claim 7 contains limitations similar to those found in claim 2 and is therefore not patent eligible for the same reasons. Claim 8 contains limitations similar to those found in claim 3 and is therefore not patent eligible for the same reasons. Claim 9 contains limitations similar to those found in claim 4 and is therefore not patent eligible for the same reasons.
Claim 10 contains limitations similar to those found in claim 5 and is therefore not patent eligible for the same reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4, 6, 7, and 9 are rejected under 35 U.S.C. 103 as obvious over Gao, Dawei, et al., "Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation," arXiv preprint arXiv:2308.15363 (2023) (Gao), in view of U.S. Patent Application Publication US 2023/0129994 A1 (Mukherjee).
Claim 1

Regarding claim 1, Gao teaches A method for evaluating a language model, executed by an electronic device, comprising: obtaining a prompt and a reference query syntax corresponding to the prompt; (page 4, Section 3.2, in-context learning for Text-to-SQL: "In Text-to-SQL, given a set of triples Q = {(𝑞𝑖,𝑠𝑖, D𝑖)}, where 𝑞𝑖 and 𝑠𝑖 are natural language question and its corresponding SQL query on database D𝑖, the target of in-context learning for Text-to-SQL is to maximize the possibility of LLM M generating the correct SQL 𝑠∗ on the target question 𝑞 and database D as follows:") obtaining a first reference query result by querying a database with the reference query syntax and organizing the first reference query result into a second reference query result presented in a preset format based on the preset format; (page 6, Section 4.1, setting, metric: "To make a fair comparison, we follow prior study [61] to use exact-set-match accuracy (EM) and execution accuracy (EX). The exact-set-match accuracy measures the matched SQL keywords between the predicted SQL query and its corresponding ground truth. The execution accuracy, on the other hand, compares the execution output of the predicted SQL query with that of the ground truth SQL query on some database instances." Format is being interpreted as a default output.) obtaining a first query syntax generated by a first language model in response to the prompt; (page 3, Section 3.1, question representation, basic prompt: "Basic Prompt (BS𝑃). Basic Prompt [37] is a simple representation shown in Listing 1. It is consisted of table schemas, natural language question prefixed by "Q: " and a response prefix "A: SELECT" to prompt LLM to generate SQL. In this paper we named it as Basic Prompt due to its absence of instructions."
Page 4 section 3.2 in context learning for text-to-SQL "In Text-to-SQL, given a set of triples Q = {(𝑞𝑖,𝑠𝑖, D𝑖)}, where 𝑞𝑖 and 𝑠𝑖 are natural language question and its corresponding SQL query on database D𝑖, the target of in-context learning for Text to-SQL is to maximize the possibility of LLM M generating the correct SQL 𝑠∗ on the target question 𝑞 and database D as follows:" page 1 section 1 introduction " Different from prior studies, the core problem in LLM-based Text-to-SQL solution is how to prompt LLM to generate correct SQL queries, namely prompt engineering. Such prompt engineering involves question representations [7,13,33,37], examples selection [14, 28, 29], and example organization [14].") obtaining a first query result by querying the database with the first query syntax and organizing the first query result into a second query result presented in the preset format based on the preset format; (Page 6 section 4.1 setting: metric: " The execution accuracy, on the other hand, compares the execution output of the predicted SQL query with that of the ground truth SQL query on some database instances. " Format is being interpreted as a default output ) evaluating a first validity of the first query syntax provided by the first language model based on whether the second query result completely comprises the second reference query result. (does not teach the bold) (page 6 section 4.1 setting: metric: " To make a fair comparison, we follow prior study [61] to use exact-set-match accuracy (EM) and execution accuracy (EX). The exact-set-match accuracy measures the matched SQL keywords between the predicted SQL query and its corresponding ground truth. The execution accuracy, on the other hand, compares the execution output of the predicted SQL query with that of the ground truth SQL query on some database instances. 
This metric provides a more precise estimate of the model’s performance …")

Gao does not explicitly teach all of evaluating a first validity of the first query syntax provided by the first language model based on whether the second query result completely comprises the second reference query result. (the bolded) However, Mukherjee teaches evaluating a first validity of the first query syntax provided by the first language model based on whether the second query result completely comprises the second reference query result. (Paragraph 26: "In accordance with one or more embodiments of the disclosure, a computing platform comprising at least one processor, a communication interface, and memory storing computer-readable instructions may receive a source query, which may be formatted in a first format for execution on a source database. The computing platform may execute the source query on the source database to produce a first data result. The computing platform may input the first data result into a reversal logic engine to produce a target query formatted in a second format corresponding to a target database. The computing platform may execute the target query on the target database to produce a second data result. The computing platform may compare the second data result to the first data result to identify whether or not the second data result matches the first data result. Based on identifying that the second data result matches the first data result, the computing platform may validate the target query.
Based on identifying that the second data result does not match the first data result, the computing platform may adjust the reversal logic engine based on a discrepancy between the second data result and the first data result.")

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao to incorporate the teachings of Mukherjee to provide "evaluating a first validity of the first query syntax provided by the first language model based on whether the second query result completely comprises the second reference query result." Doing so would ensure correctness of the generated result, as recognized by Mukherjee (Paragraph 26).

Claim 6

Regarding claim 6, Gao in view of Mukherjee, and further Gao, teaches a storage circuit storing a program code; and (Page 1, Section 1, introduction: "Different from prior studies, the core problem in LLM-based Text-to-SQL solution is how to prompt LLM to generate correct SQL queries, namely prompt engineering. Such prompt engineering involves question representations [7,13,33,37], examples selection [14, 28, 29], and example organization [14]." page 4, Section 3.2, in-context learning for Text-to-SQL: "In Text-to-SQL, given a set of triples Q = {(𝑞𝑖,𝑠𝑖, D𝑖)}, where 𝑞𝑖 and 𝑠𝑖 are natural language question and its corresponding SQL query on database D𝑖, the target of in-context learning for Text-to-SQL is to maximize the possibility of LLM M generating the correct SQL 𝑠∗ on the target question 𝑞 and database D as follows:" It would be inherent to have a program code, processor, and memory) a processor coupled to the storage circuit and accessing the program code to execute: (Page 1, Section 1, introduction: "Different from prior studies, the core problem in LLM-based Text-to-SQL solution is how to prompt LLM to generate correct SQL queries, namely prompt engineering.
Such prompt engineering involves question representations [7,13,33,37], examples selection [14, 28, 29], and example organization [14]." page 4, Section 3.2, in-context learning for Text-to-SQL: "In Text-to-SQL, given a set of triples Q = {(𝑞𝑖,𝑠𝑖, D𝑖)}, where 𝑞𝑖 and 𝑠𝑖 are natural language question and its corresponding SQL query on database D𝑖, the target of in-context learning for Text-to-SQL is to maximize the possibility of LLM M generating the correct SQL 𝑠∗ on the target question 𝑞 and database D as follows:" It would be inherent to have a program code, processor, and memory)

Claim 6 contains limitations similar to those found in claim 1 and is therefore rejected for the same reasons.

Claims 2 and 7

Regarding claims 2 and 7, Gao in view of Mukherjee, and further Gao, teaches The method of claim 1, wherein the step of evaluating the first validity of the first query syntax provided by the first language model based on whether the second query result completely comprises the second reference query result comprises: determining that the first query syntax provided by the first language model is valid in response to determining that the second query result completely comprises the second reference query result; and (except the bolded) (page 6, Section 4.1, setting, metric: "To make a fair comparison, we follow prior study [61] to use exact-set-match accuracy (EM) and execution accuracy (EX). The exact-set-match accuracy measures the matched SQL keywords between the predicted SQL query and its corresponding ground truth. The execution accuracy, on the other hand, compares the execution output of the predicted SQL query with that of the ground truth SQL query on some database instances.
This metric provides a more precise estimate of the model’s performance …" ) determining that the first query syntax provided by the first language model is invalid in response to determining that the second query result does not completely comprise the second reference query result. (except the bolded) (page 6 section 4.1 setting: metric: " To make a fair comparison, we follow prior study [61] to use exact-set-match accuracy (EM) and execution accuracy (EX). The exact-set-match accuracy measures the matched SQL keywords between the predicted SQL query and its corresponding ground truth. The execution accuracy, on the other hand, compares the execution output of the predicted SQL query with that of the ground truth SQL query on some database instances. This metric provides a more precise estimate of the model’s performance …" ) Gao in view of Mukherjee, further Mukherjee teaches determining that the first query syntax provided by the first language model is valid in response to determining that the second query result completely comprises the second reference query result; and(Paragraph 26 "In accordance with one or more embodiments of the disclosure, a computing platform comprising at least one processor, a communication interface, and memory storing computer-readable instructions may receive a source query, which may be formatted in a first format for execution on a source database. The computing platform may execute the source query on the source database to produce a first data result. The computing platform may input the first data result into a reversal logic engine to produce a target query formatted in a second format corresponding to a target database. The computing platform may execute the target query on the target database to produce a second data result. The computing platform may compare the second data result to the first data result to identify whether or not the second data result matches the first data result. 
Based on identifying that the second data result matches the first data result, the computing platform may validate the target query. Based on identifying that the second data result does not match the first data result, the computing platform may adjust the reversal logic engine based on a discrepancy between the second data result and the first data result.") determining that the first query syntax provided by the first language model is invalid in response to determining that the second query result does not completely comprise the second reference query result.(Paragraph 26 "In accordance with one or more embodiments of the disclosure, a computing platform comprising at least one processor, a communication interface, and memory storing computer-readable instructions may receive a source query, which may be formatted in a first format for execution on a source database. The computing platform may execute the source query on the source database to produce a first data result. The computing platform may input the first data result into a reversal logic engine to produce a target query formatted in a second format corresponding to a target database. The computing platform may execute the target query on the target database to produce a second data result. The computing platform may compare the second data result to the first data result to identify whether or not the second data result matches the first data result. Based on identifying that the second data result matches the first data result, the computing platform may validate the target query. Based on identifying that the second data result does not match the first data result, the computing platform may adjust the reversal logic engine based on a discrepancy between the second data result and the first data result.") See claim one for rationale. 
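The Mukherjee passage cited for claims 2 and 7 (paragraph 26) describes a round-trip check: execute both queries, compare the two data results, validate the target query on a match, and otherwise feed the discrepancy back to the reversal logic engine. A minimal sketch of that comparison logic, with hypothetical names (`adjust_engine` stands in for the reversal logic engine's adjustment hook):

```python
def validate_target_query(source_result, target_result, adjust_engine):
    # Compare the second data result to the first data result
    # (per the quoted paragraph 26 of Mukherjee).
    if target_result == source_result:
        # Results match: the target query is validated.
        return "validated"
    # Results differ: pass the discrepancy to the (hypothetical)
    # reversal logic engine so it can be adjusted.
    adjust_engine({"expected": source_result, "actual": target_result})
    return "adjusted"
```

The validate-or-adjust branching is what the Office Action maps onto the claimed valid/invalid determination.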
Claims 4 and 9

Regarding claims 4 and 9, Gao in view of Mukherjee, and further Gao, teaches The method of claim 1, wherein the reference query syntax is a correct database query syntax designed to query the database in response to the prompt. (Section 4, experiment, datasets: "Each instance is consisted of a natural language question on a specific database and its corresponding SQL query.")

Claims 3 and 8 are rejected under 35 U.S.C. 103 as obvious over Gao, Dawei, et al., "Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation," arXiv preprint arXiv:2308.15363 (2023), in view of U.S. Patent Application Publication US 2023/0129994 A1 (Mukherjee), in further view of Saeed, Mohammed, Nicola De Cao, and Paolo Papotti, "Querying Large Language Models with SQL," arXiv preprint arXiv:2304.00472 (2023) (Saeed).

Claims 3 and 8

Regarding claims 3 and 8, Gao in view of Mukherjee does not explicitly teach all of The method of claim 2, wherein the second reference query result comprises at least one reference data combination, each of the reference data combinations comprises a first reference query target and a first reference data corresponding to the first reference query target, the second query result comprises at least one first data combination, and each of the first data combinations comprises a first query target and a first data corresponding to the first query target. However, Saeed teaches these limitations. (see Figure 1: "Querying a pre-trained LLM with SQL is different from question answering (QA).
We assume a user SQL query as input. Galois executes the query, and obtains relations, by retrieving data from a LLM (1). The corresponding QA task consumes and produces natural language text (2)." page 2, Section 2, background: "Indeed, QA systems are optimized for answering questions with a text, while SQL queries return results in the form of tuples, possibly with complex operations to combine intermediate values, such as aggregates, where LLMs fail short [45]." Page 5, Section 5, experiments, evaluation: "For Galois, all output relations have the expected schema, this is obtained by construction from the execution of the query plan, i.e., every 𝑅𝑀 has the same schema as every 𝑅𝐷." Here 𝑅𝐷 is the ground-truth result and 𝑅𝑀 the generated result; a reference data combination maps to a row from the reference (ground truth) result, and a first data combination maps to a row from the model's result.)

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao in view of Mukherjee to incorporate the teachings of Saeed to provide the method of claim 2, wherein the second reference query result comprises at least one reference data combination, each of the reference data combinations comprises a first reference query target and a first reference data corresponding to the first reference query target, the second query result comprises at least one first data combination, and each of the first data combinations comprises a first query target and a first data corresponding to the first query target. Doing so would have answers in a consistent schema, as recognized by Saeed (Page 5, Section 5, experiments, evaluation).

Claims 5 and 10 are rejected under 35 U.S.C. 103 as obvious over Gao, Dawei, et al., "Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation," arXiv preprint arXiv:2308.15363 (2023), in view of U.S. Patent Application Publication US 2023/0129994 A1 (Mukherjee),
in further view of U.S. Patent US 12554709 B2 (Sun).

Claims 5 and 10

Regarding claims 5 and 10, Gao teaches The method of claim 1, further comprising: obtaining a second query syntax generated by a second language model in response to the prompt; (except the bolded) (page 4, Section 3.2, in-context learning for Text-to-SQL: "In Text-to-SQL, given a set of triples Q = {(𝑞𝑖,𝑠𝑖, D𝑖)}, where 𝑞𝑖 and 𝑠𝑖 are natural language question and its corresponding SQL query on database D𝑖, the target of in-context learning for Text-to-SQL is to maximize the possibility of LLM M generating the correct SQL 𝑠∗ on the target question 𝑞 and database D as follows:") obtaining a third query result by querying the database with the second query syntax and organizing the third query result into a fourth query result presented in the preset format based on the preset format; (page 6, Section 4.1, setting, metric: "To make a fair comparison, we follow prior study [61] to use exact-set-match accuracy (EM) and execution accuracy (EX). The exact-set-match accuracy measures the matched SQL keywords between the predicted SQL query and its corresponding ground truth. The execution accuracy, on the other hand, compares the execution output of the predicted SQL query with that of the ground truth SQL query on some database instances." Format is being interpreted as a default output.) evaluating a second validity of the second query syntax provided by the second language model based on whether the fourth query result completely comprises the second reference query result; (except the bolded) (page 6, Section 4.1, setting, metric: "To make a fair comparison, we follow prior study [61] to use exact-set-match accuracy (EM) and execution accuracy (EX). The exact-set-match accuracy measures the matched SQL keywords between the predicted SQL query and its corresponding ground truth.
The execution accuracy, on the other hand, compares the execution output of the predicted SQL query with that of the ground truth SQL query on some database instances. This metric provides a more precise estimate of the model’s performance …")

Gao does not explicitly teach all of obtaining a second query syntax generated by a second language model in response to the prompt; (the bolded) evaluating a second validity of the second query syntax provided by the second language model based on whether the fourth query result completely comprises the second reference query result; (bolded) and determining a comparison result of the first language model and the second language model by comparing the first validity and the second validity.

However, Mukherjee teaches completely comprises the second reference query result. (Paragraph 26: "In accordance with one or more embodiments of the disclosure, a computing platform comprising at least one processor, a communication interface, and memory storing computer-readable instructions may receive a source query, which may be formatted in a first format for execution on a source database. The computing platform may execute the source query on the source database to produce a first data result. The computing platform may input the first data result into a reversal logic engine to produce a target query formatted in a second format corresponding to a target database. The computing platform may execute the target query on the target database to produce a second data result. The computing platform may compare the second data result to the first data result to identify whether or not the second data result matches the first data result.
Based on identifying that the second data result matches the first data result, the computing platform may validate the target query. Based on identifying that the second data result does not match the first data result, the computing platform may adjust the reversal logic engine based on a discrepancy between the second data result and the first data result.") determining a comparison result of the first language model and the second language model by comparing the first validity and the second validity. (Paragraph 26 "In accordance with one or more embodiments of the disclosure, a computing platform comprising at least one processor, a communication interface, and memory storing computer-readable instructions may receive a source query, which may be formatted in a first format for execution on a source database. The computing platform may execute the source query on the source database to produce a first data result. The computing platform may input the first data result into a reversal logic engine to produce a target query formatted in a second format corresponding to a target database. The computing platform may execute the target query on the target database to produce a second data result. The computing platform may compare the second data result to the first data result to identify whether or not the second data result matches the first data result. Based on identifying that the second data result matches the first data result, the computing platform may validate the target query. Based on identifying that the second data result does not match the first data result, the computing platform may adjust the reversal logic engine based on a discrepancy between the second data result and the first data result.") See claim one for rationale. 
Gao in view of Mukherjee do not explicitly teach all of obtaining a second query syntax generated by a second language model in response to the prompt; a second validity of the second query syntax provided by the second language model based on whether the fourth query result However, Sun teach obtaining a second query syntax generated by a second language model in response to the prompt;(col2 lines 55-64 "In an example, converting the natural language query into a database language query includes: generating various database description prompts; sampling one or more large language models (LLMs) multiple times with the various database description prompts to generate a plurality of potential database language queries; executing the plurality of potential database language queries to generate a plurality of potential results; and selecting the database language query that provides a result consistent with a threshold amount of the plurality of potential results.") a second validity of the second query syntax provided by the second language model based on whether the fourth query result (col2 lines 55-64 "In an example, converting the natural language query into a database language query includes: generating various database description prompts; sampling one or more large language models (LLMs) multiple times with the various database description prompts to generate a plurality of potential database language queries; executing the plurality of potential database language queries to generate a plurality of potential results; and selecting the database language query that provides a result consistent with a threshold amount of the plurality of potential results." Col 6 lines 18-23 "The execution engine 112 can be configured to execute the potential database query languages to generate corresponding potential results. The execution engine 112 can be further configured to remove errors from the corresponding results. 
The execution engine 112 can also be configured to concatenate the corresponding results.") It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao in view of Mukherjee to incorporate the teachings of Sun to provide a “obtaining a second query syntax generated by a second language model in response to the prompt; a second validity of the second query syntax provided by the second language model based on whether the fourth query result ” Doing so would add robustness against individual model failures, as recognized by Sun . (col2 lines 55-64 & col 6 lines 24-34). Reference Cited The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. US Patent Publication US 20250005018 A1 to Liu; Haocheng discloses generating SQL by prompts. US Patent Publication US 20210056108 A1 to SHMUELI; Oded discloses evaluating relationships between query results using containment. A. Attawar, S. Vora, P. Narechania, V. Sawant and H. Vora, "NLSQL: Generating and Executing SQL Queries via Natural Language Using Large Language Models," 2023 International Conference on Advanced Computing Technologies and Applications (ICACTA), Mumbai, India, 2023, pp. 1-6, doi: 10.1109/ICACTA58201.2023.10392861 discloses generating a SQL result and comparing it. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALI M HASSAN whose telephone number is (571)272-5331. The examiner can normally be reached Monday - Friday 8:00am - 4:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras Shah can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALI M HASSAN/Examiner, Art Unit 2653 /Paras D Shah/Supervisory Patent Examiner, Art Unit 2653 04/04/2026
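The evaluation flow at issue in the rejected claims executes each model-generated query, normalizes the result into a preset format, and treats the candidate as valid only if its result completely comprises (is a superset of) the reference result. A minimal editorial sketch of that check, using SQLite and hypothetical example data (the function and table names are our own, not from the office action or the cited art):

```python
import sqlite3

def query_result(db: sqlite3.Connection, sql: str) -> set:
    # Organize the raw rows into a "preset format": a set of row tuples,
    # so the containment comparison is order-insensitive.
    return set(tuple(row) for row in db.execute(sql).fetchall())

def validity(db: sqlite3.Connection, candidate_sql: str, reference_result: set) -> bool:
    # A candidate query syntax is "valid" if its organized result
    # completely comprises the organized reference result.
    try:
        return query_result(db, candidate_sql) >= reference_result
    except sqlite3.Error:
        return False  # unexecutable syntax fails the check outright

# Toy database and reference query (hypothetical data).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (name TEXT, score INT)")
db.executemany("INSERT INTO t VALUES (?, ?)", [("a", 1), ("b", 2)])
reference = query_result(db, "SELECT name FROM t WHERE score > 0")

# Two model-generated query syntaxes; comparing their validities
# yields the claimed comparison result between the two models.
first_valid = validity(db, "SELECT name FROM t", reference)
second_valid = validity(db, "SELECT name FROM t WHERE score > 1", reference)
print(first_valid, second_valid)
```

Here the first candidate returns every reference row and passes, while the second drops a row and fails, so the first model would be ranked ahead of the second under the claimed comparison.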

Prosecution Timeline

Aug 22, 2024
Application Filed
Apr 04, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598014
CONTENT DRIVEN INTEGRATED BROADCAST SYSTEM WITH ONE OR MORE SELECTABLE AUTOMATED BROADCAST PERSONALITY AND METHOD FOR ITS USE
2y 5m to grant Granted Apr 07, 2026
Patent 12572852
LEXICAL DROPOUT FOR NATURAL LANGUAGE PROCESSING
2y 5m to grant Granted Mar 10, 2026
Patent 12541540
INFORMATION PROCESSING DEVICE, TERMINAL DEVICE, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Feb 03, 2026


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+33.3%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
