Prosecution Insights
Last updated: April 19, 2026
Application No. 18/892,144

CONVERTING NATURAL LANGUAGE QUERIES TO SQL QUERIES USING ONTOLOGICAL CODES AND PLACEHOLDERS

Non-Final OA — §101, §103, Double Patenting
Filed: Sep 20, 2024
Examiner: RAJAPUTRA, SUMAN
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: Amazon Technologies, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (114 granted / 164 resolved) — above average, +14.5% vs TC avg
Interview Lift: strong, +37.6% among resolved cases with an interview
Typical Timeline: 3y 3m average prosecution; 30 applications currently pending
Career History: 194 total applications across all art units

Statute-Specific Performance

§101: 15.2% (-24.8% vs TC avg)
§103: 55.9% (+15.9% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)
Note: deltas are relative to the Tech Center average estimate; based on career data from 164 resolved cases.
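The headline figures above are internally consistent. As a sanity check, here is a short Python sketch of how such dashboard rates are typically derived from the raw counts shown in the Examiner Intelligence panel; the implied Tech Center baseline is inferred from the reported +14.5% delta and is an assumption, not a published figure.

```python
# Reproduce the dashboard figures from the raw counts shown above.
# Counts come from the "Examiner Intelligence" panel (114 granted
# out of 164 resolved); the dashboard appears to round to whole percent.

granted = 114
resolved = 164

career_allow_rate = granted / resolved
print(f"Career allowance rate: {career_allow_rate:.1%}")  # -> 69.5% (displayed as 70%)

# The panel reports +14.5% vs the TC average, which implies a
# TC 2100 baseline of roughly 55% (inferred, not published).
tc_avg = career_allow_rate - 0.145
print(f"Implied TC average: {tc_avg:.1%}")  # -> 55.0%
```

The same subtraction applies to the statute-specific deltas, e.g. the §103 figure of 55.9% sits +15.9% above an implied TC baseline of about 40%.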

Office Action

Grounds: §101, §103, Double Patenting
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/23/2025 has been entered.

DETAILED ACTION

2. This Office Action is in response to the filing with the office dated 12/23/2025. Claims 21-23, 28-30 and 35-37 have been amended. Claims 21, 28 and 35 are independent claims. Claims 21-40 are presented for examination.

Priority

3. Applicant's claim for the benefit of parent Application No. 17/473,146 filed on 09/13/2021 is acknowledged by the examiner.

Response to Amendment/Arguments

4. Applicant's arguments with respect to the rejection of claims under 35 U.S.C. § 101, as the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, have been fully considered. However, Examiner respectfully disagrees with the applicant's argument. See the response to arguments section. The rejection is maintained.

5. Applicant's arguments with respect to the nonstatutory double patenting rejection have been fully considered. However, Examiner respectfully disagrees with the applicant's argument. The rejection is maintained.

6. Applicant's arguments with respect to the rejection of claims under 35 U.S.C. § 102(a)(1) and 103 have been fully considered but are moot in view of the new grounds of rejection.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms.
The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based e-Terminal Disclaimer may be filled out completely online using web-screens. An e-Terminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about e-Terminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

7. Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of US Patent No. 12,124,440 B1 in view of Tunstall-Pedoe, William (US 2018/0096025 A1). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims in the reference patent either anticipate or render obvious the claims in the instant application. The table below compares the instant application to the reference patent.

Instant Application (18/892,144) vs. Reference Patent (US 12,124,440 B1)

Instant claim 21:
A system, comprising: one or more processors; and one or more memories, wherein the one or more memories have stored thereon instructions, which when executed by the one or more processors, cause the one or more processors to: send, to a service of a remote provider network via an interface of the service, a natural language query; receive, from the service, an indication of a code assigned by the service to a portion of the natural language query, wherein the code is assigned by the service to the portion of the natural language query based at least on processing of the natural language query by a model; send, to the service, an indication to change the code that was previously assigned by the service to the portion of the natural language query to a different code, wherein the model is updated based at least on the different code sent to the service; and receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query.

Patent claim 1:
A system, comprising: one or more processors; and one or more memories, wherein the one or more memories have stored thereon instructions, which when executed by the one or more processors of a provider network, cause the one or more processors to, for individual ones of a plurality of clients of the provider network, implement an NLQ-SQLQ service to: receive, from a client via an interface of the NLQ-SQLQ service, a natural language query; for one or more portions of the natural language query: determine that the portion of the natural language query is associated with one or more codes of an ontology, wherein a given ontology comprises a plurality of codes that are respectively associated with one or more words; and assign, based on one or more criteria, one of the one or more codes to the portion of the natural language query; replace the one or more portions of the natural language query with a different argument placeholder to generate a modified natural language query comprising one or more argument placeholders, wherein the one or more argument placeholders for the one or more portions are associated with the one or more codes assigned to the one or more portions; provide the modified natural language query as input to a trained model, wherein the trained model is selected for generating SQL queries based at least on a configuration input received by the NLQ-SQLQ service; convert, by the trained model, the modified natural language query into an initial structured query language (SQL) query, wherein the initial SQL query comprises the one or more argument placeholders and one or more subquery placeholders; generate a final SQL query based at least on: the initial SQL query that was previously generated by the NLQ-SQLQ service before the final SQL query based on the modified natural language query, replacement of the one or more subquery placeholders located within the initial SQL query that was generated by the trained model with the one or more predefined 
SQL subquery templates that were previously selected based at least on configuration input previously received by the NLQ-SQLQ service from the client, and the one or more codes associated with the one or more argument placeholders of the initial SQL query that was previously generated by the NLQ-SQLQ service before the final SQL query; execute the final SQL query to generate one or more results; and provide, via the interface of the NLQ-SQLQ service, the one or more results to the client.

Instant claim 23 (New): The system as recited in claim 22, wherein the model is further updated based on one or more other portions of the natural language query.

Patent claim 4: The system as recited in claim 1, wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: provide, to the client, an indication of the one or more codes assigned to the one or more portions of the natural language query; receive, from the client, an indication to change a particular one of the one or more codes that is assigned to a particular one of the one or more portions of the natural language query to a different code; and in response to the reception of the indication to change the particular code to the different code, assign the different code to the particular portion of the natural language query.

Instant claim 24 (New): The system as recited in claim 21, wherein the instructions, when executed by the one or more processors, cause the one or more processors to: send, to the service, an indication to execute the final SQL query; and receive, from the service, one or more results of the execution of the final SQL query.

Instant claim 25
(New) The system as recited in claim 21, wherein the instructions, when executed by the one or more processors, cause the one or more processors to: send, to the service, an indication of a modification to be applied to the final SQL query, wherein the service executes a modified final SQL query based on the modification to be applied; and receive, from the service, one or more results of the execution of the modified final SQL query.

Patent claim 4: The system as recited in claim 1, wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: provide, to the client, an indication of the one or more codes assigned to the one or more portions of the natural language query; receive, from the client, an indication to change a particular one of the one or more codes that is assigned to a particular one of the one or more portions of the natural language query to a different code; and in response to the reception of the indication to change the particular code to the different code, assign the different code to the particular portion of the natural language query.

Instant claim 27 (New): The system as recited in claim 21, wherein the different code is one of a plurality of codes of an ontology maintained by the service.

Patent claim 2: The system as recited in claim 1, wherein the trained model is trained based at least on data associated with the healthcare and life sciences domain, and wherein to determine that the portion of the natural language query is associated with the one or more codes of an ontology, the instructions, when executed by the one or more processors, cause the one or more processors to: determine that the portion of the natural language query is associated with one or more codes of a medical ontology.

Instant claim 28
A method, comprising: performing, by one or more computing devices: send, to a service of a remote provider network via an interface of the service, a natural language query; receive, from the service, an indication of a code assigned by the service to a portion of the natural language query, wherein the code is assigned by the service to the portion of the natural language query based at least on processing of the natural language query by a model; send, to the service, an indication to change the code that was previously assigned by the service to the portion of the natural language query to a different code, wherein the model is updated based at least on the different code sent to the service; receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query.

Patent claim 6:
A method, comprising: performing, by an NLQ-SQLQ service implemented by one or more computing devices of a provider network: receiving, from a client via an interface of the NLQ-SQLQ service, a natural language query; for one or more portions of the natural language query: assigning, based on one or more criteria, one of a plurality of codes of an ontology to the portion of the natural language query; generating, by a trained model, an initial structured query language (SQL) query based at least on the natural language query, wherein the trained model is selected for generating SQL queries based at least on a configuration input received by the NLQ-SQLQ service, and wherein the initial SQL query comprises one or more argument placeholders that correspond to the one or more portions of the natural language query and one or more subquery placeholders, wherein the one or more argument placeholders are associated with the one or more codes assigned to the one or more portions; generating a final SQL query based at least on: the initial SQL query that was previously generated by the NLQ-SQLQ service before the final SQL query based on the modified natural language query, replacement of the one or more subquery placeholders located within the initial SQL query that was generated by the trained model with the one or more predefined SQL subquery templates that were previously selected based at least on configuration input previously received by the NLQ-SQLQ service from the client, and the one or more codes associated with the one or more argument placeholders of the initial SQL query that was previously generated by the NLQ-SQLQ service before the final SQL query; executing the final SQL query to generate one or more results; and provide, via the interface of the NLQ-SQLQ service, the one or more results to the client.

Instant claim 30
(New) The method as recited in claim 29, further comprising: sending, to the service, another natural language query; receiving, from the service, an indication of the different code assigned by the service to a portion of the other natural language query, wherein the different code is assigned by the service based on processing of the natural language query by the updated model.

Patent claim 9: The method as recited in claim 6, wherein generating a final SQL query comprises: modifying the one or more predefined SQL subquery templates to include the one or more codes.

Instant claim 29 (Currently amended): The method as recited in claim 28, wherein the model is further updated based on one or more other portions of the natural language query, based on the different code sent to the service, the model is updated to generate an updated model.

Instant claim 30 (Currently amended): The method as recited in claim 28, further comprising: sending, to the service, another natural language query; receiving, from the service, an ind.

Instant claim 30 (New): The method as recited in claim 29, further comprising: sending, to the service, another natural language query; receiving, from the service, an indication of the different code assigned by the service to a portion of the other natural language query, wherein the different code is assigned by the service based on processing of the natural language query by the updated model.

Patent claim 14: The method as recited in claim 6, wherein generating the initial SQL query based at least on the natural language query comprises generating, using at least a trained model, the initial SQL query based at least on the natural language query, and further comprising: receiving, from the client, feedback regarding one or more of the initial SQL query, the final SQL query, or the one or more results; and based on receiving the feedback, perform one or more updates to the trained model.

Instant claim 31
(New) The method as recited in claim 28, further comprising: sending, to the service, an indication to execute the final SQL query; and receiving, from the service, one or more results of the execution of the final SQL query.

Patent claim 11: The method as recited in claim 6, wherein assigning, based on one or more criteria, one of a plurality of codes of an ontology to the portion of the natural language query comprises: determining that the portion of the natural language query is associated with a plurality of codes of the ontology; calculating, based at least on analysis of the natural language query, different confidence values for the plurality of codes of the ontology, wherein a given confidence value for a given code is proportional to a likelihood that the given code is a correct match for the portion of the natural language query; and determining that a confidence level calculated for the code is highest among the different confidence values for the plurality of codes.

Instant claim 34 (New): The method as recited in claim 28, wherein the different code is one of a plurality of codes of an ontology maintained by the service.

Patent claim 12: The method as recited in claim 6, further comprising: for another portion of the natural language query: assigning, based on the one or more criteria, another code of a different ontology to the other portion of the natural language query.

Instant claim 35
(New) One or more non-transitory computer-accessible storage media storing program instructions that when executed on or across one or more processors cause the one or more processors to: send, to a service of a remote provider network via an interface of the service, a natural language query; receive, from the service, an indication of a code assigned by the service to a portion of the natural language query, wherein the code is assigned by the service to the portion of the natural language query based at least on processing of the natural language query by a model; send, to the service, an indication to change the code that was previously assigned by the service to the portion of the natural language query to a different code, wherein the model is updated based at least on the different code sent to the service; and receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query.

Instant claim 31 (Previously presented): The method as recited in claim 28, further comprising: sending, to the service, an indication to execute the final SQL query; and receiving, from the service, one or more results of the execution of the final SQL query.

Instant claim 32 (Previously presented): The method as recited in claim 28, further comprising: sending, to the service, an indication of a modification to be applied to the final SQL query, wherein the service executes a modified final SQL query based on the modification to be applied; and receiving, from the service, one or more results of the execution of the modified final SQL query.

Instant claim 33
(Previously presented) The method as recited in claim 32, wherein the modification to be applied comprises one or more of changing a table, a condition, or a column of the final SQL query.

Instant claim 34 (Previously presented): The method as recited in claim 28, wherein the different code is one of a plurality of codes of an ontology maintained by the service.

Instant claim 35 (Currently amended): One or more non-transitory computer-accessible storage media storing program instructions that when executed on or across one or more processors cause the one or more processors to: send, to a service of a remote provider network via an interface of the service, a natural language query; receive, from the service, an indication of a code assigned by the service to a portion of the natural language query, wherein the code is assigned by the service to the portion of the natural language query based at least on processing of the natural language query by a model; send, to the service, an indication to change the code that was previously assigned by the service to the portion of the natural language query to a different code, wherein the model is updated based at least on the different code sent to the service; and receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query.

Patent claim 15
One or more non-transitory computer-accessible storage media storing program instructions that when executed on or across one or more processors of a provider network cause the one or more processors to implement an NLQ-SQLQ service to: receive, from a client via an interface of the NLQ-SQLQ service, a natural language query; for one or more portions of the natural language query: assign, based on one or more criteria, one of a plurality of codes of an ontology to the portion of the natural language query; generate, by a trained model, an initial structured query language (SQL) query based at least on the natural language query, wherein the trained model is selected for generating SQL queries based at least on a configuration input received by the NLQ-SQLQ service, and wherein the initial SQL query comprises one or more argument placeholders that correspond to the one or more portions of the natural language query and one or more subquery placeholders, wherein the one or more argument placeholders are associated with the one or more codes assigned to the one or more portions; generate a final SQL query based at least on: the initial SQL query that was previously generated by the NLQ-SQLQ service before the final SQL query based on the modified natural language query, replacement of the one or more subquery placeholders located within the initial SQL query that was generated by the trained model with the one or more predefined SQL subquery templates that were previously selected based at least on configuration input previously received by the NLQ-SQLQ service from the client, and the one or more codes associated with the one or more argument placeholders of the initial SQL query that was previously generated by the NLQ-SQLQ service before the final SQL query; execute the final SQL query to generate one or more results; and provide, via the interface of the NLQ-SQLQ service, the one or more results to the client.

Instant claim 34
(New) The method as recited in claim 28, wherein the different code is one of a plurality of codes of an ontology maintained by the service.

Patent claim 16: The one or more non-transitory computer-accessible storage media as recited in claim 15, wherein to assign, based on one or more criteria, one of a plurality of codes of an ontology to the portion of the natural language query, the program instructions when executed on or across the one or more processors further cause the one or more processors to: determine that the portion of the natural language query is associated with the code of the plurality of codes of a medical ontology.

Instant claim 37 (New): The one or more non-transitory computer-accessible storage media as recited in claim 35, wherein the program instructions when executed on or across the one or more processors further cause the one or more processors to: send, to the service, another natural language query; and receive, from the service, an indication of the different code assigned by the service to a portion of the other natural language query, wherein the different code is assigned by the service based on processing of the natural language query by the updated model.

Patent claim 18: The one or more non-transitory computer-accessible storage media as recited in claim 15, wherein the program instructions when executed on or across the one or more processors further cause the one or more processors to: provide, to the client, an indication of the one or more codes assigned to the one or more portions of the natural language query; receive, from the client, an indication to change a particular one of the one or more codes that is assigned to a particular one of the one or more portions of the natural language query to a different code; and in response to the reception of the indication to change the particular code to the different code, assign the different code to the particular portion of the natural language query.

Instant claim 38
(New) The one or more non-transitory computer-accessible storage media as recited in claim 35, wherein the program instructions when executed on or across the one or more processors further cause the one or more processors to: send, to the service, an indication to execute the final SQL query; and receive, from the service, one or more results of the execution of the final SQL query.

Patent claim 19: The one or more non-transitory computer-accessible storage media as recited in claim 18, wherein the program instructions when executed on or across the one or more processors further cause the one or more processors to: perform one or more updates to a classification model based on one or more of the different code, the particular portion of the natural language query, and one or more other portions of the natural language query.

Instant claim 39 (New): The one or more non-transitory computer-accessible storage media as recited in claim 35, wherein the program instructions when executed on or across the one or more processors further cause the one or more processors to: send, to the service, an indication of a modification to be applied to the final SQL query, wherein the service executes a modified final SQL query based on the modification to be applied; and receive, from the service, one or more results of the execution of the modified final SQL query.

Patent claim 20: The one or more non-transitory computer-accessible storage media as recited in claim 15, wherein the program instructions when executed on or across the one or more processors further cause the one or more processors to: provide, to the client, an indication of the final SQL query; receive, from the client, a modification to be applied to the final SQL query; and in response to the reception of the modification to be applied to the final SQL query, apply the modification to the final SQL query before the execution of the final SQL query.

8.
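The pipeline recited in the patent claims above — assign the highest-confidence ontology code to each recognized portion of the natural language query, swap those portions for argument placeholders, have a trained model emit an initial SQL query that still contains argument and subquery placeholders, then splice in predefined subquery templates and the assigned codes to produce the final SQL query — can be sketched as follows. This is an illustrative mock, not the patented implementation: the ontology entries, the template, and the model stub are all invented for the example, and the real service would use a trained model rather than a hard-coded function.

```python
# Toy medical ontology: surface terms -> candidate (code, confidence) pairs.
# Per patent claim 11, the service scores candidates and keeps the highest.
ONTOLOGY = {
    "diabetes": [("ICD10:E11", 0.9), ("ICD10:E10", 0.6)],
    "metformin": [("RXNORM:6809", 0.95)],
}

# Predefined SQL subquery template, selected via client configuration
# (hypothetical content; stands in for the claimed "subquery templates").
SUBQUERY_TEMPLATES = {"{SUBQ_PATIENTS}": "SELECT patient_id FROM patients"}

def assign_codes(nlq: str):
    """Replace recognized portions with argument placeholders and record
    the highest-confidence ontology code assigned to each placeholder."""
    codes, modified = {}, nlq
    for i, (term, candidates) in enumerate(ONTOLOGY.items()):
        if term in modified:
            best = max(candidates, key=lambda c: c[1])[0]
            placeholder = f"{{ARG{i}}}"
            codes[placeholder] = best
            modified = modified.replace(term, placeholder)
    return modified, codes

def mock_model(modified_nlq: str) -> str:
    """Stand-in for the trained NLQ->SQL model: emits an initial SQL query
    that still contains argument and subquery placeholders."""
    return ("SELECT count(*) FROM conditions "
            "WHERE code = '{ARG0}' AND patient_id IN ({SUBQ_PATIENTS})")

def finalize(initial_sql: str, codes: dict) -> str:
    """Build the final SQL query: splice in the subquery templates, then
    substitute each argument placeholder with its assigned ontology code."""
    sql = initial_sql
    for placeholder, subquery in SUBQUERY_TEMPLATES.items():
        sql = sql.replace(placeholder, subquery)
    for placeholder, code in codes.items():
        sql = sql.replace(f"'{placeholder}'", f"'{code}'")
    return sql

modified, codes = assign_codes("how many patients with diabetes")
final_sql = finalize(mock_model(modified), codes)
print(final_sql)
```

A client-side code correction (instant claim 21's "indication to change the code ... to a different code") would amount to overwriting an entry in `codes` before `finalize` runs, which is why the different code, not the originally assigned one, appears inside the final SQL query.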
As noted in the table above, the cited claims of the issued patent cover most of the limitations of the claims of the instant application, except for "wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query". However, Tunstall-Pedoe, William (US 2018/0096025 A1) teaches "wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query" (Paragraphs [0078], [0382]-[0412]).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of D'Souza et al. to receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query, as taught by Tunstall-Pedoe et al. (Paragraphs [0078], [0382]-[0412]). One of ordinary skill in the art would have been motivated to make this modification because, by doing so, the user has a chance to see where an incorrect answer came from and do something about the incorrect fact or facts that resulted in that incorrect response, as taught by Tunstall-Pedoe et al. (Paragraph [0252]).

Response to 101 Rejection

9.
Applicant's arguments on page 13 regarding claim 1 recite: "Applicant respectfully submits that amended claim 21 does not fall within the subject matter grouping of "mental processes." For example, "send, to the service, an indication to change the code that was previously assigned by the service to the portion of the natural language query to a different code, wherein the model is updated based at least on the different code sent to the service" and "receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query," as recited by amended claim 21, does not fall within a subject matter grouping of a "mental process." Therefore, amended claim 21 does not recite a judicial exception under Prong 1 according to the October 2019 Update and is therefore eligible. Therefore, Applicant respectfully submits that claims 21-40 are eligible."

And Applicant's arguments on page 12 regarding claim 1 recite: "Applicant respectfully submits that amended claim 21 does not fall within the subject matter grouping of "mental processes." For example, "wherein the code is assigned by the service to the portion of the natural language query based at least on processing of the natural language query by a model". Therefore, amended claim 21 does not recite a judicial exception under Prong 1 according to the October 2019 Update and is therefore eligible. Therefore, Applicant respectfully submits that claims 21-40 are eligible."
Examiner respectfully disagrees with the applicant because the amended claims recite "wherein the model is updated based at least on the different code sent to the service; wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query". These limitations, at the high level of generality at which they are drafted, would encompass a user receiving text/a natural language query and assigning codes that were previously stored in a database; when a query is received, the user can search the database, find multiple codes stored at different times, and be presented with a plurality of codes. Based on the user's selection, the selected code becomes the current code and the results are produced. Assigning the codes, such as receiving and/or sending the codes by a service/computer using a model, can be programmed to classify the terms from the text/natural language and assign a code to a particular term using a computer as a tool. Therefore, these limitations, under the broadest reasonable interpretation, cover performance of the limitation in the mind but for the recitation of generic computer components. There is nothing in the claim elements that precludes the steps from practically being performed in the human mind. Additionally, the mere nominal recitation of generic computer components, or a programmed computer, does not take the claim limitations out of the mental processes grouping. The combination of these additional elements is no more than mere instructions to apply the exception using a series of steps. Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
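The characterization above (a store of codes previously assigned to query portions, searched when a later query arrives, with the user selecting one of a plurality of candidates as the current code) can be sketched minimally as follows; all names and the storage scheme are hypothetical illustrations, not drawn from the application or the cited references:

```python
# Minimal sketch of the characterization above: codes previously
# assigned to portions of queries are stored, a later query retrieves
# the stored plurality of codes, and a user selection becomes current.
code_store = {}  # portion (term) -> list of codes assigned over time

def assign_code(term, code):
    """Record a code assigned to a portion (term) of a query."""
    code_store.setdefault(term, []).append(code)

def lookup_codes(term):
    """Return the plurality of codes previously stored for a term."""
    return code_store.get(term, [])

def select_code(term, chosen):
    """Make the user-selected code the current code for the term."""
    if chosen not in lookup_codes(term):
        raise ValueError("code was never assigned to this term")
    return {"term": term, "current_code": chosen}
```

Nothing in the sketch requires a computer beyond bookkeeping, which is the point of the examiner's broadest-reasonable-interpretation argument.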
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of the "service". Based on specification Paragraph [0013], Examiner interprets the service as a tool/machine learning model recited at a high level of generality as a generic computer component. These additional elements amount to nothing more than mere instructions to apply the recited abstract idea on a computer under MPEP 2106.05(f). These additional elements are no more than mere instructions to apply the exception using a series of steps and outputting the result of the mental process. Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the recitation of generic computing components is still mere instructions to apply the exception under MPEP 2106.05(f) and does not provide significantly more. Thus, the claims are abstract.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

10. Claims 21-40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Determining whether claims are statutory under 35 U.S.C. 101 involves a two-step analysis. Step 1 requires a determination of whether the claims are directed to the statutory categories of invention.
Step 2 requires a determination of whether the claims are directed to a judicial exception without significantly more. Step 2 is divided into two prongs, with the first prong having a part 1 and a part 2. See MPEP 2106; see 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG). Pursuant to Step 1, claims 21-27 recite a system, which is directed to the statutory category of a machine. Claims 28-34 recite a method, which is directed to the statutory category of a process. Claims 35-40 recite a non-transitory computer-readable storage medium, which is directed to the statutory category of a manufacture. Regarding claims 21, 28, and 35, pursuant to Step 2A, part 1, the claims are analyzed to determine whether they are directed to an abstract idea. Under the 2019 PEG, claims are deemed to be directed to an abstract idea if they fall within one of the enumerated categories of (a) mathematical concepts, (b) certain methods of organizing human activity, and (c) mental processes. Here, claim 21 is directed to an abstract idea categorized under mental processes. Courts consider a process mental if it "can be performed in the human mind, or by a human using a pen and paper." MPEP 2106.04(a)(2)(III). Courts also consider a mental process as one that can be performed in the human mind and merely uses a computer as a tool to perform the concept. MPEP 2106.04(a)(2)(III)(C)(3). Claim 21 recites a mental process because the recited steps recite actions of generating a SQL query based on a natural language query, but they are recited at a high level of generality that merely uses computers as a tool to perform the processes. See MPEP 2106.04(a)(2)(III).
For example, claim 21 recites the limitations of "sending … a natural language query, receiving a code …, sending … an indication to change code …, receiving … SQL query …". These limitations, at the high level of generality at which they are drafted, would encompass a user receiving a natural language query, converting at least one part of the query to a predefined code, changing the code, and generating a SQL query based on the changed code; these are essentially steps of generating and manipulating data at a high level of generality, which can be performed by a person using a computer as a tool and which are mentally performable as an evaluation or judgment. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Pursuant to Step 2A, part 2, the claims are analyzed to determine whether the recited abstract idea is integrated into a practical application. In this case, as explained above, claim 21 merely recites a mental process. These limitations describe receiving a query, manipulating the data, and generating a SQL query. While claim 21 recites additional components in the form of processors, a memory, and a system, these components are recited at a high level of generality and do not add meaningful limits on the recited abstract idea to integrate it into a practical application by providing an improvement to the functioning of a computer or technology, implementing the abstract idea with a particular machine or manufacture that is integral to the claim, effecting a transformation or reduction of a particular article to a different state or thing, or applying the abstract idea in some meaningful way beyond linking its use to computer technology. See 2019 PEG.
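The four recited steps (sending a natural language query, receiving an assigned code, sending an indication to change the code, and receiving the final SQL query) describe a client-service exchange, which can be sketched minimally as follows; the class, its methods, and the table/column names are hypothetical stand-ins, not the claimed service or any actual remote provider network interface:

```python
class NL2SQLService:
    """Hypothetical in-process stand-in for the claimed service: assigns
    a code to a portion of a natural language query, accepts a change to
    a different code, and returns a final SQL query carrying that code."""

    def __init__(self, model):
        # A simple term -> code mapping stands in for the model.
        self.model = model

    def submit_query(self, nl_query):
        """Assign a code to the first portion the model recognizes."""
        for term, code in self.model.items():
            if term in nl_query:
                return {"portion": term, "code": code}
        return None

    def change_code(self, portion, different_code):
        """The model is updated based on the different code."""
        self.model[portion] = different_code

    def final_sql(self, portion):
        """The final SQL query contains the currently assigned code."""
        return f"SELECT * FROM records WHERE code = '{self.model[portion]}'"
```

Under this sketch, changing the assigned code and rereading the final SQL query yields a query containing the different code, which mirrors the sequence of limitations discussed above.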
None of these additional limitations recites sufficient limitations to adequately convey the asserted improvement of the claimed invention. Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the recitation of generic computing components is still mere instructions to apply the exception under MPEP 2106.05(f) and does not provide significantly more. Considering the additional elements in combination and the claim as a whole does not change the analysis and does not amount to significantly more. Thus, the claims are abstract. Regarding claims 22, 29, and 36, the limitation recites "based on the different code sent to the service, the model is updated to generate an updated model". This limitation, at the high level of generality at which it is drafted, would encompass a user looking at the data, changing the code, and updating a model/algorithm; these are essentially steps of generating and manipulating data at a high level of generality, which can be performed by a person using a computer as a tool and which are mentally performable as an evaluation or judgment. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea. Regarding claims 23-26, 30-33, and 37-39, the limitations recite "receiving …", "executing …", "modifying …", and "executing …".
These limitations, at the high level of generality at which they are drafted, would encompass a user generating another natural language query based on the changed code, modifying the final SQL query based on the changed code, and executing it; these are essentially steps of generating and manipulating data at a high level of generality, which can be performed by a person using a computer as a tool and which are mentally performable as an evaluation or judgment. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea. Regarding claims 27, 34, and 40, the limitation recites "wherein the different code is one of a plurality of codes of an ontology maintained by the service". This limitation, at the high level of generality at which it is drafted, would encompass a user generating a different code/changing the code based on the ontology; this is essentially a step of generating and manipulating data at a high level of generality, which can be performed by a person using a computer as a tool and which is mentally performable as an evaluation or judgment. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

11. Claims 21-40 are rejected under 35 U.S.C. 103 as being unpatentable over D'Souza; Howard Maurice (US 20170323061 A1) in view of Tunstall-Pedoe; William (US 20180096025 A1). Regarding independent claim 21, D'Souza; Howard Maurice (US 20170323061 A1) teaches a system, comprising: one or more processors; and one or more memories, wherein the one or more memories have stored thereon instructions, which when executed by the one or more processors (Paragraphs [0179], [0181]: the computer system 1100 may include one or more processors 1110 and one or more tangible, non-transitory computer-readable storage media (e.g., volatile storage 1120 and one or more non-volatile storage media 1130, which may be formed of any suitable non-volatile data storage media)), cause the one or more processors to: send, to a service of a remote provider network via an interface of the service, a natural language query (Paragraph [0053] discloses sending a natural language query to a fact extraction service; also see Paragraphs [0004], [0092]); receive, from the service, an indication of a code assigned by the service to a portion of the natural language query, wherein the code is assigned by the service to the portion of the natural language query based at least on processing of the natural language query by a model (Fig. 7A.
Paragraph [0011]: applying a natural language understanding engine, implemented via at least one processor, to a free-form text documenting a clinical patient encounter, to automatically derive one or more engine-suggested medical billing codes for the clinical patient encounter; presenting the engine-suggested medical billing codes for the clinical patient encounter in a graphical user interface (GUI) via at least one display. Paragraph [0067] discloses that processing of the natural language query is based on statistical models; also see Paragraphs [0120], [0123]. (Examiner interprets deriving codes for the query as assigning codes to the portions of the query)); send, to the service, an indication to change the code that was previously assigned by the service to the portion of the natural language query to a different code (Paragraph [0129] discloses an indication to change the automatically assigned code to the portion of the natural language query with a replacement code (Examiner interprets the code that was previously assigned by the service as the codes that are derived/assigned automatically by a model); also see Paragraph [0012]); wherein the model is updated based at least on the different code sent to the service (Paragraph [0125] discloses training the NLU engine based on the modified/different code; Examiner interprets updating the model as training the model). D'Souza et al fails to explicitly teach: and receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query.
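The D'Souza mapping relied on above (an engine suggests codes, a user replaces a suggested code, and the engine is adjusted based on the correction) can be illustrated with a toy frequency model; the class and its update rule are illustrative assumptions only and do not reproduce D'Souza's actual NLU engine:

```python
from collections import Counter, defaultdict

class SuggestionEngine:
    """Toy analogue of an engine that suggests codes for terms and is
    adjusted based on user corrections (illustrative only)."""

    def __init__(self):
        self.counts = defaultdict(Counter)  # term -> code frequencies

    def train(self, term, code):
        self.counts[term][code] += 1

    def suggest(self, term):
        """Suggest the most frequently associated code, if any."""
        if not self.counts[term]:
            return None
        return self.counts[term].most_common(1)[0][0]

    def correct(self, term, replacement):
        """Feed a user's replacement code back as training signal, so
        repeated corrections shift future suggestions."""
        self.train(term, replacement)
```

The point of the sketch is the feedback loop: the "indication to change the code" doubles as a training example, which is how the examiner reads "wherein the model is updated based at least on the different code sent to the service" onto D'Souza's Paragraph [0125].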
Tunstall-Pedoe; William (US 20180096025 A1) teaches: and receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code (Paragraph [0078]: system assessment is done on all newly added facts to the static knowledge base, and the user who has added a fact that is contradicted by other facts in the static knowledge base is given an opportunity to draw attention to, and potentially change the status of, any of those facts which they believe to be untrue), and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query (Paragraphs [0382]-[0412] disclose processing the final query, which is based on processing the natural language query using the selected fact from Paragraph [0078]; the result in the prior art is "The French city of Paris" for the query "What is the capital of France?". Also see [0133], [0330]. (Examiner interprets the indication of the final SQL query as applying the current fact in the query, which would be a different attribute than the previous one.)).
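The disputed limitation (the final SQL query containing the different code in place of the previously assigned code) amounts to substituting the currently selected code into the query. A minimal sketch, using a hypothetical placeholder syntax not taken from the application:

```python
def build_final_sql(template, portion_codes):
    """Render a SQL template whose placeholders name portions of the
    natural language query, filling each with the code currently
    assigned to that portion ({name} placeholders are illustrative)."""
    sql = template
    for portion, code in portion_codes.items():
        sql = sql.replace("{" + portion + "}", code)
    return sql
```

If the user changes the code assigned to a portion, rendering the same template with the updated mapping yields a final query containing the different code instead of the one previously assigned, which is the behavior the examiner maps onto the combined references.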
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of D'Souza et al to receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query, as taught by Tunstall-Pedoe et al (Paragraphs [0078], [0382]-[0412]). One of ordinary skill in the art would have been motivated to make this modification because, by doing so, the user has a chance to see where an incorrect answer came from and do something about the incorrect fact or facts that resulted in that incorrect response, as taught by Tunstall-Pedoe et al (Paragraph [0252]). Regarding dependent claim 22, D'Souza et al and Tunstall-Pedoe et al teach the system as recited in claim 21. D'Souza et al further teaches wherein the model is further updated based on one or more other portions of the natural language query (Paragraph [0125] discloses training the NLU engine based on the modified/different code; Examiner interprets updating the model as training the model). Tunstall-Pedoe et al also further teaches wherein the model is further updated based on one or more other portions of the natural language query (Paragraph [0378] discloses the model being updated based on one or more portions of the natural language query). Regarding dependent claim 23, D'Souza et al and Tunstall-Pedoe et al teach the system as recited in claim 21.
D'Souza et al further teaches wherein the instructions, when executed by the one or more processors, cause the one or more processors to: send, to the service, another natural language query; and receive, from the service, an indication of the different code assigned by the service to a portion of the other natural language query, wherein the different code is assigned by the service based on processing of the natural language query by the updated model (Paragraph [0130]: any suitable technique(s) may be utilized to adjust the NLU engine based on the feedback from the coding/review process; exemplary techniques for NLU engine adjustment based on user corrections in a CLU system). Regarding dependent claim 24, D'Souza et al and Tunstall-Pedoe et al teach the system as recited in claim 21. Tunstall-Pedoe et al teaches wherein the instructions, when executed by the one or more processors, cause the one or more processors to: send, to the service, an indication to execute the final SQL query; and receive, from the service, one or more results of the execution of the final SQL query (Paragraphs [0382]-[0412] disclose processing the final query, which is based on processing the natural language query using the selected fact from Paragraph [0078]; the result in the prior art is "The French city of Paris" for the query "What is the capital of France?". Also see [0133], [0330]. (Examiner interprets the indication of the final SQL query as applying the current fact in the query, which would be a different attribute than the previous one.)). Regarding dependent claim 25, D'Souza et al and Tunstall-Pedoe et al teach the system as recited in claim 21.
D'Souza et al further teaches wherein the instructions, when executed by the one or more processors, cause the one or more processors to: send, to the service, an indication of a modification to be applied to the final query, wherein the service executes a modified final query based on the modification to be applied; and receive, from the service, one or more results of the execution of the modified final query (Paragraph [0129] discloses modifying the code assigned to the portion of the natural language query to a different code and processing the natural language query using the different code). Tunstall-Pedoe et al also further teaches wherein the instructions, when executed by the one or more processors, cause the one or more processors to: send, to the service, an indication of a modification to be applied to the final query, wherein the service executes a modified final query based on the modification to be applied; and receive, from the service, one or more results of the execution of the modified final query (Paragraphs [0382]-[0412] disclose processing the final query, which is based on processing the natural language query using the selected fact from Paragraph [0078]; the result in the prior art is "The French city of Paris" for the query "What is the capital of France?". Also see [0133], [0330]. (Examiner interprets the indication of the final SQL query as applying the current fact in the query, which would be a different attribute than the previous one.)). Regarding dependent claim 26, D'Souza et al and Tunstall-Pedoe et al teach the system as recited in claim 25. D'Souza et al further teaches wherein the modification to be applied comprises one or more of changing a table, a condition, or a column of the final SQL query (Fig. 7C, 7E; Paragraph [0126]: GUI 700 may also allow the user to replace a code with a different code, instead of rejecting the code outright, e.g., using the context menu of FIG. 7C.
In the example illustrated in FIG. 7E, the user has replaced code 482.9 with code 482.1, and indicator 738 shows that the new code was user-added. 482.1 (Pneumonia due to Pseudomonas) is a more specific diagnosis applicable to the patient encounter than the suggested 482.9 (Bacterial Pneumonia, Unspecified), so the user may provide "More specific code needed" as the reason for the replacement). The SQL query limitation is taught by Tunstall-Pedoe et al (Paragraph [0269]). Regarding dependent claim 27, D'Souza et al and Tunstall-Pedoe et al teach the system as recited in claim 21. D'Souza et al further teaches wherein the different code is one of a plurality of codes of an ontology maintained by the service (Paragraph [0089]: in some embodiments, the normalization/coding process may output a single hypothesis for the standard form and/or code corresponding to each extracted fact; for example, the single output hypothesis may correspond to the concept in the ontology (and/or the corresponding code in a medical code system) linked to the term that is most similar to the token(s) in the text from which the fact is extracted). Regarding independent claim 28, D'Souza; Howard Maurice (US 20170323061 A1) teaches a method, comprising: performing, by one or more computing devices: send, to a service of a remote provider network via an interface of the service, a natural language query (Paragraph [0053] discloses sending a natural language query to a fact extraction service; also see Paragraphs [0004], [0092]); receive, from the service, an indication of a code assigned by the service to a portion of the natural language query, wherein the code is assigned by the service to the portion of the natural language query based at least on processing of the natural language query by a model (Fig. 7A.
Paragraph [0011]: applying a natural language understanding engine, implemented via at least one processor, to a free-form text documenting a clinical patient encounter, to automatically derive one or more engine-suggested medical billing codes for the clinical patient encounter; presenting the engine-suggested medical billing codes for the clinical patient encounter in a graphical user interface (GUI) via at least one display. Paragraph [0067] discloses that processing of the natural language query is based on statistical models; also see Paragraphs [0120], [0123]. (Examiner interprets deriving codes for the query as assigning codes to the portions of the query)); send, to the service, an indication to change the code that was previously assigned by the service to the portion of the natural language query to a different code; and receive, from the service, an indication of a final query, wherein the final query is based on processing of the natural language query using the different code (Paragraph [0129] discloses an indication to change the automatically assigned code to the portion of the natural language query with a replacement code (Examiner interprets the code that was previously assigned by the service as the codes that are derived/assigned automatically by a model); also see Paragraph [0012]); wherein the model is updated based at least on the different code sent to the service (Paragraph [0125] discloses training the NLU engine based on the modified/different code; Examiner interprets updating the model as training the model).
D'Souza et al fails to explicitly teach: and receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query. Tunstall-Pedoe; William (US 20180096025 A1) teaches: and receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code (Paragraph [0078]: system assessment is done on all newly added facts to the static knowledge base, and the user who has added a fact that is contradicted by other facts in the static knowledge base is given an opportunity to draw attention to, and potentially change the status of, any of those facts which they believe to be untrue), and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query (Paragraphs [0382]-[0412] disclose processing the final query, which is based on processing the natural language query using the selected fact from Paragraph [0078]; the result in the prior art is "The French city of Paris" for the query "What is the capital of France?". Also see [0133], [0330]. (Examiner interprets the indication of the final SQL query as applying the current fact in the query, which would be a different attribute than the previous one.)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of D'Souza et al to receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query, as taught by Tunstall-Pedoe et al (Paragraphs [0078], [0382]-[0412]). One of ordinary skill in the art would have been motivated to make this modification because, by doing so, the user has a chance to see where an incorrect answer came from and do something about the incorrect fact or facts that resulted in that incorrect response, as taught by Tunstall-Pedoe et al (Paragraph [0252]). Regarding dependent claim 29, D'Souza et al and Tunstall-Pedoe et al teach the method as recited in claim 28. D'Souza et al further teaches wherein the model is further updated based on one or more other portions of the natural language query (Paragraph [0125] discloses training the NLU engine based on the modified/different code; Examiner interprets updating the model as training the model). Tunstall-Pedoe et al also further teaches wherein the model is further updated based on one or more other portions of the natural language query (Paragraph [0378] discloses the model being updated based on one or more portions of the natural language query). Regarding dependent claim 30, D'Souza et al and Tunstall-Pedoe et al teach the method as recited in claim 28.
D'Souza et al further teaches: sending, to the service, another natural language query; and receiving, from the service, an indication of the different code assigned by the service to a portion of the other natural language query, wherein the different code is assigned by the service based on processing of the natural language query by the updated model (Paragraph [0130]: any suitable technique(s) may be utilized to adjust the NLU engine based on the feedback from the coding/review process; exemplary techniques for NLU engine adjustment based on user corrections in a CLU system). Regarding dependent claim 31, D'Souza et al and Tunstall-Pedoe et al teach the method as recited in claim 28. Tunstall-Pedoe et al teaches: sending, to the service, an indication to execute the final SQL query; and receiving, from the service, one or more results of the execution of the final SQL query (Paragraphs [0382]-[0412] disclose processing the final query, which is based on processing the natural language query using the selected fact from Paragraph [0078]; the result in the prior art is "The French city of Paris" for the query "What is the capital of France?". Also see [0133], [0330]. (Examiner interprets the indication of the final SQL query as applying the current fact in the query, which would be a different attribute than the previous one.)). Regarding dependent claim 32, D'Souza et al and Tunstall-Pedoe et al teach the method as recited in claim 28.
D'Souza et al further teaches: sending, to the service, an indication of a modification to be applied to the final SQL query, wherein the service executes a modified final SQL query based on the modification to be applied; and receiving, from the service, one or more results of the execution of the modified final SQL query (Paragraph [0129] discloses modifying the code assigned to the portion of the natural language query to a different code and processing the natural language query using the different code). Tunstall-Pedoe et al also further teaches: sending, to the service, an indication of a modification to be applied to the final SQL query, wherein the service executes a modified final SQL query based on the modification to be applied; and receiving, from the service, one or more results of the execution of the modified final SQL query (Paragraphs [0382]-[0412] disclose processing the final query, which is based on processing the natural language query using the selected fact from Paragraph [0078]; the result in the prior art is "The French city of Paris" for the query "What is the capital of France?". Also see [0133], [0330]. (Examiner interprets the indication of the final SQL query as applying the current fact in the query, which would be a different attribute than the previous one.)). Regarding dependent claim 33, D'Souza et al and Tunstall-Pedoe et al teach the method as recited in claim 32. D'Souza et al further teaches wherein the modification to be applied comprises one or more of changing a table, a condition, or a column of the final SQL query (Fig. 7C, 7E; Paragraph [0126]: GUI 700 may also allow the user to replace a code with a different code, instead of rejecting the code outright, e.g., using the context menu of FIG. 7C. In the example illustrated in FIG. 7E, the user has replaced code 482.9 with code 482.1, and indicator 738 shows that the new code was user-added.
482.1 (Pneumonia due to Pseudomonas) is a more specific diagnosis applicable to the patient encounter than the suggested 482.9 (Bacterial Pneumonia, Unspecified), so the user may provide “More specific code needed” as the reason for the replacement. The SQL query is taught by Tunstall-Pedoe et al. (Paragraph [0269]).

Regarding dependent claim 34, D'Souza et al. and Tunstall-Pedoe et al. teach the method as recited in claim 28. D'Souza et al. further teaches, wherein the different code is one of a plurality of codes of an ontology maintained by the service (Paragraph [0089]: in some embodiments, the normalization/coding process may output a single hypothesis for the standard form and/or code corresponding to each extracted fact. For example, the single output hypothesis may correspond to the concept in the ontology (and/or the corresponding code in a medical code system) linked to the term that is most similar to the token(s) in the text from which the fact is extracted).

Regarding independent claim 35, D'Souza; Howard Maurice (US 20170323061 A1) teaches, one or more non-transitory computer-accessible storage media storing program instructions that when executed on or across one or more processors (Paragraphs [0179], [0181]: the computer system 1100 may include one or more processors 1110 and one or more tangible, non-transitory computer-readable storage media (e.g., volatile storage 1120 and one or more non-volatile storage media 1130, which may be formed of any suitable non-volatile data storage media)) cause the one or more processors to: send, to a service of a remote provider network via an interface of the service, a natural language query (Paragraph [0053] discloses sending a natural language query to a fact extraction service.
Also see Paragraphs [0004], [0092]); receive, from the service, an indication of a code assigned by the service to a portion of the natural language query, wherein the code is assigned by the service to the portion of the natural language query based at least on processing of the natural language query by a model (Fig. 7A; Paragraph [0011]: applying a natural language understanding engine, implemented via at least one processor, to a free-form text documenting a clinical patient encounter, to automatically derive one or more engine-suggested medical billing codes for the clinical patient encounter; presenting the engine-suggested medical billing codes for the clinical patient encounter in a graphical user interface (GUI) via at least one display. Paragraph [0067] discloses that processing of the natural language query is based on statistical models. Also see Paragraphs [0120], [0123]. Examiner interprets deriving codes for the query as assigning codes to the portions of the query); send, to the service, an indication to change the code that was previously assigned by the service to the portion of the natural language query to a different code; and receive, from the service, an indication of a final query, wherein the final query is based on processing of the natural language query using the different code (Paragraph [0129] discloses modifying the code assigned to the portion of the natural language query to a different code and processing the natural language query using the different code); wherein the model is updated based at least on the different code sent to the service (Paragraph [0125] discloses training the NLU engine based on the modified/different code; Examiner interprets updating the model as training the model).
D'Souza et al. fails to explicitly teach: and receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query.

Tunstall-Pedoe; William (US 20180096025 A1) teaches, and receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code (Paragraph [0078]: system assessment is done on all newly added facts to the static knowledge base, and the user who has added a fact that is contradicted by other facts in the static knowledge base is given an opportunity to draw attention to and potentially change the status of any of those facts which they believe to be untrue), and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query (Paragraphs [0382]-[0412] disclose processing the final query, which is based on processing the natural language query using the selected fact from Paragraph [0078]; the result in the prior art is “The French city of Paris” for the query “What is the capital of France?” Also see [0133], [0330]. Examiner interprets the indication of the final SQL query as applying the current fact in the query, which would be a different attribute than the previous one).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of D'Souza et al. to receive, from the service, an indication of a final SQL query, wherein the final SQL query is based on processing of the natural language query using the different code, and wherein the indication of the final SQL query comprises an indication of the different code within the final SQL query that corresponds to the portion of the natural language query instead of the code that was previously assigned by the service to the portion of the natural language query, as taught by Tunstall-Pedoe et al. (Paragraphs [0078], [0382]-[0412]). One of ordinary skill in the art would have been motivated to make this modification because, by doing so, the user has a chance to see where an incorrect answer came from and do something about the incorrect fact or facts that resulted in that incorrect response, as taught by Tunstall-Pedoe et al. (Paragraph [0252]).

Regarding dependent claim 36, D'Souza et al. and Tunstall-Pedoe et al. teach the one or more non-transitory computer-accessible storage media as recited in claim 35. D'Souza et al. further teaches, wherein the model is further updated based on one or more other portions of the natural language query (Paragraph [0125] discloses training the NLU engine based on the modified/different code; Examiner interprets updating the model as training the model). Tunstall-Pedoe et al. also further teaches this limitation (Paragraph [0378] discloses that the model is updated based on one or more portions of the natural language query).

Regarding dependent claim 37, D'Souza et al. and Tunstall-Pedoe et al. teach the one or more non-transitory computer-accessible storage media as recited in claim 35.
D'Souza et al. further teaches, wherein the program instructions when executed on or across the one or more processors further cause the one or more processors to: send, to the service, another natural language query; and receive, from the service, an indication of the different code assigned by the service to a portion of the other natural language query, wherein the different code is assigned by the service based on processing of the natural language query by the updated model (Paragraph [0130]: any suitable technique(s) may be utilized to adjust the NLU engine based on the feedback from the coding/review process; exemplary techniques for NLU engine adjustment based on user corrections in a CLU system are disclosed).

Regarding dependent claim 38, D'Souza et al. and Tunstall-Pedoe et al. teach the one or more non-transitory computer-accessible storage media as recited in claim 35. Tunstall-Pedoe et al. teaches, wherein the program instructions when executed on or across the one or more processors further cause the one or more processors to: send, to the service, an indication to execute the final SQL query; and receive, from the service, one or more results of the execution of the final SQL query (Paragraphs [0382]-[0412] disclose processing the final query, which is based on processing the natural language query using the selected fact from Paragraph [0078]; the result in the prior art is “The French city of Paris” for the query “What is the capital of France?” Also see [0133], [0330]. Examiner interprets the indication of the final SQL query as applying the current fact in the query, which would be a different attribute than the previous one).

Regarding dependent claim 39, D'Souza et al. and Tunstall-Pedoe et al. teach the one or more non-transitory computer-accessible storage media as recited in claim 35.
D'Souza et al. further teaches, wherein the program instructions when executed on or across the one or more processors further cause the one or more processors to: send, to the service, an indication of a modification to be applied to the final query, wherein the service executes a modified final SQL query based on the modification to be applied; and receive, from the service, one or more results of the execution of the modified final query (Paragraph [0129] discloses modifying the code assigned to the portion of the natural language query to a different code and processing the natural language query using the different code). Tunstall-Pedoe et al. also further teaches this limitation (Paragraphs [0382]-[0412] disclose processing the final query, which is based on processing the natural language query using the selected fact from Paragraph [0078]; the result in the prior art is “The French city of Paris” for the query “What is the capital of France?” Also see [0133], [0330]. Examiner interprets the indication of the final SQL query as applying the current fact in the query, which would be a different attribute than the previous one).

Regarding dependent claim 40, D'Souza et al. and Tunstall-Pedoe et al. teach the one or more non-transitory computer-accessible storage media as recited in claim 39. D'Souza et al. further teaches, wherein the modification to be applied comprises one or more of changing a table, a condition, or a column of the final SQL query (Figs. 7C, 7E; Paragraph [0126]): GUI 700 may also allow the user to replace a code with a different code, instead of rejecting the code outright, e.g., using the context menu of FIG. 7C. In the example illustrated in FIG. 7E, the user has replaced code 482.9 with code 482.1, and indicator 738 shows that the new code was user-added. 482.1 (Pneumonia due to Pseudomonas) is a more specific diagnosis applicable to the patient encounter than the suggested 482.9 (Bacterial Pneumonia, Unspecified), so the user may provide “More specific code needed” as the reason for the replacement. The SQL query is taught by Tunstall-Pedoe et al. (Paragraph [0269]).

Closest Prior Art

12. The prior art made of record and not relied upon is considered pertinent to the applicant's disclosure. Saha; Diptikalyan (US 20200073787 A1) teaches methods, systems, and computer program products for automated generation of test cases for analyzing natural-language-interface-to-database systems. A computer-implemented method includes identifying sources of ambiguity from input to a natural-language-interface-to-database system and a precondition corresponding to each of the identified sources of ambiguity; generating test cases which analyze capabilities of the natural-language-interface-to-database system, wherein generating the one or more test cases comprises determining validity of the preconditions within the context of the capabilities of the natural-language-interface-to-database system; automatically generating an ontology-dependent structured query template based at least in part on the generated test cases; automatically generating natural language queries based at least in part on the ontology-dependent structured query template; and outputting, to at least one user, the ontology-dependent structured query template and the natural language queries (Abstract).

13.
Examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. The applicant is respectfully requested, in preparing the response, to fully consider the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. It is noted that any citation to specific pages, columns, figures, or lines in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUMAN RAJAPUTRA, whose telephone number is (571) 272-4669. The examiner can normally be reached between 8:00 AM and 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, the applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at (571) 272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S. R./ Examiner, Art Unit 2163
/ALEX GOFMAN/ Primary Examiner, Art Unit 2163
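The independent claims mapped above recite a multi-step client/service exchange: send a natural language query, receive an ontological code assigned to a portion of it, request that the code be changed to a different code, and receive a final SQL query that contains the different code, with the model updated from the correction. A minimal sketch of that flow, using an in-memory stub in place of the claimed remote service; every class, method, table/column name, and code value here is hypothetical and is not drawn from the application or the cited references:

```python
# Illustrative sketch only: the claim limitations describe a client/service
# exchange, modeled here with an in-memory stub. All names, codes, and the
# toy schema are hypothetical.

class NL2SQLServiceStub:
    """Stands in for the claimed service: assigns an ontological code to a
    portion of a natural language query, accepts a correction to a different
    code, and emits a final SQL query containing that code."""

    def __init__(self):
        # toy ontology model: query portion -> default code
        self.model = {"pneumonia cases": "482.9"}

    def assign_code(self, query: str) -> dict:
        # return the code currently assigned to a recognized portion
        for portion, code in self.model.items():
            if portion in query:
                return {"portion": portion, "code": code}
        return {}

    def change_code(self, portion: str, different_code: str) -> None:
        # the claim recites that the model is updated based at least on
        # the different code sent to the service
        self.model[portion] = different_code

    def final_sql(self, query: str) -> str:
        # the final SQL query contains the (possibly corrected) code in
        # place of the portion of the natural language query
        assignment = self.assign_code(query)
        return ("SELECT * FROM encounters "
                f"WHERE diagnosis_code = '{assignment['code']}'")

# Client-side flow recited by the independent claims:
service = NL2SQLServiceStub()
first = service.assign_code("count pneumonia cases by month")  # code assigned
service.change_code(first["portion"], "482.1")                 # request different code
sql = service.final_sql("count pneumonia cases by month")      # final SQL uses it
```

Claims 32-33 and 39-40 further recite sending a modification to the final SQL query (changing a table, a condition, or a column) and receiving results of the modified query's execution; in a sketch like this, that would be one more round trip that rewrites the SELECT statement before execution.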

Prosecution Timeline

Sep 20, 2024
Application Filed
Jun 12, 2025
Non-Final Rejection — §101, §103, §DP
Sep 16, 2025
Response Filed
Oct 18, 2025
Final Rejection — §101, §103, §DP
Dec 23, 2025
Response after Non-Final Action
Jan 23, 2026
Request for Continued Examination
Jan 30, 2026
Response after Non-Final Action
Mar 06, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12455878
SYSTEM AND METHOD FOR SQL SERVER RESOURCES AND PERMISSIONS ANALYSIS IN IDENTITY MANAGEMENT SYSTEMS
2y 5m to grant Granted Oct 28, 2025
Patent 12436988
KEYPHRASE GENERATION
2y 5m to grant Granted Oct 07, 2025
Patent 12423367
SEARCH ENGINE INTERFACE USING TAG/OPERATOR SEARCH CHIP OBJECTS
2y 5m to grant Granted Sep 23, 2025
Patent 12424304
Systems and Methods for Analyzing Longitudinal Health Information and Generating a Dynamically Structured Electronic File
2y 5m to grant Granted Sep 23, 2025
Patent 12412664
ADDICTION PREDICTOR AND RELAPSE DETECTION SUPPORT TOOL
2y 5m to grant Granted Sep 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+37.6%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 164 resolved cases by this examiner. Grant probability derived from career allow rate.
