DETAILED ACTION
This communication is in response to the Application filed on 02/17/2026. Claims 1-4, 6, 8-11, 13, 15-18, and 20-25 are pending and have been examined.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/17/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
With respect to the 35 U.S.C. 103 rejections for claims 1-4, 6, 8-11, 13, 15-18, and 20-25, the applicant has amended the claim language to introduce new limitations. Any arguments regarding the amended claim language are considered moot in view of an updated prior art search necessitated by the changes. Details on the newly rejected amendments can be found in the 35 U.S.C. 103 section below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6, 8-11, 13, 15-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over “A Tale of Two Linkings: Dynamically Gating between Schema Linking and Structural Linking for Text-to-SQL Parsing” (Chen et al.) in view of “RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers” (Wang et al.), “Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation” (Guo et al.), and US Patent 11,409,738 B2 (Teja et al.).
Regarding Claims 1, 8, and 15, Chen et al. teaches a computer-implemented method comprising: generating an input string by concatenating a natural language utterance with a database schema representation for a database;
(The task of Text-to-SQL semantic parsing is to predict a SQL query S based on input (Q, G) where Q = {q1, . . . , q|Q|} is the NL question and G = (V, E) is the DB schema being queried.) (Section 2.1, Paragraph 1).
(To augment our model with pretrained BERT embeddings, we follow Hwang et al. (2019) and Zhang et al. (2019) to feed the concatenation of NL question and the textual descriptions of DB entities to BERT and use the top layer hidden states of BERT as the input embeddings.) (Section 3, Paragraph 5).
Chen et al. teaches using the combination of a natural language utterance and a database schema as the input. The concatenated input is fed to a pretrained BERT model.
wherein the database schema representation for the database includes a link attribute that refers to an entry in a table without referring to a name of the table to condense the database schema representation.
(G = (V, E) is the DB schema being queried) (Section 2.1, Paragraph 1).
(E = {(e^(s)_1, e^(t)_1, l_1), …, (e^(s)_|E|, e^(t)_|E|, l_|E|)} contains the relations l between source entity e^(s) and target entity e^(t), e.g., table-column relationships, foreign-primary key relationships, etc.) (Section 2.1, Paragraph).
(In SQL, a foreign key in one table is used to refer to a primary key in another table to link these two tables together for joint queries.) (Page 2902, Footnote 2).
The database schema is shown to contain component E, which in turn contains foreign-primary key relationships. These relationships refer to entries in tables through keys rather than table names, and these keys are provided to the database schema representation.
based on the input string, generating, by a Pre-trained Language Model (PLM), one or more embeddings of the natural language utterance and the database schema representation;
(The NL encoder takes the NL question tokens Q as input, maps them to word embeddings EQ…) (Section 2.1, Paragraph 3).
(To augment our model with pretrained BERT embeddings, we follow Hwang et al. (2019) and Zhang et al. (2019) to feed the concatenation of NL question and the textual descriptions of DB entities to BERT and use the top layer hidden states of BERT as the input embeddings.) (Section 3, Paragraph 5).
Chen et al. generates embeddings of the input. One disclosed method uses pretrained BERT as the first encoder; BERT is a type of PLM.
encoding, by a (Relation-Aware Transformer (RAT)) (taught by Wang et al.), relations between elements in the database schema representation and words in the natural language utterance based on the one or more embeddings;
(The task of Text-to-SQL semantic parsing is to predict a SQL query S based on input (Q, G) where Q = {q1, . . . , q|Q|} is the NL question and G = (V, E) is the DB schema being queried.) (Section 2.1, Paragraph 1).
(The schema encoder takes G as input and builds a relation-aware entity representation for every entity in the schema.) (Section 2.1, Paragraph 4).
Chen et al. uses a second encoder for relationship information in the database schema. The sequential order of the NL encoder coming first and the schema encoder coming second can be seen in Fig. 1.
Chen et al. does not explicitly teach: encoding, by a Relation-Aware Transformer (RAT); generating, by a grammar-based decoder, an intermediate database query representation based on the encoded relations and the one or more embeddings; selecting an interface specification for converting to a selected database query language of a plurality of potential database query languages; and based on the intermediate database query representation and the interface specification, generating a database query in the selected database query language.
However, Wang et al. teaches encoding, by a Relation-Aware Transformer (RAT).
(In this work, we present a unified framework, called RAT-SQL, for encoding relational structure in the database schema and a given question. It uses relation-aware self-attention to combine global reasoning over the schema entities and question words with structured reasoning over predefined schema relations.) (Section 1, Paragraph 8).
Wang et al. discloses a framework known as RAT-SQL. This performs the same task as the second encoder by finding relations in a database schema.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the schema encoder taught by Chen et al. to be a Relation-Aware Transformer as taught by Wang et al. This would have been an obvious substitution, as both perform the same task of encoding the input with relational data from a database schema (Wang et al. Section 1, Paragraph 8).
Chen et al. in view of Wang et al. does not explicitly teach: generating, by a grammar-based decoder, an intermediate database query representation based on the encoded relations and the one or more embeddings; selecting an interface specification for converting to a selected database query language of a plurality of potential database query languages; and based on the intermediate database query representation and the interface specification, generating a database query in the selected database query language.
However, Guo et al. teaches generating, by a grammar-based decoder, an intermediate database query representation based on the encoded relations and the one or more embeddings.
(…a SemQL query, which is an intermediate representation (IR) that we design to bridge NL and SQL.) (Section 1, Paragraph 4).
(The goal of the decoder is to synthesize SemQL queries. Given the tree structure of SemQL, we use a grammar-based decoder (Yin and Neubig, 2017, 2018) which leverages a LSTM to model the generation process of a SemQL query via sequential applications of actions) (Section 2.3, Decoder Subsection).
(When the decoder is going to select a column, it first makes a decision on whether to select from the memory or not, and then selects a column from the memory or the schema based on the decision.) (Section 2.3, Decoder Subsection, Paragraph 3).
Guo et al. teaches an intermediate representation called SemQL, which is formed using a grammar-based decoder. As seen in Fig. 4, the decoder takes information from the natural language embeddings produced by an NL Encoder (one or more embeddings) and the schema relations produced by a Schema Encoder (encoded relations).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the Text-to-SQL method taught by Chen et al. in view of Wang et al. to implement an intermediate database query representation as taught by Guo et al. This would have been an obvious improvement to address the mismatch in intent between an NL utterance and a SQL query and to reduce the negative impact of out-of-domain words in an NL utterance when forming a SQL query (Guo et al. Section 1, Paragraphs 2-3).
Chen et al. in view of Wang et al. and Guo et al. does not explicitly teach: selecting an interface specification for converting to a selected database query language of a plurality of potential database query languages; and based on the intermediate database query representation and the interface specification, generating a database query in the selected database query language.
However, Teja et al. teaches selecting an interface specification for converting to a selected database query language of a plurality of potential database query languages;
(The system comprises a user interface to receive a natural language query from a user, an intermediate query generator module to convert the natural language query into an intermediate query language 2 (IQL2), a parser to analyze, extract, and structure data from the generated IQL2, a query engine selector module comprising a selector component to identify one or more database query format based on a type of one or more data source and a query engine builder module comprising a composer component to generate one or more database query.) (Col. 3, Lines 10-21).
(The Query Engine Selector (QES) 102 component is used to select appropriate query engine framework for processing IQL2. It uses the IQL2 parser component 105 to identify one or more data sources and select an appropriate query engine(s) to process IQL2.) (Col. 4, Lines 12-16).
(The database specific query can be an SQL query type 1, SQL query type 2, No-SQL elastic search query or like. The Query Builder (QB) component 103 uses the IQL2 parser component 105 to identify elements required for query construction i.e., dimensions, measures, filters, actions and grouping.) (Col. 6, Lines 60-65).
Teja et al. teaches a system that converts an intermediate query language (IQL2) into a selected database query language.
and based on the intermediate database query representation and the interface specification, generating a database query in the selected database query language.
(The database specific query can be an SQL query type 1, SQL query type 2, No-SQL elastic search query or like. The Query Builder (QB) component 103 uses the IQL2 parser component 105 to identify elements required for query construction i.e., dimensions, measures, filters, actions and grouping.) (Col. 6, Lines 60-65).
(Checks for the type of query to be built; Identifies the table level operations to be performed like inner join, outer join, union etc; … Select Template basis the QE selection, type of query to be built and basis operations to be performed; If template available for all the above selection criteria, Constructs query by replacing the variable elements in template with actual values from IQL2 parsed data;) (Col. 5, Lines 36-48).
Furthermore, Chen et al. in view of Wang et al. and Guo et al. does not explicitly teach the limitation of claim 8: one or more processors; and one or more computer-readable media storing instructions. Nor do they teach the limitation of claim 15: One or more non-transitory computer-readable media storing instructions which, when executed by one or more processors, cause a system to perform operations comprising.
However, Teja et al. teaches one or more processors; and one or more computer-readable media storing instructions.
(The SQL query is executed against the database warehouse to extract the required data from the database warehouse and answer in form of graphs or narration and/or voice response is generated from query output) (Col. 1, Lines 61-64).
(Once the QE or QEs are selected, QES component further pings the QE server to check its availability. If not available, repeats the above steps for a new QE selection.) (Col. 4, Lines 44-47).
(Connect to the underlying data source using a QE or a Native Connector; Fetch data and store it in local memory) (Col. 6, Lines 9-11).
Teja et al. also teaches one or more non-transitory computer-readable media storing instructions which, when executed by one or more processors, cause a system to perform operations comprising.
(See the citations above to Col. 1, Lines 61-64; Col. 4, Lines 44-47; and Col. 6, Lines 9-11, which apply equally to this limitation.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the Text-to-SQL method taught by Chen et al. in view of Wang et al. and Guo et al. to select a desired database query language into which to convert the natural language, as taught by Teja et al. This would have been an obvious improvement to allow the system to fetch data from different types of databases (Teja et al. Col. 2, Lines 34-40).
Regarding Claims 2, 9, and 16, Chen et al. in view of Wang et al., Guo et al., and Teja et al. teach the method of claims 1, 8, and 15,
Furthermore, Chen et al. teaches further comprising: providing, (to the RAT,) (taught by Wang et al.) schema-linking relations that link elements in the database schema representation and words in the natural language utterance, wherein the embeddings are further generated based on the schema-linking relations.
(Text-to-SQL semantic parsers should learn to recognize an entity mention in the NL question and link it to the corresponding unique entity in the DB schema.) (Section 1, Schema linking Subsection).
(The initial representation of an entity is a combination of its words embeddings and type information. Then self-attention (Zhang et al., 2019; Shaw et al., 2019) or graph-based models (Bogin et al., 2019a; Wang et al., 2020) are utilized to exploit the relational information between each pair of) (Section 2.1, Paragraph 4).
In Chen et al., the schema encoder, which performs the second encoding, is described as utilizing relational information within the database schema in combination with the word embeddings.
Furthermore, Wang et al. teaches providing, to the RAT
(In this work, we present a unified framework, called RAT-SQL, for encoding relational structure in the database schema and a given question. It uses relation-aware self-attention to combine global reasoning over the schema entities and question words with structured reasoning over predefined schema relations.) (Section 1, Paragraph 8).
As discussed previously, Wang et al. discloses a framework known as RAT-SQL. It performs the same task as the schema encoder in Chen et al. by finding relations in a database schema, making the substitution of encoders a simple one.
Regarding Claims 3, 10, and 17, Chen et al. in view of Wang et al., Guo et al., and Teja et al. teach the method of claims 2, 9, and 16,
Furthermore, Chen et al. teaches wherein the schema-linking relations comprise metadata specifying synonyms for words.
(However, in practice, the solution is often relatively easy when a particular entity is well realized with similar wording in both the NL question and DB schema. As shown in Table 1, in Spider (Yu et al., 2018), the underlined mentions can almost exactly match the corresponding schema entities.) (Section 1, Schema linking Subsection).
The second example in Table 1 shows synonyms occurring in a natural language question (France and French). Furthermore, when describing the schema linking process, Chen et al. discusses handling similar wording in both the NL question and the DB schema. The mention of corresponding schema entities for this example shows that a schema entity exists matching the synonym “French” to the database term “France”.
Regarding Claims 4, 11, and 18, Chen et al. in view of Wang et al., Guo et al., and Teja et al. teach the method of claims 1, 8, and 15,
Furthermore, Guo et al. teaches further comprising: providing, to the grammar-based decoder, relational algebra grammar that represents the intermediate database query representation as a tree, wherein the intermediate database query representation is further based on the relational algebra grammar.
(The decoder interacts with three types of actions to generate a SemQL query, including APPLYRULE, SELECTCOLUMN and SELECTTABLE. APPLYRULE(r) applies a production rule r to the current derivation tree of a SemQL query.) (Section 2.3, Decoder Subsection).
Fig. 2 shows relational algebra, such as union and intersect operations, being used in the construction of the SemQL intermediate database query representation. Fig. 3 shows SemQL formed as a tree using the same elements, such as Select and Filter, as the relational algebra. The grammar-based decoder operates on the information in this SemQL tree.
Regarding Claims 6, 13, and 20, Chen et al. in view of Wang et al., Guo et al., and Teja et al. teach the method of claims 1, 8, and 15,
Furthermore, Teja et al. teaches further comprising: executing the database query on the database to retrieve data responsive to the natural language utterance.
(The method comprises receiving a natural language query from a user, converting the natural language query into an intermediate query language 2 (IQL2), parsing the generated IQL2 to identify one or more database query format based on a type of one or more data source using a selector component of a query engine selector module and generating one or more database query using a composer component of a query engine builder module.) (Col. 2, Lines 62-67).
(The Data Retriever (DR) component 104 is used to fetch data from multiple data sources. After fetching from one data source, the data retriever component 104 checks if any other data source is identified in IQL2 and fetches data from that particular source. This process continues until no other data source is left according to the IQL2. Then, the data retriever component 104 joins data by referring to join columns as identified in the IQL2 to produce an output file. The output file may be a JSON file, XML file or like.) (Col. 5 Line 63 to Col. 6, Line 4).
Teja et al. teaches receiving a natural language query, executing the resulting query on one or more databases, and providing output to the user.
Claims 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over “A Tale of Two Linkings: Dynamically Gating between Schema Linking and Structural Linking for Text-to-SQL Parsing” (Chen et al.) in view of “RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers” (Wang et al.), “Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation” (Guo et al.), US Patent 11,409,738 B2 (Teja et al.), and further in view of “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” (Devlin et al.).
Regarding Claim 21, Chen et al. in view of Wang et al., Guo et al., and Teja et al. teach the method of claim 1,
Chen et al. in view of Wang et al., Guo et al., and Teja et al. does not explicitly teach: wherein the PLM comprises an input layer, one or more decoding layers, and an output layer.
However, Devlin et al. teaches wherein the PLM comprises an input layer, one or more decoding layers, and an output layer.
(our input representation is able to unambiguously represent both a single sentence and a pair of sentences (e.g., ⟨Question, Answer⟩) in one token sequence.) (Section 3, Subsection “Input/Output Representation”, Paragraph 1).
(In this work, we denote the number of layers (i.e., Transformer blocks) as L, the hidden size as H, and the number of self-attention heads as A. We primarily report results on two model sizes: BERTBASE (L=12, H=768, A=12, Total Parameters=110M) and BERTLARGE (L=24, H=1024, A=16, Total Parameters=340M).) (Section 3, Subsection “Model Architecture”, Paragraph 2).
(…left-context-only version is referred to as a “Transformer decoder” since it can be used for text generation.) (Page 4, Footnote).
(BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be finetuned with just one additional output layer) (Abstract).
Devlin et al. describes the BERT architecture that Chen et al. uses as a Pre-trained Language Model (PLM) (see the Claim 1 rejection). As can be seen in the above quotes, and as more clearly visualized in Fig. 1, the model's architecture consists of an input layer, many transformer layers for decoding, and an output layer.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the first encoder implemented by Chen et al. to be a PLM in the form of BERT as taught by Devlin et al. This would have been obvious, as Chen et al. directly states the use of BERT and Devlin et al. merely details its structure (Chen et al. Section 1, Subsection “Structural Linking”, Paragraph 3).
Regarding Claim 22, Chen et al. in view of Wang et al., Guo et al., Teja et al., and Devlin et al. teach the method of claim 21,
Furthermore, Devlin et al. teaches wherein the one or more decoding layers include multiple transformer layers.
(In this work, we denote the number of layers (i.e., Transformer blocks) as L, the hidden size as H, and the number of self-attention heads as A. We primarily report results on two model sizes: BERTBASE (L=12, H=768, A=12, Total Parameters=110M) and BERTLARGE (L=24, H=1024, A=16, Total Parameters=340M).) (Section 3, Subsection “Model Architecture”, Paragraph 2).
Devlin et al. teaches the BERT pre-trained model, which uses many transformer layers, primarily 12 or 24.
Claims 23 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over “A Tale of Two Linkings: Dynamically Gating between Schema Linking and Structural Linking for Text-to-SQL Parsing” (Chen et al.) in view of “RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers” (Wang et al.), “Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation” (Guo et al.), US Patent 11,409,738 B2 (Teja et al.), and further in view of “A Syntactic Neural Model for General-Purpose Code Generation” (Yin et al.).
Regarding Claim 23, Chen et al. in view of Wang et al., Guo et al., and Teja et al. teach the method of claim 1,
Chen et al. in view of Wang et al., Guo et al., and Teja et al. does not explicitly teach: further comprising: training the grammar-based decoder using training data comprising natural language utterances labeled with corresponding intermediate database query representations.
(Given the tree structure of SemQL, we use a grammar-based decoder (Yin and Neubig, 2017, 2018) which leverages a LSTM to model the generation process of a SemQL query via sequential applications of actions.) (Guo et al., Section 2.3, Subsection “Decoder”).
The above quote from Guo et al. shows that their method uses the grammar-based decoder taught by Yin et al. Guo et al. was relied upon to teach the intermediate database query representation in the independent claim.
However, Yin et al. teaches further comprising: training the grammar-based decoder using training data comprising natural language utterances labeled with corresponding intermediate database query representations.
(Given a dataset of pairs of NL descriptions x_i and code snippets c_i, we parse c_i into its AST y_i and decompose y_i into a sequence of oracle actions under the grammar model. The model is then optimized by maximizing the log-likelihood of the oracle action sequence.) (Yin et al. Section 4.3, Paragraph 1).
In Yin et al., the grammar-based decoder is trained on natural language descriptions paired with code snippets; as implemented by Guo et al., those code snippets would be the SemQL intermediate representation. Yin et al. thus provides the detailed context of the grammar-based decoder used by Guo et al.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the natural language to database query transformation method taught by Chen et al. in view of Wang et al., Guo et al., and Teja et al. to use the grammar-based decoder method as taught by Yin et al. This would have been an obvious inclusion, as Guo et al. directly cites the use of the Yin et al. grammar-based decoder in their architecture (Guo et al. Section 1; Section 2.3, Subsection “Decoder”).
Regarding Claim 24, Chen et al. in view of Wang et al., Guo et al., Teja et al., and Yin et al. teach the method of claim 23,
Furthermore, Guo et al. teaches wherein the training data further comprises database schema information.
(The decoder interacts with three types of actions to generate a SemQL query, including APPLYRULE, SELECTCOLUMN and SELECTTABLE. APPLYRULE(r) applies a production rule r to the current derivation tree of a SemQL query. SELECTCOLUMN(c) and SELECTTABLE(t) selects a column c and a table t from the schema, respectively) (Guo et al. Section 2.3, Subsection “Decoder”).
The grammar-based decoder that Guo et al. deploys uses actions to convert natural language to the intermediate database query representation. In Guo et al., the particular actions used select columns and tables, which means the decoder was trained for this purpose and thus trained on database schema information.
(To this end, we propose a syntax-driven neural code generation model. The backbone of our approach is a grammar model (§ 3) which formalizes the generation story of a derivation AST into sequential application of actions that either apply production rules (§ 3.1), or emit terminal tokens (§ 3.2). The underlying syntax of the PL is therefore encoded in the grammar model a priori as the set of possible actions.) (Yin et al. Section 1, Paragraph 5).
The above quote reinforces the previous statement with information from Yin et al., who designed the grammar-based decoder that Guo et al. deploys.
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over “A Tale of Two Linkings: Dynamically Gating between Schema Linking and Structural Linking for Text-to-SQL Parsing” (Chen et al.) in view of “RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers” (Wang et al.), “Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation” (Guo et al.), US Patent 11,409,738 B2 (Teja et al.), and further in view of “Querylizer: An Interactive Platform for Database Design and Text to SQL Conversion” (Deshpande et al.).
Regarding Claim 25, Chen et al. in view of Wang et al., Guo et al., and Teja et al. teach the method of claim 1,
Chen et al. in view of Wang et al., Guo et al., and Teja et al. does not explicitly teach: wherein the database schema representation is not a foreign-primary key representation.
However, Deshpande et al. teaches wherein the database schema representation is not a foreign-primary key representation.
(The encoder used here is called the RAT-SQL encoder [12]. The encoder also provides a joint contextualised representation of the utterance and schema. The utterance is concatenated to a linear form of the schema and passed through a stack of transformer layers. The encoder is also based on a relation-aware self-attention mechanism that enables address schema encoding, feature representation and schema linking within a text2SQL encoder. This mechanism encodes the structure of the schema and other prior knowledge of the relations between the encoded tokens.) (Section 5, Subsection 2, Paragraph 3).
Deshpande et al. teaches encoding the natural language utterance and database schema together using address schema encoding to link the structures of the encoded tokens. This is a method of providing a database schema representation without using a foreign-primary key representation.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the natural language to database query transformation method taught by Chen et al. in view of Wang et al., Guo et al., and Teja et al. to use the address-based schema linking method as taught by Deshpande et al. This would have been an obvious replacement, as it serves the same purpose of linking the natural language utterance to the database schema as the foreign-primary key method does in Chen et al. (Deshpande et al. Section 5, Subsection 2, Paragraph 3).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS DANIEL LOWEN whose telephone number is (571)272-5828. The examiner can normally be reached Mon-Fri 8:00am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS D LOWEN/Examiner, Art Unit 2653
/Paras D Shah/Supervisory Patent Examiner, Art Unit 2653
03/05/2026