Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
2. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless—
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
3. Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Roychowdhury (U.S. Publication No. 20250110951).
Regarding claim 1, Roychowdhury discloses a computer-implemented method for configuring cloud-based applications ([0011] - where the set of servers 120 may include a set of non-transitory storage media storing program instructions to perform one or more operations of subsystems 121-125. By performing such operations using one or more components shown in the system 100, some embodiments may generate queries adapted to different types of schema of a data structure), the method being executed by one or more processors and comprising:
determining a set of queries corresponding to a set of configuration settings of an application ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc.);
for each query in the set of queries, querying a database to return a set of chunks, each chunk in each set of chunks comprising a portion of a requirements document ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc.);
providing a set of prompts, each prompt corresponding to a query in the set of queries and comprising a respective set of chunks as context ([0018] - determine one or more context values based on the set of inputs, where a context value may be one or more values of the set of inputs themselves or may be values derived from the set of inputs. For example, some embodiments may determine, as a context value, a language type for a text string, where the language type may include categories such as “natural language,” “Python,” “GraphQL,” etc. Some embodiments may then use these other types of user-provided input to select a language model, configure a language model, or otherwise determine parameters for a language model used to generate a query);
receiving, from a large language model (LLM), a set of responses, each response corresponding to a prompt in the set of prompts ([0041] - provide a first text string representing the first query to a large language model trained using operations described in this disclosure, where the trained large language model may output a second text string representing a query in the SQL query language [0043] - embodiments may use a prompt preprocessor associated with a first language model used to generate queries. The prompt preprocessor may perform operations such as performing preprocessing operations, where the preprocessing operations may include stemming, lemmatizing, or tokenizing a model);
querying a knowledge graph based on the set of responses to provide a set of knowledge graph results ([0001] - queries are used to retrieve any useful information from databases, knowledge graphs, or other types of data structures. [0018] - determine one or more context values based on the set of inputs, where a context value may be one or more values of the set of inputs themselves or may be values derived from the set of inputs. For example, some embodiments may determine, as a context value, a language type for a text string, where the language type may include categories such as “natural language,” “Python,” “GraphQL,” etc. Some embodiments may then use these other types of user-provided input to select a language model, configure a language model, or otherwise determine parameters for a language model used to generate a query);
providing a configuration file using the set of responses and the set of knowledge graph results ([0019] - language models based on context information, such as a language or environment of an initial query provided by a user or a target database or target environment provided by the user. For example, some embodiments may determine that a user-provided query is a GraphQL query and that a target database is “DB2.” Some embodiments may then determine which combination of context parameters matches with a stored set of context parameters, where the stored set of context parameters is associated with a language model or a set of parameters of a liquid model);
and configuring the application using the configuration file ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc. [0027] - configure a first language model for generating queries with a schema of a target database, and a first set of queries, as indicated by block 304. The first language model may be a query-generating first language model designed to output a query in a target query language. Some embodiments may preprocess the input or output for a language model based on a schema for a target database. For example, some embodiments may preprocess the input or output of a language model to indicate a type of field in place of a specific field name, and the type of field may be stored in metadata associated with a database or other data structure. Some embodiments may then train a query-generating language model based on a stored set of training queries. For example, some embodiments may use training data that includes a set of model queries, where the set of model queries may be retrieved from a database of previously used queries).
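For orientation only, the retrieval-augmented pipeline recited in claim 1 and mapped above may be sketched as follows. All function names (`retrieve_chunks`, `call_llm`, `query_graph`, `build_configuration`) and the toy retrieval and lookup logic are hypothetical stand-ins for illustration; they are not disclosed by Roychowdhury or the application.

```python
# Illustrative sketch of the claimed pipeline; all names are hypothetical.

def retrieve_chunks(query, documents, top_k=2):
    """Naive keyword retrieval: rank requirements-document chunks by term overlap."""
    terms = set(query.lower().split())
    scored = sorted(documents, key=lambda c: -len(terms & set(c.lower().split())))
    return scored[:top_k]

def call_llm(prompt):
    """Placeholder for an LLM call; echoes a value derived from the prompt's query line."""
    return f"setting_for({prompt.splitlines()[0]})"

def query_graph(response, graph):
    """Look up dependencies for an LLM response in a toy knowledge graph (dict)."""
    return graph.get(response, [])

def build_configuration(settings, documents, graph):
    # One query per configuration setting.
    queries = [f"How should '{s}' be configured?" for s in settings]
    config = {}
    for setting, query in zip(settings, queries):
        chunks = retrieve_chunks(query, documents)           # chunks of a requirements document
        prompt = query + "\nContext:\n" + "\n".join(chunks)  # prompt with chunks as context
        response = call_llm(prompt)                          # one LLM response per prompt
        deps = query_graph(response, graph)                  # knowledge-graph results
        config[setting] = {"value": response, "dependencies": deps}
    return config                                            # the "configuration file"
```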
Regarding claim 2, Roychowdhury discloses the method, wherein each response in the set of responses comprises computer-executable code for a respective configuration setting in the set of configuration settings ([0018] - determine that a user-provided set of inputs includes a text string, where the text string is source code for a user-provided query. Some embodiments may then determine one or more context values based on the set of inputs, where a context value may be one or more values of the set of inputs themselves or may be values derived from the set of inputs. For example, some embodiments may determine, as a context value, a language type for a text string, where the language type may include categories such as “natural language,” “Python,” “GraphQL,” etc. Some embodiments may then use these other types of user-provided input to select a language model, configure a language model, or otherwise determine parameters for a language model used to generate a query).
Regarding claim 3, Roychowdhury discloses the method, wherein each query in the set of queries comprises a natural language query corresponding to a configuration setting in the set of configuration settings ([0018] - determine, as a context value, a language type for a text string, where the language type may include categories such as “natural language,” “Python,” “GraphQL,” etc. Some embodiments may then use these other types of user-provided input to select a language model, configure a language model, or otherwise determine parameters for a language model used to generate a query).
Regarding claim 4, Roychowdhury discloses the method, wherein at least one knowledge graph result comprises a set of dependencies associated with a configuration setting in the set of configuration settings ([0047] - trained query-generating model to generate a function that includes a set of lookahead functions to perform operations such as determining the sub-fields “SF1,” “SF2,” and “SF3” or determining backend fields from sub-fields and a dependency map).
Regarding claim 5, Roychowdhury discloses the method, further comprising:
determining a query corresponding to a configuration setting that has been added to the set of configuration settings of the application ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc.);
querying the database to return a set of chunks responsive to the query ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc.);
providing a prompt corresponding to the query and comprising the set of chunks as context ([0018] - determine one or more context values based on the set of inputs, where a context value may be one or more values of the set of inputs themselves or may be values derived from the set of inputs. For example, some embodiments may determine, as a context value, a language type for a text string, where the language type may include categories such as “natural language,” “Python,” “GraphQL,” etc. Some embodiments may then use these other types of user-provided input to select a language model, configure a language model, or otherwise determine parameters for a language model used to generate a query);
receiving, from the LLM, a response corresponding to the prompt ([0041] - provide a first text string representing the first query to a large language model trained using operations described in this disclosure, where the trained large language model may output a second text string representing a query in the SQL query language [0043] - embodiments may use a prompt preprocessor associated with a first language model used to generate queries. The prompt preprocessor may perform operations such as performing preprocessing operations, where the preprocessing operations may include stemming, lemmatizing, or tokenizing a model);
and using the response, configuring the configuration setting that has been added to the set of configuration settings of the application ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc. [0027] - configure a first language model for generating queries with a schema of a target database, and a first set of queries, as indicated by block 304. The first language model may be a query-generating first language model designed to output a query in a target query language. Some embodiments may preprocess the input or output for a language model based on a schema for a target database. For example, some embodiments may preprocess the input or output of a language model to indicate a type of field in place of a specific field name, and the type of field may be stored in metadata associated with a database or other data structure. Some embodiments may then train a query-generating language model based on a stored set of training queries. For example, some embodiments may use training data that includes a set of model queries, where the set of model queries may be retrieved from a database of previously used queries).
Regarding claim 6, Roychowdhury discloses the method, wherein the query is added to the set of queries in response to addition of the configuration setting to the set of configuration settings ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc. [0027] - configure a first language model for generating queries with a schema of a target database, and a first set of queries, as indicated by block 304. The first language model may be a query-generating first language model designed to output a query in a target query language. Some embodiments may preprocess the input or output for a language model based on a schema for a target database. For example, some embodiments may preprocess the input or output of a language model to indicate a type of field in place of a specific field name, and the type of field may be stored in metadata associated with a database or other data structure. Some embodiments may then train a query-generating language model based on a stored set of training queries. For example, some embodiments may use training data that includes a set of model queries, where the set of model queries may be retrieved from a database of previously used queries).
Regarding claim 7, Roychowdhury discloses the method, further comprising:
providing an initial configuration file based on the set of responses ([0017] - determine that a specific data field is associated with a data table and augment or otherwise associate an initial input query with the identifier of the data table to increase an accuracy of the query);
and updating the initial configuration file using the set of knowledge graph results to provide the configuration file ([0001] - queries are used to retrieve any useful information from databases, knowledge graphs, or other types of data structures. [0017] - determine that a specific data field is associated with a data table and augment or otherwise associate an initial input query with the identifier of the data table to increase an accuracy of the query).
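The two-step generation recited in claim 7 (an initial configuration file built from the LLM responses alone, then updated with the knowledge-graph results) may be sketched as follows, again for illustration only; the function names and data shapes are hypothetical and are not drawn from the reference.

```python
# Hypothetical sketch of claim 7's two-step configuration-file generation.

def initial_configuration(responses):
    """Build the initial configuration file: one entry per LLM response."""
    return {setting: {"value": value} for setting, value in responses.items()}

def update_configuration(config, graph_results):
    """Merge knowledge-graph results (e.g. dependencies) into each entry."""
    updated = {k: dict(v) for k, v in config.items()}  # copy; leave the initial file intact
    for setting, deps in graph_results.items():
        if setting in updated:
            updated[setting]["dependencies"] = deps
    return updated
```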
Regarding claim 8, Roychowdhury discloses a non-transitory computer-readable storage medium coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations for configuring cloud-based applications ([0011] - where the set of servers 120 may include a set of non-transitory storage media storing program instructions to perform one or more operations of subsystems 121-125. By performing such operations using one or more components shown in the system 100, some embodiments may generate queries adapted to different types of schema of a data structure), the operations comprising:
determining a set of queries corresponding to a set of configuration settings of an application ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc.);
for each query in the set of queries, querying a database to return a set of chunks, each chunk in each set of chunks comprising a portion of a requirements document ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc.);
providing a set of prompts, each prompt corresponding to a query in the set of queries and comprising a respective set of chunks as context ([0018] - determine one or more context values based on the set of inputs, where a context value may be one or more values of the set of inputs themselves or may be values derived from the set of inputs. For example, some embodiments may determine, as a context value, a language type for a text string, where the language type may include categories such as “natural language,” “Python,” “GraphQL,” etc. Some embodiments may then use these other types of user-provided input to select a language model, configure a language model, or otherwise determine parameters for a language model used to generate a query);
receiving, from a large language model (LLM), a set of responses, each response corresponding to a prompt in the set of prompts ([0041] - provide a first text string representing the first query to a large language model trained using operations described in this disclosure, where the trained large language model may output a second text string representing a query in the SQL query language [0043] - embodiments may use a prompt preprocessor associated with a first language model used to generate queries. The prompt preprocessor may perform operations such as performing preprocessing operations, where the preprocessing operations may include stemming, lemmatizing, or tokenizing a model);
querying a knowledge graph based on the set of responses to provide a set of knowledge graph results ([0001] - queries are used to retrieve any useful information from databases, knowledge graphs, or other types of data structures. [0018] - determine one or more context values based on the set of inputs, where a context value may be one or more values of the set of inputs themselves or may be values derived from the set of inputs. For example, some embodiments may determine, as a context value, a language type for a text string, where the language type may include categories such as “natural language,” “Python,” “GraphQL,” etc. Some embodiments may then use these other types of user-provided input to select a language model, configure a language model, or otherwise determine parameters for a language model used to generate a query);
providing a configuration file using the set of responses and the set of knowledge graph results ([0019] - language models based on context information, such as a language or environment of an initial query provided by a user or a target database or target environment provided by the user. For example, some embodiments may determine that a user-provided query is a GraphQL query and that a target database is “DB2.” Some embodiments may then determine which combination of context parameters matches with a stored set of context parameters, where the stored set of context parameters is associated with a language model or a set of parameters of a liquid model);
and configuring the application using the configuration file ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc. [0027] - configure a first language model for generating queries with a schema of a target database, and a first set of queries, as indicated by block 304. The first language model may be a query-generating first language model designed to output a query in a target query language. Some embodiments may preprocess the input or output for a language model based on a schema for a target database. For example, some embodiments may preprocess the input or output of a language model to indicate a type of field in place of a specific field name, and the type of field may be stored in metadata associated with a database or other data structure. Some embodiments may then train a query-generating language model based on a stored set of training queries. For example, some embodiments may use training data that includes a set of model queries, where the set of model queries may be retrieved from a database of previously used queries).
Dependent claims 9-14 are analogous in scope to claims 2-7, and are rejected according to the same reasoning.
Regarding claim 15, Roychowdhury discloses a system, comprising:
a computing device ([0011] - A system 100 includes a client device 102);
and a computer-readable storage device coupled to the computing device and having instructions stored thereon which, when executed by the computing device, cause the computing device to perform operations for configuring cloud-based applications ([0011] - where the set of servers 120 may include a set of non-transitory storage media storing program instructions to perform one or more operations of subsystems 121-125. By performing such operations using one or more components shown in the system 100, some embodiments may generate queries adapted to different types of schema of a data structure), the operations comprising:
determining a set of queries corresponding to a set of configuration settings of an application ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc.);
for each query in the set of queries, querying a database to return a set of chunks, each chunk in each set of chunks comprising a portion of a requirements document ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc.);
providing a set of prompts, each prompt corresponding to a query in the set of queries and comprising a respective set of chunks as context ([0018] - determine one or more context values based on the set of inputs, where a context value may be one or more values of the set of inputs themselves or may be values derived from the set of inputs. For example, some embodiments may determine, as a context value, a language type for a text string, where the language type may include categories such as “natural language,” “Python,” “GraphQL,” etc. Some embodiments may then use these other types of user-provided input to select a language model, configure a language model, or otherwise determine parameters for a language model used to generate a query);
receiving, from a large language model (LLM), a set of responses, each response corresponding to a prompt in the set of prompts ([0041] - provide a first text string representing the first query to a large language model trained using operations described in this disclosure, where the trained large language model may output a second text string representing a query in the SQL query language [0043] - embodiments may use a prompt preprocessor associated with a first language model used to generate queries. The prompt preprocessor may perform operations such as performing preprocessing operations, where the preprocessing operations may include stemming, lemmatizing, or tokenizing a model);
querying a knowledge graph based on the set of responses to provide a set of knowledge graph results ([0001] - queries are used to retrieve any useful information from databases, knowledge graphs, or other types of data structures. [0018] - determine one or more context values based on the set of inputs, where a context value may be one or more values of the set of inputs themselves or may be values derived from the set of inputs. For example, some embodiments may determine, as a context value, a language type for a text string, where the language type may include categories such as “natural language,” “Python,” “GraphQL,” etc. Some embodiments may then use these other types of user-provided input to select a language model, configure a language model, or otherwise determine parameters for a language model used to generate a query);
providing a configuration file using the set of responses and the set of knowledge graph results ([0019] - language models based on context information, such as a language or environment of an initial query provided by a user or a target database or target environment provided by the user. For example, some embodiments may determine that a user-provided query is a GraphQL query and that a target database is “DB2.” Some embodiments may then determine which combination of context parameters matches with a stored set of context parameters, where the stored set of context parameters is associated with a language model or a set of parameters of a liquid model);
and configuring the application using the configuration file ([0013] - the set of databases 130 may store training data to train a language model, model parameters used to configure a language model, generated queries, issue indicators associated with portions of a query, etc. [0027] - configure a first language model for generating queries with a schema of a target database, and a first set of queries, as indicated by block 304. The first language model may be a query-generating first language model designed to output a query in a target query language. Some embodiments may preprocess the input or output for a language model based on a schema for a target database. For example, some embodiments may preprocess the input or output of a language model to indicate a type of field in place of a specific field name, and the type of field may be stored in metadata associated with a database or other data structure. Some embodiments may then train a query-generating language model based on a stored set of training queries. For example, some embodiments may use training data that includes a set of model queries, where the set of model queries may be retrieved from a database of previously used queries).
Dependent claims 16-20 are analogous in scope to claims 2-6, and are rejected according to the same reasoning.
Conclusion
4. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Barkan (U.S. Patent No. 12524403) teaches context-based prompt generation for automated translation between natural language and query language. Krishnan (U.S. Publication No. 20250005050) teaches a generative summarization dialog-based information retrieval system.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ETHAN DANIEL KIM whose telephone number is (571) 272-1405. The examiner can normally be reached Monday through Friday, 9:00 a.m. to 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ETHAN DANIEL KIM/
Examiner, Art Unit 2658
/RICHEMOND DORVIL/Supervisory Patent Examiner, Art Unit 2658