Prosecution Insights
Last updated: April 19, 2026
Application No. 18/639,470

Machine Learning for Automated Development of Data Schema from Natural Language Inputs

Non-Final OA: §101, §103, §112
Filed: Apr 18, 2024
Examiner: SOLTANZADEH, AMIR
Art Unit: 2191
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 81% (340 granted / 421 resolved; +25.8% vs TC avg)
Interview Lift: +16.9% (resolved cases with vs. without interview)
Avg Prosecution: 2y 6m (35 currently pending)
Total Applications: 456 (across all art units)

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 60.4% (+20.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 421 resolved cases
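The headline examiner figures above are simple ratios; the short sketch below shows how they can be reproduced from the raw counts in the report. Note the Tech Center average is not given directly, so it is backed out of the stated +25.8% delta and should be treated as an estimate.

```python
# Reproducing the dashboard's headline examiner figures from the raw
# counts shown above (340 granted of 421 resolved). The TC 2100 average
# is backed out of the stated +25.8% delta, so it is an estimate.

granted, resolved = 340, 421

allow_rate = granted / resolved          # career allowance rate
tc_average = allow_rate - 0.258          # implied Tech Center average

print(f"Career allow rate: {allow_rate:.1%}")   # 80.8%, shown rounded to 81%
print(f"Implied TC average: {tc_average:.1%}")
```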

Office Action

Rejections under §101, §103, and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 6 recites the limitation “wherein the operations further comprise”. The term "the operations" lacks clear antecedent basis. Claim 1 recites "a computer-implemented method... comprising" followed by three steps, but does not explicitly introduce "operations." Dependent claim 7 is also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to cure the deficiencies of its independent claim.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. 
The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “An application development platform … configured to…” in claim 9. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1 and 9, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitation “generate, …, a data schema for the software application; and inserting, …, the data schema generated by the machine-learned language model into a declarative model associated with the software application” as drafted, is a process that, under its broadest reasonable interpretation, recites the abstract idea of mental processes.
These limitations encompass a human mind carrying out these functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas. This judicial exception is not integrated into a practical application. The claims recite the following additional elements: “a computing system comprising one or more computing devices,” “as an output of the machine-learned language model,” “by the computing system, the natural language description with a machine-learned language model”, and “obtaining a natural language description of a software application”. The additional elements “a computing system comprising one or more computing devices,” “as an output of the machine-learned language model,” and “by the computing system, the natural language description with a machine-learned language model” are merely instructions to implement an abstract idea on a computer, or merely using a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). The additional element “obtaining a natural language description of a software application” does nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea, to perform a task. See MPEP 2106.05(g). Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “a computing system comprising one or more computing devices,” “as an output of the machine-learned language model,” and “by the computing system, the natural language description with a machine-learned language model” are generic computer components and instructions used as the tools to perform the abstract idea. See MPEP 2106.05(f). As to the additional element “obtaining a natural language description of a software application”, the courts have identified gathering data and displaying the output of the abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.

Claims 2 and 10, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitation “generating, … a set of application code for the software application based on the declarative model that includes the data schema” as drafted, is a process that, under its broadest reasonable interpretation, recites the abstract idea of mental processes. These limitations encompass a human mind carrying out these functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas. This judicial exception is not integrated into a practical application.
The claims recite the following additional element “by a code generation system of the computing system”, which is merely an instruction to implement an abstract idea on a computer, or merely using a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are generic computer components and instructions used as the tools to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.

Claims 3 and 11 further define the “software application” as part of the “obtaining” function set forth in the claims from which they depend, and thus do nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea, to perform a task. See MPEP 2106.05(g). Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. Further, they are not sufficient to amount to significantly more than the judicial exception. The courts have identified gathering data and displaying the output of the abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d).
Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.

Claims 4-5 and 12-13 further define the “data source” as part of the “obtaining” function set forth in the claims from which they depend, and thus do nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea, to perform a task. See MPEP 2106.05(g). Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. Further, they are not sufficient to amount to significantly more than the judicial exception. The courts have identified gathering data and displaying the output of the abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.

Claims 6 and 14, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitation “generate, …, an updated data schema for the software application; and inserting, …, the updated data schema generated by the machine-learned language model into a declarative model associated with the software application” as drafted, is a process that, under its broadest reasonable interpretation, recites the abstract idea of mental processes. These limitations encompass a human mind carrying out these functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper.
Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas. This judicial exception is not integrated into a practical application. The claims recite the following additional elements: “a computing system comprising one or more computing devices,” “as an output of the machine-learned language model,” “processing, by the computing system, the second natural language description with the machine-learned language model”, and “obtaining a second natural language description of the software application, wherein the second natural language description specifies one or more requested changes to the software application”. The additional elements “a computing system comprising one or more computing devices,” “as an output of the machine-learned language model,” and “processing, by the computing system, the second natural language description with the machine-learned language model” are merely instructions to implement an abstract idea on a computer, or merely using a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). The additional element “obtaining a second natural language description of the software application, wherein the second natural language description specifies one or more requested changes to the software application” does nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea, to perform a task. See MPEP 2106.05(g). Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “a computing system comprising one or more computing devices,” “as an output of the machine-learned language model,” and “processing, by the computing system, the second natural language description with the machine-learned language model” are generic computer components and instructions used as the tools to perform the abstract idea. See MPEP 2106.05(f). As to the additional element “obtaining a second natural language description of the software application, wherein the second natural language description specifies one or more requested changes to the software application”, the courts have identified gathering data and displaying the output of the abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.

Claims 7 and 15, as drafted, recite a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitation “concatenating, … the data schema with the second natural language description to generate a concatenated input; and processing … the concatenated input with the machine-learned language model to generate, …, the updated data schema for the software application” as drafted, is a process that, under its broadest reasonable interpretation, recites the abstract idea of mental processes. These limitations encompass a human mind carrying out these functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas.
This judicial exception is not integrated into a practical application. The claims recite the following additional elements “computing system,” and “as an output of the machine-learned language model,” which are merely instructions to implement an abstract idea on a computer, or merely using a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “computing system,” and “as an output of the machine-learned language model,” are generic computer components and instructions used as the tools to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.

Claims 8 and 16 further define the “natural language description” as part of the “obtaining” function set forth in the claims from which they depend, and thus do nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea, to perform a task. See MPEP 2106.05(g). Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. Further, they are not sufficient to amount to significantly more than the judicial exception.
The courts have identified gathering data and displaying the output of the abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.

Claim 17, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitation “generate, … a predicted data schema for the software application; evaluating, … a loss function that generates a loss value based on a comparison of the ground truth data schema with the predicted data schema; and modifying, … one or more parameter values of the language model based on the loss function” as drafted, is a process that, under its broadest reasonable interpretation, recites the abstract idea of mental processes. These limitations encompass a human mind carrying out these functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas. This judicial exception is not integrated into a practical application. The claim recites the following additional elements: “One or more non-transitory computer-readable media that collectively store instructions that, when executed by a computing system, cause the computing system to perform operations”, “processing, by the computing system, the natural language description with a language model”, “as an output of the language model”, and “obtaining … a training data pair comprising: a natural language description of a software application and a ground truth data schema for the software application”.
The additional elements “One or more non-transitory computer-readable media that collectively store instructions that, when executed by a computing system, cause the computing system to perform operations”, “processing, by the computing system, the natural language description with a language model”, and “as an output of the language model” are merely instructions to implement an abstract idea on a computer, or merely using a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). The additional element “obtaining … a training data pair comprising: a natural language description of a software application and a ground truth data schema for the software application” does nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea, to perform a task. See MPEP 2106.05(g). Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “One or more non-transitory computer-readable media that collectively store instructions that, when executed by a computing system, cause the computing system to perform operations”, “processing, by the computing system, the natural language description with a language model”, and “as an output of the language model” are generic computer components and instructions used as the tools to perform the abstract idea. See MPEP 2106.05(f).
As to the additional element “obtaining … a training data pair comprising: a natural language description of a software application and a ground truth data schema for the software application”, the courts have identified gathering data and displaying the output of the abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.

Claim 18 further defines the “natural language description” as part of the “obtaining” function set forth in the claim from which it depends, and thus does nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea, to perform a task. See MPEP 2106.05(g). Accordingly, the additional elements recited in the claim do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. Further, they are not sufficient to amount to significantly more than the judicial exception. The courts have identified gathering data and displaying the output of the abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements recited in the claim cannot provide an inventive concept. Thus, the claim is not patent eligible.

Claim 19 recites the following additional element “wherein the language model comprises a pre-trained language model and said modifying comprises fine-tuning the language model”, which is merely an instruction to implement an abstract idea on a computer, or merely using a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f).
Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are generic computer components and instructions used as the tools to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.

Claim 20 recites the following additional element “wherein fine-tuning the language model comprises training the language model on a custom dataset that provides example data schemas responsive to example natural language descriptions”, which is merely an instruction to implement an abstract idea on a computer, or merely using a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional elements recited in the claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and thus fail to integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are generic computer components and instructions used as the tools to perform the abstract idea. See MPEP 2106.05(f). Accordingly, the additional elements recited in the claims cannot provide an inventive concept. Thus, the claims are not patent eligible.
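For orientation, the claim 1 flow that the §101 analysis above dissects into an abstract idea plus additional elements can be sketched in a few lines. All function, variable, and field names here are hypothetical illustrations; the applicant's actual implementation is not of record in this document.

```python
# Hypothetical sketch of the claim 1 pipeline: obtain a natural language
# description, process it with a (stand-in) language model to generate a
# data schema, and insert that schema into a declarative app model.
# Every name below is illustrative, not taken from the application.

def generate_schema(description, model):
    # "processing ... the natural language description with a
    # machine-learned language model" to produce a data schema
    return model(description)

def insert_schema(schema, declarative_model):
    # "inserting ... the data schema ... into a declarative model
    # associated with the software application"
    declarative_model["data_schema"] = schema
    return declarative_model

def toy_model(description):
    # Toy stand-in for the machine-learned language model.
    return {"tables": [{"name": "orders", "fields": ["id", "customer", "total"]}]}

app_model = insert_schema(
    generate_schema("an app that tracks customer orders", toy_model),
    {"app": "order-tracker"},
)
```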
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Spring (US 8,126,832 B2) in view of Chen (US 2024/0020116 A1), further in view of Gidugu (US 10,296,848 B1).

Regarding Claim 1, Spring (US 8,126,832 B2) teaches A computer-implemented method for automated software development, the method comprising: obtaining, by a computing system comprising one or more computing devices, a natural language description of a software application; (Col 3: ln 21-43, "The input is received in a first format. The input may then be translated into a second format such as, for example, an integer format. The input may be broken into tokens which are each assigned a corresponding integer.
The input is stored in an input array, which is a one-dimensional linear array that contains one token for each integer or item of input."; Col 18: ln 56-63, “A system according to the present invention may operate with natural speech of a user.”) Examiner Comments: This passage describes receiving and translating natural language input into structured tokens and arrays, teaching obtaining a natural language description for software configuration/development as the system processes user inputs to build application structures. processing, by the computing system, the natural language description with a machine-learned language model to generate, as an output of the machine-learned language model, a data schema for the software application; (Col 3: ln 53-67, "The input array is expanded into a multi-dimensional array. The first step in constructing the multi-dimensional array is determining the root token, if available, for every token in the input array... The input array is then expanded into a multi-dimensional array including all tokens that are related to the tokens in the input array or alternatively all tokens that may be derived from the root token... The multi-dimensional array may be constructed having a number of linear arrays with the number of linear arrays determined by the number of tokens in the root token array and the number of tokens corresponding to each root token.") Examiner Comments: This passage describes using language processing models (including ML-like HMMs for matching) to expand NL inputs into multi-dimensional data structures/schemas, teaching generation of a data schema as output from the processed natural language description. inserting, by the computing system, the data schema generated by the machine-learned language model into a declarative model associated with the software application. (Col 121: ln 13-60, "The system stores a weight value for each concept. 
The weight value for each concept is then correlated with or assigned to a token in the input array... The weight may be used to signify the importance of the various elements in the array... The various elements in the linear array are assigned a weight based on how frequently the elements appear in the various databases attached to the system... The system will access a database such as an external database to obtain weight information.") Examiner Comments: This passage describes inserting weighted concepts and structures into database-linked models, teaching insertion of the generated schema into a declarative database model for the application. Spring did not specifically teach the machine-learned language model specifically for software application code generation aspects. However, Chen (US 2024/0020116 A1) teaches the machine-learned language model specifically for software application (Para [0005], "a method for generating computer code based on natural language input may include receiving a docstring representing natural language text specifying a digital programming result; generating, using a trained machine-learning model and based on the docstring, one or more computer code samples configured to produce respective candidate results; causing each of the one or more computer code samples to be executed; identifying, based on the executing, at least one of the computer code samples configured to produce a particular candidate result associated with the digital programming result.") Examiner Comments: This passage describes using a trained ML model to generate code from NL input, teaching the language model for automated software development from natural language descriptions. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring’s teaching into Chen’s in order to enhance NL processing with code generation capabilities, enabling automated software creation from descriptions for improved development efficiency and accessing a docstring generation model configured to generate docstrings from computer code; receiving one or more computer code samples; generating, using the docstring generation model and based on the received one or more computer code samples, one or more candidate docstrings representing natural language text (Chen [abstract/Summary]).

Spring and Chen did not specifically teach data source details like spreadsheet or SQL. However, Gidugu (US 10,296,848 B1) teaches data source details (Para [0074], "This service could allow the user to select the type of repository, whether it is a database (e.g., Oracle, Microsoft Access database, or other types of databases), flat file, spreadsheet (e.g., Microsoft Excel), etc. The repository can be any type of database, and the user 22 (FIG. 1) can build the query to extract the required specific value/data.") Examiner Comments: This passage describes selecting data repositories including spreadsheets and SQL databases in an application development platform, teaching integration of such structured data sources in browser-based app development systems.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring and Chen’s teaching into Gidugu’s in order to support diverse data sources in NL-driven applications, allowing flexible data retrieval for software functionality in web-based development environments by generating a reusable web service based at least in part on the user input, and constructing an application workflow based at least in part on the reusable web service.
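For illustration only (this sketch is not part of the Office Action or the application as filed), the Claim 1 flow (obtain a natural language description, generate a data schema with a language model, and insert the schema into a declarative model) can be outlined as follows; every name is hypothetical, and the language model is replaced by a trivial keyword stub so the example is self-contained:

```python
import json

def generate_data_schema(nl_description: str) -> dict:
    """Stand-in for the machine-learned language model (hypothetical).

    A real system would prompt a trained model; this stub applies a
    trivial keyword rule so the example runs without any dependencies.
    """
    schema = {"tables": []}
    if "invoice" in nl_description.lower():
        schema["tables"].append({
            "name": "invoices",
            "columns": [{"name": "id", "type": "INTEGER"},
                        {"name": "amount", "type": "DECIMAL"}],
        })
    return schema

def insert_into_declarative_model(declarative_model: dict, schema: dict) -> dict:
    """Insert the generated schema into the app's declarative model."""
    declarative_model["data_schema"] = schema
    return declarative_model

# The three recited steps: obtain the description, process it, insert the schema.
description = "An app that tracks customer invoices and their amounts."
app_model = {"app_name": "invoice_tracker", "ui": [], "data_schema": None}
app_model = insert_into_declarative_model(app_model, generate_data_schema(description))
print(json.dumps(app_model["data_schema"], indent=2))
```

Under this reading, the "declarative model" is simply a structured document that a downstream code generation system (as recited in Claim 2) would consume.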
Regarding Claim 2, Spring, Chen and Gidugu teach The computer-implemented method of claim 1. Chen further teaches generating, by a code generation system of the computing system, a set of application code for the software application based on the declarative model that includes the data schema. (Para [0005], "generating, using a trained machine-learning model and based on the docstring, one or more computer code samples configured to produce respective candidate results; causing each of the one or more computer code samples to be executed.") Examiner Comments: This passage describes generating and executing code from NL-based models, teaching code generation based on the declarative schema/model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring’s teaching into Chen’s in order to enhance NL processing with code generation capabilities, enabling automated software creation from descriptions for improved development efficiency and accessing a docstring generation model configured to generate docstrings from computer code; receiving one or more computer code samples; generating, using the docstring generation model and based on the received one or more computer code samples, one or more candidate docstrings representing natural language text (Chen [abstract/Summary]). Regarding Claim 3, Spring, Chen and Gidugu teach The computer-implemented method of claim 1. Spring further teaches the software application operates to retrieve data from a data source, and wherein the data source is structured according to the data schema. (Col 21: ln 50-60, "The system will access a database such as an external database to obtain weight information. 
The external database may be accessed over a network such as the Internet and may contain information such as a dictionary or encyclopedia.") Examiner Comments: This passage describes retrieving from structured databases based on processed schemas, teaching data retrieval from schema-structured sources.

Regarding Claim 4, Spring, Chen and Gidugu teach The computer-implemented method of claim 3. Gidugu further teaches the data source comprises a spreadsheet. (Para [0074], "This service could allow the user to select the type of repository, whether it is a database (e.g., Oracle, Microsoft Access database, or other types of databases), flat file, spreadsheet (e.g., Microsoft Excel), etc.") Examiner Comments: This passage explicitly mentions spreadsheets as a selectable data repository type, teaching the spreadsheet limitation in the context of data sources for application development. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring and Chen’s teaching into Gidugu’s in order to support diverse data sources in NL-driven applications, allowing flexible data retrieval for software functionality in web-based development environments by generating a reusable web service based at least in part on the user input, and constructing an application workflow based at least in part on the reusable web service.

Regarding Claim 5, Spring, Chen and Gidugu teach The computer-implemented method of claim 3. Gidugu further teaches the data source comprises a SQL database. (Para [0148], "Save the data into the data records and insert it into the database (can be any type of database such as a SQL server, Oracle database, Access database, spreadsheet).") Examiner Comments: This passage explicitly mentions SQL server as a database type for data insertion, teaching the SQL database limitation in the context of data sources for software applications.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring and Chen’s teaching into Gidugu’s in order to support diverse data sources in NL-driven applications, allowing flexible data retrieval for software functionality in web-based development environments by generating a reusable web service based at least in part on the user input, and constructing an application workflow based at least in part on the reusable web service.

Regarding Claim 6, Spring, Chen and Gidugu teach The computer-implemented method of claim 1. Spring further teaches obtaining, by the computing system, a second natural language description of the software application, wherein the second natural language description specifies one or more requested changes to the software application; processing, by the computing system, the second natural language description with the machine-learned language model to generate, as an output of the machine-learned language model, an updated data schema for the software application; and inserting, by the computing system, the updated data schema generated by the machine-learned language model into the declarative model associated with the software application. (Col 29: ln 5-30, "Alternatively, a device used to implement a system according to the present invention which stores databases may periodically access a remote database maintainer or provider to download updates to any stored databases. The downloads may occur through a personal computer connected to the Internet or wirelessly, through the Internet or directly.") Examiner Comments: This passage describes downloading updates to databases from remote sources, teaching processing additional inputs (second NL) to update schemas/models in the declarative database.

Regarding Claim 7, Spring, Chen and Gidugu teach The computer-implemented method of claim 6.
Spring further teaches processing, by the computing system, the second natural language description with the machine-learned language model comprises: concatenating, by the computing system, the data schema with the second natural language description to generate a concatenated input; and processing, by the computing system, the concatenated input with the machine-learned language model to generate, as the output of the machine-learned language model, the updated data schema for the software application. (Col 25: ln 43-67, "In the dynamic output construction phase, a variation to a Hidden Markov Model (VHMM) algorithm is employed to construct an output sentence using an output pattern as a starting point. The VHMM algorithm starts with the output pattern and constructs a sentence around the incomplete output pattern by accessing a database of complete sentences... determining the best words to be added to the incomplete output pattern based on the words that most often occur next to each other in the database of complete sentences.") Examiner Comments: This passage describes concatenating patterns with additional inputs via VHMM for updates, teaching concatenation for generating updated structures. Regarding Claim 8, Spring, Chen and Gidugu teach The computer-implemented method of claim 1. Chen further teaches the natural language description comprises a textual description contained in a dialog between a user and a chatbot. (Para [0114], "chatbots and virtual assistants (e.g., generating natural language responses for chatbots and virtual assistants, which may help make them more natural and engaging)") Examiner Comments: This passage mentions generating NL for chatbots, teaching dialog-based NL inputs for the description. 
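Stepping back to Claims 6-7, the recited update flow (concatenate the existing data schema with the second natural language description, then reprocess the concatenated input) can be sketched as below. This is an illustrative stub, not Applicant's or any cited reference's implementation, and every name is hypothetical:

```python
import json

def model_stub(concatenated_input: str) -> dict:
    """Stand-in for the machine-learned language model (hypothetical).

    Splits the concatenated input back into schema and request, then
    applies a trivial keyword rule in place of real model inference.
    """
    schema_json, _, request = concatenated_input.partition("\n")
    schema = json.loads(schema_json)
    if "due date" in request.lower():
        schema["tables"][0]["columns"].append({"name": "due_date", "type": "DATE"})
    return schema

def update_schema(current_schema: dict, second_description: str) -> dict:
    # Claim 7: concatenate the data schema with the second NL description
    # to form a single model input, then generate the updated schema.
    concatenated_input = json.dumps(current_schema) + "\n" + second_description
    return model_stub(concatenated_input)

schema = {"tables": [{"name": "invoices",
                      "columns": [{"name": "id", "type": "INTEGER"}]}]}
updated = update_schema(schema, "Also track a due date for each invoice.")
```

Feeding the prior schema back in as context is what lets the model express the change as an edit to the existing structure rather than a from-scratch regeneration.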
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring’s teaching into Chen’s in order to enhance NL processing with code generation capabilities, enabling automated software creation from descriptions for improved development efficiency and accessing a docstring generation model configured to generate docstrings from computer code; receiving one or more computer code samples; generating, using the docstring generation model and based on the received one or more computer code samples, one or more candidate docstrings representing natural language text (Chen [abstract/Summary]).

Regarding Claims 9-15: Claims 9-15 are platform claims corresponding to method claims 1-7 above, respectively, and are therefore rejected for the same reasons set forth in the rejections of claims 1-7.
Regarding Claim 16: Claim 16 is a platform claim corresponding to method claim 8 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 8.

Claim(s) 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Spring (US 8,126,832 B2) in view of Chen (US 2024/0020116 A1) and Gidugu (US 10,296,848 B1), further in view of Harkous (US 11,847,424 B1).

Regarding Claim 17, Spring (US 8,126,832 B2) teaches One or more non-transitory computer-readable media that collectively store instructions that, when executed by a computing system, cause the computing system to perform operations, the operations comprising: obtaining, by the computing system, a training data pair comprising: a natural language description of a software application and [a ground truth data schema] for the software application; (Col 3: ln 21-43, "The input is received in a first format. The input may then be translated into a second format such as, for example, an integer format. The input may be broken into tokens which are each assigned a corresponding integer. The input is stored in an input array, which is a one-dimensional linear array that contains one token for each integer or item of input."; Col 18: ln 56-63, “A system according to the present invention may operate with natural speech of a user.”) Examiner Comments: This passage describes receiving and translating natural language input into structured tokens and arrays, teaching obtaining a natural language description for software configuration/development as the system processes user inputs to build application structures. processing, by the computing system, the natural language description with a language model to generate, as an output of the language model, a predicted data schema for the software application; (Col 3: ln 53-67, "The input array is expanded into a multi-dimensional array.
The first step in constructing the multi-dimensional array is determining the root token, if available, for every token in the input array... The input array is then expanded into a multi-dimensional array including all tokens that are related to the tokens in the input array or alternatively all tokens that may be derived from the root token... The multi-dimensional array may be constructed having a number of linear arrays with the number of linear arrays determined by the number of tokens in the root token array and the number of tokens corresponding to each root token.") Examiner Comments: This passage describes using language processing models (including ML-like HMMs for matching) to expand NL inputs into multi-dimensional data structures/schemas, teaching generation of a data schema as output from the processed natural language description.

Spring did not specifically teach a machine-learned language model for software application code generation aspects, nor the following limitations: obtaining, by the computing system, a training data pair comprising: a natural language description of a software application and a ground truth data schema for the software application; evaluating, by the computing system, a loss function that generates a loss value based on a comparison of the ground truth data schema with the predicted data schema; and modifying, by the computing system, one or more parameter values of the language model based on the loss function.
However, Chen (US 2024/0020116 A1) teaches the machine-learned language model specifically for software application (Para [0005], "a method for generating computer code based on natural language input may include receiving a docstring representing natural language text specifying a digital programming result; generating, using a trained machine-learning model and based on the docstring, one or more computer code samples configured to produce respective candidate results; causing each of the one or more computer code samples to be executed; identifying, based on the executing, at least one of the computer code samples configured to produce a particular candidate result associated with the digital programming result.") Examiner Comments: This passage describes using a trained ML model to generate code from NL input, teaching the language model for automated software development from natural language descriptions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring’s teaching into Chen’s in order to enhance NL processing with code generation capabilities, enabling automated software creation from descriptions for improved development efficiency and accessing a docstring generation model configured to generate docstrings from computer code; receiving one or more computer code samples; generating, using the docstring generation model and based on the received one or more computer code samples, one or more candidate docstrings representing natural language text (Chen [abstract/Summary]). 
Spring and Chen did not specifically teach data source details like spreadsheet or SQL, nor the following limitations: obtaining, by the computing system, a training data pair comprising: a natural language description of a software application and a ground truth data schema for the software application; evaluating, by the computing system, a loss function that generates a loss value based on a comparison of the ground truth data schema with the predicted data schema; and modifying, by the computing system, one or more parameter values of the language model based on the loss function.

However, Gidugu (US 10,296,848 B1) teaches data source details (Para [0074], "This service could allow the user to select the type of repository, whether it is a database (e.g., Oracle, Microsoft Access database, or other types of databases), flat file, spreadsheet (e.g., Microsoft Excel), etc. The repository can be any type of database, and the user 22 (FIG. 1) can build the query to extract the required specific value/data.") Examiner Comments: This passage describes selecting data repositories including spreadsheets and SQL databases in an application development platform, teaching integration of such structured data sources in browser-based app development systems.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring and Chen’s teaching into Gidugu’s in order to support diverse data sources in NL-driven applications, allowing flexible data retrieval for software functionality in web-based development environments by generating a reusable web service based at least in part on the user input, and constructing an application workflow based at least in part on the reusable web service.
Spring, Chen and Gidugu did not specifically teach obtaining, by the computing system, a training data pair comprising: a natural language description of a software application and a ground truth data schema for the software application evaluating, by the computing system, a loss function that generates a loss value based on a comparison of the ground truth data schema with the predicted data schema; and modifying, by the computing system, one or more parameter values of the language model based on the loss function. However, Harkous (US 11,847,424 B1) teaches obtaining, by the computing system, a training data pair comprising: a natural language description of a software application and a ground truth data schema for the software application; (Col 2, Lines 19-44, "Such methods can be trained with (data, text) tuples (sometimes referred to herein as “data-text tuples”).") Examiner Comments: This passage describes obtaining training pairs consisting of structured data (ground truth schema) and corresponding natural language text (description), teaching the training data pair limitation because the tuples pair NL text with structured data schemas for model training. processing, by the computing system, the natural language description with a language model to generate, as an output of the language model, a predicted data schema for the software application; (Col 5, Lines 5-43, "The pre-trained language model is updated in order to be able to generate semantic text representations of input structured data (e.g., triples).") Examiner Comments: This passage describes processing input (which can be reversed as NL to structured in combination) with the language model to generate predicted outputs, teaching the processing limitation because the model generates structured representations from inputs, analogous to predicting a data schema from NL descriptions. 
evaluating, by the computing system, a loss function that generates a loss value based on a comparison of the ground truth data schema with the predicted data schema; (Col 9, Lines 16-40, "The training objective may be a language-modeling objective where the aim is to find the set of weights θ that minimizes the following cross-entropy loss: ℓ = −∑_{i=|D|+2}^{|S|} log P_θ(s_i | s_0, …, s_{i−1}) (1)") Examiner Comments: This passage describes evaluating a cross-entropy loss function comparing predicted outputs to ground truth, teaching the loss evaluation limitation because the loss value is generated based on discrepancies between predicted and ground truth representations.

modifying, by the computing system, one or more parameter values of the language model based on the loss function. (Col 5, Lines 43-67, "Parameters of the data-to-text language model 130 may be updated using a dataset comprising (data, text) tuples (D_i, T_i).") Examiner Comments: This passage describes updating model parameters using the training dataset and loss minimization, teaching the modification limitation because parameters are adjusted to minimize the loss function during training.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring, Chen and Gidugu’s teaching into Harkous’s in order to incorporate supervised training techniques with loss functions and parameter updates for language models, enabling accurate prediction of structured outputs from natural language inputs in automated software development platforms by having a first machine-learned model generate first output data comprising a first natural language representation of the first data and a second machine-learning model determine second data indicating that the first natural language representation is a semantically accurate representation of the first data (Harkous [Summary]).
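The training steps recited in Claim 17 (evaluate a loss comparing the ground-truth and predicted schemas, then update model parameters) can be illustrated with a deliberately tiny model. This is a pedagogical sketch, not Harkous's system: a single softmax over a hypothetical schema-token vocabulary, trained with the same cross-entropy objective the quoted passage describes (here summed over all target tokens rather than from i = |D| + 2):

```python
import math

VOCAB = ["{", "}", "table", "column", "invoices", "id"]
theta = [0.0] * len(VOCAB)  # model parameters (logits), initially uniform

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def cross_entropy(logits, target_ids):
    """Loss value from comparing predictions to the ground truth:
    the negative log-likelihood that the quoted objective minimizes."""
    p = softmax(logits)
    return -sum(math.log(p[t]) for t in target_ids)

def train_step(theta, target_ids, lr=0.1):
    """One gradient-descent update of the parameter values based on the loss.
    For this model, d(loss)/d(theta_j) = N * p_j - count_j."""
    p = softmax(theta)
    grad = [len(target_ids) * pj for pj in p]
    for t in target_ids:
        grad[t] -= 1.0
    return [w - lr * g for w, g in zip(theta, grad)]

# Ground-truth "schema" serialized as tokens (a hypothetical example).
ground_truth = [VOCAB.index(t) for t in
                ["{", "table", "invoices", "column", "id", "column", "id", "}"]]

loss_before = cross_entropy(theta, ground_truth)
for _ in range(50):
    theta = train_step(theta, ground_truth)
loss_after = cross_entropy(theta, ground_truth)
```

With real data the same loop runs over many (description, schema) pairs and the gradient flows through a full language model, but the claim's evaluate-then-modify structure is unchanged.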
Regarding Claim 18, Spring, Chen, Gidugu and Harkous teach The one or more non-transitory computer-readable media of claim 17. Harkous further teaches the natural language description of the software application was generated by a human annotator. (Col 2, Lines 19-43, "training data for pipeline-based approaches may require training data with semantic alignments between sections of the training text and/or portions of the meaning representation (e.g., the data being transformed into text).") Examiner Comments: This passage describes training data with semantic alignments, which imply human annotation for creating paired NL descriptions and structured representations, teaching the human annotator limitation because alignments require human input to associate NL text with ground truth schemas.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring, Chen and Gidugu’s teaching into Harkous’s in order to incorporate supervised training techniques with loss functions and parameter updates for language models, enabling accurate prediction of structured outputs from natural language inputs in automated software development platforms by having a first machine-learned model generate first output data comprising a first natural language representation of the first data and a second machine-learning model determine second data indicating that the first natural language representation is a semantically accurate representation of the first data (Harkous [Summary]).

Regarding Claim 19, Spring, Chen, Gidugu and Harkous teach The one or more non-transitory computer-readable media of claim 17. Harkous further teaches the language model comprises a pre-trained language model and said modifying comprises fine-tuning the language model.
(Col 2, Lines 44-65, "In various embodiments, the data-to-text systems use a pre-trained language model with fine-grained state embeddings of the input data representations to achieve generalization across domains, as described in further detail below.") Examiner Comments: This passage explicitly describes using a pre-trained language model and fine-tuning it with state embeddings, teaching the pre-trained and fine-tuning limitation because modifying parameters during fine-tuning adapts the model to specific tasks like schema generation.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring, Chen and Gidugu’s teaching into Harkous’s in order to incorporate supervised training techniques with loss functions and parameter updates for language models, enabling accurate prediction of structured outputs from natural language inputs in automated software development platforms by having a first machine-learned model generate first output data comprising a first natural language representation of the first data and a second machine-learning model determine second data indicating that the first natural language representation is a semantically accurate representation of the first data (Harkous [Summary]).

Regarding Claim 20, Spring, Chen, Gidugu and Harkous teach The one or more non-transitory computer-readable media of claim 19. Harkous further teaches fine-tuning the language model comprises training the language model on a custom dataset that provides example data schemas responsive to example natural language descriptions.
(Col 6, Lines 16-39, "The system may perform synthetic data transformations 107 on instances of the training data tuples (D_i, T_i) in order to generate classifier training data 108.") Examiner Comments: This passage describes creating custom datasets through synthetic transformations on training tuples, providing example structured data (schemas) responsive to NL examples, teaching the custom dataset limitation because synthetic data creates tailored examples for fine-tuning.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Spring, Chen and Gidugu’s teaching into Harkous’s in order to incorporate supervised training techniques with loss functions and parameter updates for language models, enabling accurate prediction of structured outputs from natural language inputs in automated software development platforms by having a first machine-learned model generate first output data comprising a first natural language representation of the first data and a second machine-learning model determine second data indicating that the first natural language representation is a semantically accurate representation of the first data (Harkous [Summary]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR SOLTANZADEH whose telephone number is (571)272-3451. The examiner can normally be reached M-F, 9am - 5pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Mui, can be reached at (571) 272-3708.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AMIR SOLTANZADEH/Examiner, Art Unit 2191 /WEI Y MUI/Supervisory Patent Examiner, Art Unit 2191

Prosecution Timeline

Apr 18, 2024
Application Filed
Feb 13, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602225
IDENTIFYING THE TRANSLATABILITY OF HARD-CODED STRINGS IN SOURCE CODE VIA POS TAGGING
2y 5m to grant Granted Apr 14, 2026
Patent 12591414
CENTRALIZED INTAKE AND CAPACITY ASSESSMENT PLATFORM FOR PROJECT PROCESSES, SUCH AS WITH PRODUCT DEVELOPMENT IN TELECOMMUNICATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12561134
Function Code Extraction
2y 5m to grant Granted Feb 24, 2026
Patent 12561136
METHOD, APPARATUS, AND SYSTEM FOR OUTPUTTING SOFTWARE DEVELOPMENT INSIGHT COMPONENTS IN A MULTI-RESOURCE SOFTWARE DEVELOPMENT ENVIRONMENT
2y 5m to grant Granted Feb 24, 2026
Patent 12561118
SYSTEM AND METHOD FOR AUTOMATED TECHNOLOGY MIGRATION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
98%
With Interview (+16.9%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 421 resolved cases by this examiner. Grant probability derived from career allow rate.
