Prosecution Insights
Last updated: April 19, 2026
Application No. 18/594,218

DERIVING HETEROGENOUS CONTEXT FOR EVALUATING GENERATED CODE

Non-Final OA: §101, §102
Filed: Mar 04, 2024
Examiner: SHAH, PARAS D
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 3y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% — above average (474 granted / 645 resolved; +11.5% vs TC avg)
Interview Lift: +31.1% on resolved cases with interview — strong
Avg Prosecution: 3y 9m typical timeline; 24 currently pending
Total Applications: 669 across all art units

Statute-Specific Performance

§101: 20.3% (-19.7% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 645 resolved cases

Office Action

§101 §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/04/2024 was filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claims 1, 9, and 16 relate to the statutory categories of method/process and machine/apparatus.

Independent claim 1 recites “…receiving a first code-generation prompt submitted to a first machine learning model, the first code-generation prompt requesting generation of the first code in a first programming language; receiving first code generated by the first machine learning model in response to the first machine learning model receiving the first code-generation prompt; comparing the first code to a first knowledge base built for the first programming language to find corresponding code and a first natural language explanation associated with the corresponding code; performing, via a second machine learning model, semantic comparison of the first natural language explanation to the first code-generation prompt to generate a first semantic correctness score; and presenting the first semantic correctness score”.
Independent claim 9 recites “…receiving a first code-generation prompt submitted to a first machine learning model, the first code-generation prompt requesting generation of the first code in a first programming language; receiving first code generated by the first machine learning model in response to first machine learning model receiving the first code-generation prompt; comparing the first code to a first knowledge base built for a second programming language to find corresponding code and a first natural language explanation associated with the corresponding code, the second programming language being different than the first programming language; performing, via a second machine learning model, semantic comparison of the first natural language explanation to the first code-generation prompt to generate a first semantic correctness score; and presenting the first semantic correctness score”.

Independent claim 16 recites “…receiving a first code-generation prompt submitted to a first machine learning model, the first code-generation prompt requesting generation of the first code in a first programming language; receiving first code generated by the first machine learning model in response to the first machine learning model receiving the first code-generation prompt; inputting the first code into a second machine learning model so that the second machine learning model generates a first set of natural language explanations that describe the first code; inputting the first set of natural language explanations into a third machine learning model so that the third machine learning model generates a first text summarization of the first set of natural language explanations; performing, via a fourth machine learning model, semantic comparison of the first text summarization to the first code-generation prompt to generate a first semantic correctness score; and presenting the first semantic correctness score”.
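For readers tracking what the claims actually cover, the two-stage workflow recited in claim 1 — look up corresponding code and its explanation in a knowledge base, then semantically compare that explanation to the original prompt — can be sketched as follows. This is purely an illustrative assumption: the toy embedding, the Jaccard-overlap "semantic" score, and the one-entry knowledge base are stand-ins, not anything disclosed in the application.

```python
# Hypothetical sketch of the workflow recited in independent claim 1.
# The knowledge base, embedding, and scoring below are illustrative
# assumptions, not the application's actual implementation.

def embed(text: str) -> set[str]:
    """Toy 'embedding': the set of lowercase tokens in the text."""
    return set(text.lower().split())

def semantic_score(explanation: str, prompt: str) -> float:
    """Toy semantic comparison: Jaccard overlap of token sets."""
    a, b = embed(explanation), embed(prompt)
    return len(a & b) / len(a | b) if a | b else 0.0

# Knowledge base built for the first programming language:
# code snippets mapped to natural language explanations.
knowledge_base = {
    "for i in range(10): print(i)": "print the numbers zero through nine",
}

def evaluate_generated_code(prompt: str, generated_code: str) -> float:
    # Step 1: find corresponding code (here: exact match) and its explanation.
    explanation = knowledge_base.get(generated_code)
    if explanation is None:
        return 0.0
    # Step 2: semantically compare the explanation to the original prompt.
    return semantic_score(explanation, prompt)

score = evaluate_generated_code(
    "print the numbers zero through nine",
    "for i in range(10): print(i)",
)
```

Note that the correctness judgment never inspects the code directly; it compares natural language to natural language, which is the hook the examiner uses for the mental-process characterization below.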
The limitations of claim 1 of “receiving…”, “receiving…”, “comparing…”, “performing…”, and “presenting…”, as drafted, cover mental activity. More specifically, for claim 1, a human, after receiving computer code written in a particular programming language, compares the code to a table/list associated with that programming language. The human compares the code to the listed language to determine how closely the presented code corresponds to the actual programming language.

The limitations of claim 9 of “receiving…”, “receiving…”, “comparing…”, “performing…”, and “presenting…”, as drafted, cover mental activity. More specifically, for claim 9, a human, after receiving computer code written in a particular programming language, compares the code to a table/list associated with that programming language. If it is determined that the code does not match the programming language in the first list/table, the human then compares the code to a second table/list. The human then compares the code to the second listed language to determine how closely the presented code corresponds to the actual programming language.

The limitations of claim 16 of “receiving…”, “receiving…”, “inputting…”, “inputting…”, “performing…”, and “presenting…”, as drafted, cover mental activity. More specifically, for claim 16, a human receives computer code written in a particular programming language. The code is then reviewed to determine what the code is actually trying to accomplish. The human compares the code to the listed language to determine how closely the presented code corresponds to the actual programming language.

This judicial exception is not integrated into a practical application. In particular, claims 9 and 16 recite the additional elements of “computer” and “processor”, which are recited generally in the specification. For example, paragraph [0046] of the as-filed specification describes using a general purpose computing environment.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

Also, the additional elements of “machine learning model” in claims 1, 9, and 16 are recited generally in the specification. For example, paragraph [0015] of the as-filed specification describes machine learning models which are designed to generate text like a human and which can be refined using human feedback (paragraph [0017]). Specific structure as to how this can only be accomplished without human intervention has not been incorporated into the claims. Without that structure, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer as a general computer is noted. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

With respect to claim 2, the claim relates to receiving computer code and comparing the code to a particular programming language and, if the code does not match, comparing the code to a different programming language. The claim relates to a mental activity of determining what programming language is being used. No additional limitations are present.

With respect to claims 3-5, 10, and 11, the claims relate to how close the code is to the second programming language. The claims relate to a mental activity of comparing the code to the programming language.
No additional limitations are present.

With respect to claims 6 and 13, the claims relate to determining what the computer code is actually trying to accomplish. The claims relate to a mental activity of comparing the code to the explanation of what is to be done and determining how correct the code is. No additional limitations are present.

With respect to claims 7, 8, 14, 15, 19, and 20, the claims relate to verifying how correct the code is and updating the table/list with the correct code. No additional limitations are present.

With respect to claims 12, 17, and 18, the claims relate to comparing the computer code to a particular programming language and, if the code does not match, comparing the code to a different programming language. The claims relate to a mental activity of determining what programming language is being used. No additional limitations are present.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gandhi et al. (US 2024/0378399).
Regarding Claim 1, Gandhi et al discloses a computer-implemented method comprising: receiving a first code-generation prompt submitted to a first machine learning model, the first code-generation prompt requesting generation of the first code in a first programming language (The prompt construction unit 220 constructs a prompt to the LLM 222, which is a generative model which generates DSL program code as an output that implements the user intent expressed in the natural language query) (page 5, paragraph [0034]); receiving first code generated by the first machine learning model in response to the first machine learning model receiving the first code-generation prompt (The LLM 222 receives the prompt from the prompt construction unit 220, analyzes the prompt, and outputs DSL program code to implement the user intent expressed in the natural language query) (page 5, paragraph [0036]); comparing the first code to a first knowledge base built for the first programming language to find corresponding code and a first natural language explanation associated with the corresponding code (The DSL selection pipeline 226 analyzes the natural language query and the document context and selects sample DSL code examples from the DSL sample datastore 228. 
The sample DSL code examples are included in the prompt provided to the LLM 222 to help the generative model produce a valid DSL program) (pages 4 and 5, paragraph [0033]); performing, via a second machine learning model (The language check unit 442 performs another check on the content using a second model configured to analyze the words and/or phrase used in textual content to identify potentially offensive language) (page 7, paragraph [0049]), semantic comparison of the first natural language explanation to the first code-generation prompt to generate a first semantic correctness score (The semantic DSL search unit 508 calculates a vector similarity score that reflects a similarity between the of the natural language query q input by the user and each E(b.sub.i). The semantic DSL search unit 508 calculates the cosine similarity score s.sub.i between E(q) and each E(b.sub.i) by taking the inner product s.sub.i=E(b.sub.i).sup.TE(q) before proceeding to the dynamic selection of the DSL program samples to be output by the DSL sample selection pipeline 226 to provide to the prompt construction unit 220) (page 9, paragraph [0055]); and presenting the first semantic correctness score (the sample program code having the highest relevance score comes at the end of the collection and will come at the end of the sample DSL program code included in the prompt to the LLM 222) (page 9, paragraph [0061]). 
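The Gandhi passage mapped to the "semantic comparison" limitation computes a cosine similarity score s.sub.i as the inner product s.sub.i=E(b.sub.i).sup.TE(q). With unit-normalized embeddings, cosine similarity reduces to exactly that inner product, which the following sketch illustrates. The vectors are made-up values for illustration only.

```python
# Sketch of the cited similarity computation: with unit-normalized
# embeddings, the cosine similarity s_i = E(b_i)^T E(q) is an inner
# product. All vector values below are illustrative assumptions.
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def inner(u, v):
    """Inner (dot) product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

E_q = normalize([1.0, 2.0, 2.0])       # query embedding E(q)
samples = {                            # sample embeddings E(b_i)
    "b1": normalize([1.0, 2.0, 2.0]),  # same direction as the query
    "b2": normalize([2.0, -1.0, 0.0]), # orthogonal to the query
}
scores = {name: inner(E_b, E_q) for name, E_b in samples.items()}
```

A parallel sample scores 1.0 and an orthogonal one scores 0.0, matching how Gandhi ranks DSL samples by relevance to the query before prompt construction.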
Regarding Claim 2, Gandhi et al discloses the computer-implemented method, further comprising: receiving a second code-generation prompt submitted to the first machine learning model the second code-generation prompt requesting generation of second code in the first programming language (The prompt construction unit 220 constructs a prompt to the LLM 222, which is a generative model which generates DSL program code as an output that implements the user intent expressed in the natural language query) (page 5, paragraph [0034]); receiving the second code generated by the first machine learning model in response to the first machine learning model receiving the second code-generation prompt (The LLM 222 receives the prompt from the prompt construction unit 220, analyzes the prompt, and outputs DSL program code to implement the user intent expressed in the natural language query) (page 5, paragraph [0036]); comparing the second code to the first knowledge base (The DSL selection pipeline 226 analyzes the natural language query and the document context and selects sample DSL code examples from the DSL sample datastore 228. 
The sample DSL code examples are included in the prompt provided to the LLM 222 to help the generative model produce a valid DSL program) (pages 4 and 5, paragraph [0033]); and in response to the comparison of the second code to the first knowledge base not finding code in the first knowledge base that corresponds to the second code, comparing the second code to a second knowledge base built for a second programming language to find corresponding code and a natural language explanation associated with the corresponding code, the second programming language being different from the first programming language (Otherwise, if the DSL selection pipeline 226 is unable to identify an exact match for the DSL required to implement the intent represented in the natural language query, the DSL selection pipeline 226 provides a set of sample DSL samples to the prompt construction unit 220 to generate a prompt to the LLM 222 to generate a new DSL program. The DSL selection pipeline 226 provides a set of sample DSL programs to the LLM 222 as part of the prompt to generate the new DSL) (pages 4 and 5, paragraph [0033]). 
Regarding Claim 3, Gandhi et al discloses the computer-implemented method, wherein the comparison of the second code to the second knowledge base comprises: generating a first embedding from the second code (The search engine 608 provides the search results to the delegation processing unit 604, and the delegation processing unit 604 provides the search results to the knowledge-based embeddings unit 606) (pages 9 and 10, paragraph [0065]); generating additional embeddings from code from the second knowledge base (The knowledge-based embeddings unit 606 analyzes the search results to generate embeddings for each of the search results) (page 10, paragraph [0066]); comparing the first embedding to the additional embeddings, respectively, to produce similarity rankings to find a best match (The embeddings combination unit 612 ranks the knowledge-based embeddings by comparing the knowledge-based embeddings to the query embeddings) (page 10, paragraph [0068]); and forwarding the natural language explanation that corresponds to the corresponding code that corresponds to the best match of the additional embeddings (The knowledge-based embeddings most relevant to the query embeddings are selected for inclusion in the knowledge-grounded prompt. The embeddings combination unit 612 determines this embedding by determining the cosine similarity scores for the each of the knowledge-based embeddings to the query embeddings. Those knowledge-based embeddings that are closest to the query embeddings in the embedding space are selected for inclusion in the knowledge-grounded prompt) (page 10, paragraph [0068]).

Regarding Claim 4, Gandhi et al discloses the computer-implemented method, wherein a similarity score for the best match is above a pre-determined threshold value (The embeddings combination unit 612 determines this embedding by determining the cosine similarity scores for the each of the knowledge-based embeddings to the query embeddings.
Those knowledge-based embeddings that are closest to the query embeddings in the embedding space are selected for inclusion in the knowledge-grounded prompt) (page 10, paragraph [0068]).

Regarding Claim 5, Gandhi et al discloses the computer-implemented method, further comprising: performing, via the second machine learning model (The language check unit 442 performs another check on the content using a second model configured to analyze the words and/or phrase used in textual content to identify potentially offensive language) (page 7, paragraph [0049]), semantic comparison of the forwarded natural language explanation to the second code-generation prompt to generate a second semantic correctness score (The semantic DSL search unit 508 calculates a vector similarity score that reflects a similarity between the of the natural language query q input by the user and each E(b.sub.i). The semantic DSL search unit 508 calculates the cosine similarity score s.sub.i between E(q) and each E(b.sub.i) by taking the inner product s.sub.i=E(b.sub.i).sup.TE(q) before proceeding to the dynamic selection of the DSL program samples to be output by the DSL sample selection pipeline 226 to provide to the prompt construction unit 220) (page 9, paragraph [0055]); and presenting the second semantic correctness score (the sample program code having the highest relevance score comes at the end of the collection and will come at the end of the sample DSL program code included in the prompt to the LLM 222) (page 9, paragraph [0061]).
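The claims 3-5 pattern — embed the generated code, rank it against knowledge-base embeddings, and accept the best match only if its similarity score clears a pre-determined threshold — can be sketched as below. The knowledge-base entries, vectors, and the 0.8 threshold are illustrative assumptions, not values from the application or from Gandhi.

```python
# Sketch of the claims 3-5 pattern: embed, rank by cosine similarity,
# and gate the best match on a pre-determined threshold. All names,
# vectors, and the threshold are illustrative assumptions.
import math

def cosine(u, v):
    """Cosine similarity of two equal-length, nonzero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_match(code_emb, kb, threshold=0.8):
    """Return (name, score) of the closest entry, or (None, score)
    when the best similarity falls below the threshold."""
    ranked = sorted(kb.items(),
                    key=lambda kv: cosine(code_emb, kv[1]),
                    reverse=True)
    name, emb = ranked[0]
    score = cosine(code_emb, emb)
    return (name, score) if score >= threshold else (None, score)

kb = {
    "sort_list": [0.9, 0.1, 0.0],  # hypothetical knowledge-base embeddings
    "read_file": [0.0, 1.0, 0.0],
}
match, score = best_match([1.0, 0.0, 0.0], kb)
```

Only when the gate passes would the matched entry's natural language explanation be forwarded for the claim 5 semantic comparison.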
Regarding Claim 6, Gandhi et al discloses the computer-implemented method, further comprising: receiving a third code-generation prompt submitted to the first machine learning model, the third code-generation prompt requesting generation of the first code in the first programming language (The prompt construction unit 220 constructs a prompt to the LLM 222, which is a generative model which generates DSL program code as an output that implements the user intent expressed in the natural language query) (page 5, paragraph [0034]); receiving third code generated by the first machine learning model in response to the first machine learning model receiving the third code-generation prompt (The LLM 222 receives the prompt from the prompt construction unit 220, analyzes the prompt, and outputs DSL program code to implement the user intent expressed in the natural language query) (page 5, paragraph [0036]); inputting the third code into a third machine learning model so that the third machine learning model generates a first set of natural language explanations that describe the third code (The DSL selection pipeline 226 analyzes the natural language query and the document context and selects sample DSL code examples from the DSL sample datastore 228. 
The sample DSL code examples are included in the prompt provided to the LLM 222 to help the generative model produce a valid DSL program) (pages 4 and 5, paragraph [0033]); inputting the first set of natural language explanations into a fourth machine learning model so that the third machine learning model generates a first text summarization of the first set of natural language explanations (If the DSL selection pipeline 226 finds an exact match of the DSL needed to implement the user intent represented in the natural language query, the DSL selection pipeline 226 provides the sample DSL to the DSL validation and correction pipeline 224 to identify and correct any errors in the sample DSL) (pages 4 and 5, paragraph [0033]); performing, via the second machine learning model (The language check unit 442 performs another check on the content using a second model configured to analyze the words and/or phrase used in textual content to identify potentially offensive language) (page 7, paragraph [0049]), semantic comparison of the first text summarization to the third code-generation prompt to generate a third semantic correctness score (The semantic DSL search unit 508 calculates a vector similarity score that reflects a similarity between the of the natural language query q input by the user and each E(b.sub.i). The semantic DSL search unit 508 calculates the cosine similarity score s.sub.i between E(q) and each E(b.sub.i) by taking the inner product s.sub.i=E(b.sub.i).sup.TE(q) before proceeding to the dynamic selection of the DSL program samples to be output by the DSL sample selection pipeline 226 to provide to the prompt construction unit 220) (page 9, paragraph [0055]); and presenting the third semantic correctness score (the sample program code having the highest relevance score comes at the end of the collection and will come at the end of the sample DSL program code included in the prompt to the LLM 222) (page 9, paragraph [0061]). 
Regarding Claim 7, Gandhi et al discloses the computer-implemented method, further comprising presenting the generated first code to a subject matter expert for at least one of verification and enhancement, wherein the presenting is performed in response to the first semantic correctness score passing a test with a pre-determined threshold value (A technical benefit of this approach is that the correctness of the DSL program code output by the LLM 222 is improved by ordering the samples in the prompt in this manner. The DSL program code samples are then provided as an input to the prompt construction unit 220) (page 9, paragraph [0061]).

Regarding Claim 8, Gandhi et al discloses the computer-implemented method, further comprising updating the first knowledge base based on the at least one of the verification and the enhancement (The dynamic list check unit 446 provides a dynamic list that can be quickly updated by administrators to add additional prohibited words and/or phrases. The dynamic list may be updated to address problems such as words or phrases becoming offensive that were not previously deemed to be offensive. The words and/or phrases added to the dynamic list may be periodically migrated to the guard list as the guard list is updated) (page 7, paragraph [0049]).

Regarding Claim 9, Gandhi et al discloses a computer program product comprising: a set of one or more computer-readable storage media (Accordingly, the memory 932, 934, the storage unit 936, memory in processors 910, and memory in I/O components 950 are examples of machine-readable media) (pages 13 and 14, paragraph [0096]); and program instructions, collectively stored in the set of one or more computer-readable storage media, the program instructions causing a processor set to perform computer operations (The storage unit 936 and memory 932, 934 store instructions 916 embodying any one or more of the functions described herein.
The memory/storage 930 may also store temporary, intermediate, and/or long-term data for processors 910. The instructions 916 may also reside, completely or partially, within the memory 932, 934, within the storage unit 936, within at least one of the processors 910 (for example, within a command buffer or cache memory), within memory at least one of I/O components 950, or any suitable combination thereof, during execution thereof) (pages 13 and 14, paragraph [0096]) comprising: receiving a first code-generation prompt submitted to a first machine learning model, the first code-generation prompt requesting generation of the first code in a first programming language (The prompt construction unit 220 constructs a prompt to the LLM 222, which is a generative model which generates DSL program code as an output that implements the user intent expressed in the natural language query) (page 5, paragraph [0034]); receiving first code generated by the first machine learning model in response to first machine learning model receiving the first code-generation prompt (The LLM 222 receives the prompt from the prompt construction unit 220, analyzes the prompt, and outputs DSL program code to implement the user intent expressed in the natural language query) (page 5, paragraph [0036]); comparing the first code to a first knowledge base built for a second programming language to find corresponding code and a first natural language explanation associated with the corresponding code (The DSL selection pipeline 226 analyzes the natural language query and the document context and selects sample DSL code examples from the DSL sample datastore 228. 
The sample DSL code examples are included in the prompt provided to the LLM 222 to help the generative model produce a valid DSL program) (pages 4 and 5, paragraph [0033]), the second programming language being different than the first programming language (Otherwise, if the DSL selection pipeline 226 is unable to identify an exact match for the DSL required to implement the intent represented in the natural language query, the DSL selection pipeline 226 provides a set of sample DSL samples to the prompt construction unit 220 to generate a prompt to the LLM 222 to generate a new DSL program. The DSL selection pipeline 226 provides a set of sample DSL programs to the LLM 222 as part of the prompt to generate the new DSL) (pages 4 and 5, paragraph [0033]); performing, via a second machine learning model (The language check unit 442 performs another check on the content using a second model configured to analyze the words and/or phrase used in textual content to identify potentially offensive language) (page 7, paragraph [0049]), semantic comparison of the first natural language explanation to the first code-generation prompt to generate a first semantic correctness score (The semantic DSL search unit 508 calculates a vector similarity score that reflects a similarity between the of the natural language query q input by the user and each E(b.sub.i). 
The semantic DSL search unit 508 calculates the cosine similarity score s.sub.i between E(q) and each E(b.sub.i) by taking the inner product s.sub.i=E(b.sub.i).sup.TE(q) before proceeding to the dynamic selection of the DSL program samples to be output by the DSL sample selection pipeline 226 to provide to the prompt construction unit 220) (page 9, paragraph [0055]); and presenting the first semantic correctness score (the sample program code having the highest relevance score comes at the end of the collection and will come at the end of the sample DSL program code included in the prompt to the LLM 222) (page 9, paragraph [0061]).

Claim 10 is rejected for the same reason as claim 3. Claim 11 is rejected for the same reason as claim 4.

Regarding Claim 12, Gandhi et al discloses the computer program product, wherein the computer operations further comprise comparing the first code to a second knowledge base built for the first programming language to find corresponding code and a first natural language explanation associated with the corresponding code (If the DSL selection pipeline 226 finds an exact match of the DSL needed to implement the user intent represented in the natural language query, the DSL selection pipeline 226 provides the sample DSL to the DSL validation and correction pipeline 224 to identify and correct any errors in the sample DSL) (pages 4 and 5, paragraph [0033]); wherein the comparing of the first code to the first knowledge base built for the second programming language occurs in response to the comparing of the first code to the second knowledge base not finding any code that corresponds to the first code (Otherwise, if the DSL selection pipeline 226 is unable to identify an exact match for the DSL required to implement the intent represented in the natural language query, the DSL selection pipeline 226 provides a set of sample DSL samples to the prompt construction unit 220 to generate a prompt to the LLM 222 to generate a new DSL program.
The DSL selection pipeline 226 provides a set of sample DSL programs to the LLM 222 as part of the prompt to generate the new DSL) (pages 4 and 5, paragraph [0033]).

Regarding Claim 13, Gandhi et al discloses the computer program product, wherein the computer operations further comprise: receiving a second code-generation prompt submitted to the first machine learning model, the second code-generation prompt requesting generation of the first code in the first programming language (The prompt construction unit 220 constructs a prompt to the LLM 222, which is a generative model which generates DSL program code as an output that implements the user intent expressed in the natural language query) (page 5, paragraph [0034]); receiving second code generated by the first machine learning model in response to the first machine learning model receiving the second code-generation prompt (The LLM 222 receives the prompt from the prompt construction unit 220, analyzes the prompt, and outputs DSL program code to implement the user intent expressed in the natural language query) (page 5, paragraph [0036]); inputting the second code into a third machine learning model so that the third machine learning model generates a first set of natural language explanations that describe the third code (The DSL selection pipeline 226 analyzes the natural language query and the document context and selects sample DSL code examples from the DSL sample datastore 228.
The sample DSL code examples are included in the prompt provided to the LLM 222 to help the generative model produce a valid DSL program) (pages 4 and 5, paragraph [0033]); inputting the first set of natural language explanations into a fourth machine learning model so that the fourth machine learning model generates a first text summarization of the first set of natural language explanations (If the DSL selection pipeline 226 finds an exact match of the DSL needed to implement the user intent represented in the natural language query, the DSL selection pipeline 226 provides the sample DSL to the DSL validation and correction pipeline 224 to identify and correct any errors in the sample DSL) (pages 4 and 5, paragraph [0033]); performing, via the second machine learning model (The language check unit 442 performs another check on the content using a second model configured to analyze the words and/or phrase used in textual content to identify potentially offensive language) (page 7, paragraph [0049]), semantic comparison of the first text summarization to the second code-generation prompt to generate a second semantic correctness score (The semantic DSL search unit 508 calculates a vector similarity score that reflects a similarity between the of the natural language query q input by the user and each E(b.sub.i). The semantic DSL search unit 508 calculates the cosine similarity score s.sub.i between E(q) and each E(b.sub.i) by taking the inner product s.sub.i=E(b.sub.i).sup.TE(q) before proceeding to the dynamic selection of the DSL program samples to be output by the DSL sample selection pipeline 226 to provide to the prompt construction unit 220) (page 9, paragraph [0055]); and presenting the second semantic correctness score (the sample program code having the highest relevance score comes at the end of the collection and will come at the end of the sample DSL program code included in the prompt to the LLM 222) (page 9, paragraph [0061]). 
Claims 14 and 19 are rejected for the same reason as claim 7. Claims 15 and 20 are rejected for the same reason as claim 8. Regarding Claim 16, Gandhi et al. discloses a computer system comprising: a processor set (The machine 900 may include processors 910) (page 13, paragraph [0095]); a set of one or more computer-readable storage media (Accordingly, the memory 932, 934, the storage unit 936, memory in processors 910, and memory in I/O components 950 are examples of machine-readable media) (pages 13 and 14, paragraph [0096]); and program instructions, collectively stored in the set of the one or more computer-readable storage media, wherein execution of the program instructions by the processor set causes performance of computer operations (The storage unit 936 and memory 932, 934 store instructions 916 embodying any one or more of the functions described herein. The memory/storage 930 may also store temporary, intermediate, and/or long-term data for processors 910. The instructions 916 may also reside, completely or partially, within the memory 932, 934, within the storage unit 936, within at least one of the processors 910 (for example, within a command buffer or cache memory), within memory at least one of I/O components 950, or any suitable combination thereof, during execution thereof) (pages 13 and 14, paragraph [0096]) comprising: receiving a first code-generation prompt submitted to a first machine learning model, the first code-generation prompt requesting generation of the first code in a first programming language (The prompt construction unit 220 constructs a prompt to the LLM 222, which is a generative model which generates DSL program code as an output that implements the user intent expressed in the natural language query) (page 5, paragraph [0034]); receiving first code generated by the first machine learning model in response to the first machine learning model receiving the first code-generation prompt (The LLM 222 receives the prompt from the 
prompt construction unit 220, analyzes the prompt, and outputs DSL program code to implement the user intent expressed in the natural language query) (page 5, paragraph [0036]); inputting the first code into a second machine learning model so that the second machine learning model generates a first set of natural language explanations that describe the first code (The DSL selection pipeline 226 analyzes the natural language query and the document context and selects sample DSL code examples from the DSL sample datastore 228. The sample DSL code examples are included in the prompt provided to the LLM 222 to help the generative model produce a valid DSL program) (pages 4 and 5, paragraph [0033]); inputting the first set of natural language explanations into a third machine learning model so that the third machine learning model generates a first text summarization of the first set of natural language explanations (If the DSL selection pipeline 226 finds an exact match of the DSL needed to implement the user intent represented in the natural language query, the DSL selection pipeline 226 provides the sample DSL to the DSL validation and correction pipeline 224 to identify and correct any errors in the sample DSL) (pages 4 and 5, paragraph [0033]); performing, via a fourth machine learning model, semantic comparison of the first text summarization to the first code-generation prompt to generate a first semantic correctness score (The semantic DSL search unit 508 calculates a vector similarity score that reflects a similarity between the embedding E(q) of the natural language query q input by the user and each E(b_i). 
The semantic DSL search unit 508 calculates the cosine similarity score s_i between E(q) and each E(b_i) by taking the inner product s_i = E(b_i)^T E(q) before proceeding to the dynamic selection of the DSL program samples to be output by the DSL sample selection pipeline 226 to provide to the prompt construction unit 220) (page 9, paragraph [0055]); and presenting the first semantic correctness score (the sample program code having the highest relevance score comes at the end of the collection and will come at the end of the sample DSL program code included in the prompt to the LLM 222) (page 9, paragraph [0061]). Claim 17 is rejected for the same reason as claim 12. Regarding Claim 18, Gandhi et al. discloses the computer system, wherein the computer operations further comprise comparing the first code to a first knowledge base built for a second programming language to find corresponding code and a first natural language explanation associated with the corresponding code, the second programming language being different than the first programming language (If the DSL selection pipeline 226 finds an exact match of the DSL needed to implement the user intent represented in the natural language query, the DSL selection pipeline 226 provides the sample DSL to the DSL validation and correction pipeline 224 to identify and correct any errors in the sample DSL) (pages 4 and 5, paragraph [0033]); wherein the inputting of the first code into the second machine learning model occurs in response to the comparing of the first code to the first knowledge base not finding any code that corresponds to the first code (Otherwise, if the DSL selection pipeline 226 is unable to identify an exact match for the DSL required to implement the intent represented in the natural language query, the DSL selection pipeline 226 provides a set of sample DSL samples to the prompt construction unit 220 to generate a prompt to the LLM 222 to generate a new DSL program. 
The DSL selection pipeline 226 provides a set of sample DSL programs to the LLM 222 as part of the prompt to generate the new DSL) (pages 4 and 5, paragraph [0033]). Cited Art The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Clement et al. (US 2022/0308848) discloses semi-supervised translation of source-code programming using neural networks. Gotmare et al. (US 2022/0374595) discloses semantic code search. Chen et al. (US 2024/0020096) discloses generating code using language models trained on computer codes. Chen et al. (US 2024/0020116) discloses generating natural language using language models trained on computer codes. Dinu et al. (US 2024/0354319) discloses runtime alignment of language models in conversational AI. Radhakrishna et al. (US 2025/0060944) discloses automated data extraction pipeline for large language model training. Mandal et al. (US 2025/0245253) discloses searching programming code repositories using latent semantic analysis. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to SATWANT K SINGH whose telephone number is (571)272-7468. The examiner can normally be reached Monday through Friday, 9:00 AM to 6:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D Shah, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SATWANT K SINGH/Primary Examiner, Art Unit 2653

Prosecution Timeline

Mar 04, 2024
Application Filed
Jan 10, 2026
Non-Final Rejection — §101, §102
Apr 01, 2026
Interview Requested
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586591
SOUND SIGNAL DECODING METHOD, SOUND SIGNAL DECODER, PROGRAM, AND RECORDING MEDIUM
2y 5m to grant · Granted Mar 24, 2026
Patent 12579367
TWO-TOWER NEURAL NETWORK FOR CONTENT-AUDIENCE RELATIONSHIP PREDICTION
2y 5m to grant · Granted Mar 17, 2026
Patent 12579360
LEARNING SUPPORT APPARATUS FOR CREATING MULTIPLE-CHOICE QUIZ
2y 5m to grant · Granted Mar 17, 2026
Patent 12562173
WEARABLE DEVICE CONTROL BASED ON VOICE COMMAND OF VERIFIED USER
2y 5m to grant · Granted Feb 24, 2026
Patent 12559026
VEHICLE AND CONTROL METHOD THEREOF
2y 5m to grant · Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+31.1%)
3y 9m
Median Time to Grant
Low
PTA Risk
Based on 645 resolved cases by this examiner. Grant probability derived from career allow rate.
