Prosecution Insights
Last updated: April 19, 2026
Application No. 18/523,552

CODE GENERATION USING MACHINE LEARNING MODELS

Non-Final Office Action: §101, §103, §112

Filed: Nov 29, 2023
Examiner: KANG, INSUN
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: Qualcomm Incorporated
OA Round: 3 (Non-Final)

Grant Probability: 79% (Favorable)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% — above average (515 granted / 655 resolved; +23.6% vs TC avg)
Interview Lift: +40.2% — strong (resolved cases with interview)
Typical timeline: 3y 5m avg prosecution; 23 currently pending
Career history: 678 total applications across all art units

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 35.2% (-4.8% vs TC avg)
§102: 19.8% (-20.2% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 655 resolved cases

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This action is in response to the amendment filed on 2/17/2026. Claims 1-22 and 27-34 are pending in the application.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-22 and 27-34 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The limitation "apply, based on identification and correction of syntax errors in the set of output samples, a static analysis" is not supported by the specification.
The amended limitation implies that the identification and correction of syntax errors are performed first and that a static analysis is then applied based on that identification and correction. However, this concept is not found in the specification, nor is a detailed implementation provided. In paragraphs [0058] and [0073], the static analysis performs detection of syntax errors and a fixing operation to correct syntax and generate corrected syntax samples. This is not the same as identifying and correcting syntax errors in order to apply the static analysis.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-22 and 27-34 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Per claims 1, 12, and 27, the limitation "apply, based on identification and correction of syntax errors in the set of output samples, a static analysis" is not clear in its scope: the static analysis is itself a process of syntax error detection for correction, yet error identification and correction are recited as the basis for the static analysis. It is therefore unclear whether the error detection and correction processes are used to analyze the code again during the static analysis.
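The dispute turns on ordering: under the amended claim language, syntax errors are identified and corrected first, and the static analysis is then applied to the corrected samples. A minimal sketch of that reading, using Python's `ast` module with invented toy repair and analysis rules (all function names and rules here are hypothetical illustrations, not from the specification):

```python
import ast

def identify_and_correct_syntax_errors(sample: str) -> str:
    """Hypothetical repair step: detect a syntax error and apply a
    trivial fix (here, closing unbalanced parentheses)."""
    try:
        ast.parse(sample)
        return sample  # already syntactically valid
    except SyntaxError:
        # Toy correction rule, assumed for illustration only.
        return sample + ")" * (sample.count("(") - sample.count(")"))

def static_analysis(sample: str) -> bool:
    """Hypothetical static check run without executing the code:
    parse the sample and verify no bare `eval` calls appear."""
    tree = ast.parse(sample)
    return not any(
        isinstance(n, ast.Call) and getattr(n.func, "id", "") == "eval"
        for n in ast.walk(tree)
    )

# Claimed ordering: correction first, then static analysis on the result.
samples = ["print((1 + 2)", "x = 3"]
corrected = [identify_and_correct_syntax_errors(s) for s in samples]
passing = [s for s in corrected if static_analysis(s)]
```

Under the examiner's reading of paragraphs [0058] and [0073], detection and fixing would instead happen inside the static analysis itself; the ordering above is what the amended claim language implies.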
Interpretation: applying the static analysis includes identifying syntax errors for correction of the syntax errors.

Per claims 2-11, 13-22 and 28-34, these claims are rejected because they depend from claims 1, 12 and 27, respectively.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-22 and 27-34 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Specifically, claims 1-22 and 27-34 are directed to an abstract idea.

Per claim 1, the claim is directed to an idea of itself: mental processes that can be performed in the human mind, or by a human using pen and paper. The steps of generating second input data, generating a prompt, applying a beam search and static analysis, and outputting the set of samples can be pure mental processes. The claim does not recite how the generating, applying, and outputting steps are performed or implemented in a particular manner. In particular, the beam search is a mere heuristic search algorithm and the static analysis is a mere examination of code without executing the code; therefore, they can be performed by a human (e.g., a developer). For a non-deterministic beam search, this can be conceptualized as a mental heuristic for making decisions under uncertainty with limited random choices, as the data does not need to be large. The additional limitations, the at least one memory and processor, are described at a high level of generality for applying or performing the abstract idea and do not indicate any integration of the abstract idea into a practical application, as the mental steps are merely applied with generic computing component(s).
See MPEP 2106.05(f) and 2106.05(h). It is noted that employing generic computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not add significantly more, similar to how limiting the abstract idea in Flook to the petrochemical and oil-refining industries was insufficient. Therefore, the additional limitations do not integrate the abstract idea into a practical application. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components or insignificant extra-solution activities (e.g., processors, devices, program instructions), then it falls within the "Mental Processes" grouping of abstract ideas (2019 PEG step 2A, Prong 1: Abstract idea grouping? Yes, Mental Process). Viewing the limitations individually and as a combination, the additional elements perform the mental steps using generic computing components as tools without integrating the abstract idea into a practical application. For at least these reasons, claim 1 is not patent eligible.

Per claims 2-11, these claims are directed to the same idea as claim 1, reciting details of the data (claim 2) and mental steps (claims 3, 4, 6, 7, 8, and 9) without adding any other additional element that is significantly more. The additional limitation of retrieving computer code in claim 5 is mere data gathering for the mental steps, while the edge device and cloud device in claim 10 and the processor in claim 11 are described at a high level of generality for applying or performing the abstract idea (per claim 11, for the intended action of generating multiple samples) and do not indicate any integration of the abstract idea into a practical application, as the mental steps are merely applied with generic computing component(s). See MPEP 2106.05(f) and 2106.05(h).
Therefore, the additional limitations do not integrate the abstract idea into a practical application. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components or insignificant extra-solution activities (e.g., processors, devices, program instructions), then it falls within the "Mental Processes" grouping of abstract ideas (2019 PEG step 2A, Prong 1: Abstract idea grouping? Yes, Mental Process). At most, the retrieving step is not found to include anything more than well-understood, routine, conventional activity in the field. In this case, the claimed extra-solution activity of data gathering is acknowledged to be well-understood, routine, conventional (WURC) activity, as recognized by the courts in the examples of MPEP 2106.05(d)(II) (for example, data gathering and retrieving, storing data, transmitting/displaying a result - Symantec, Versata Dev, Content Extraction, Electric Power Group). Insignificant extra-solution activities or mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Viewing the limitations individually and as a combination, the additional elements merely perform data gathering for the mental steps and perform the mental steps using generic computing components as tools without integrating the abstract idea into a practical application. For at least these reasons, claims 2-11 are not patent eligible.

Per claims 12-22, these claims are directed to the same idea as claims 1-11, reciting only the same mental steps without adding any other additional element that is significantly more. Therefore, the claims are rejected for the same reasons as claims 1-11. Per claims 27-34, these claims are directed to the same idea as claims 1-11, reciting details of data and the mental steps without adding any other additional element that is significantly more.
Therefore, the claims are rejected for the same reasons as claims 1-11.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5-13, 16-20, 22, 27, 28, 30, and 32-34 are rejected under 35 U.S.C. 103 as being unpatentable over Duan et al. (US 20230359441, hereafter Duan) in view of Luzhnica et al. (US 11516158, hereafter Luzhnica) and Logozzo et al. (US 20130339929, hereafter Logozzo).

Claim 1.
An apparatus to provide one or more syntax correct samples, comprising: at least one memory; and at least one processor coupled to the at least one memory and configured to (Duan, see at least [0098], processors… memory devices; [0113]):

generate, based on input data, second input data for a machine learning model; generate, based on the second input data, a prompt (Duan, see at least [0038] The code completion engine 214 receives the context of the partially-formed source code snippet 212… The code completion engine 214 transforms the context into a sequence of tokens that is input into the encoder 114 and the BoW model 116; [0078] The code completion engine 124 receives the context of the partially-formed source code snippet 212 … The context 212 includes the partially-formed source code snippet and a number of preceding tokens; [0092] The context 212 and the retrieved source code segment 224 are concatenated to form an input sequence that is applied to the neural decoder transformer model (block 706); [00001] where the input consists of queries Q and keys K of dimension d.sub.k, and values V of dimension d.sub.V. Q is a matrix that contains the query or vector representation of one token in a sequence; Note that the partially-formed source code snippet along with the preceding code and the retrieved code segment correspond to the input and second input data, respectively, and the input sequence (queries) corresponds to the prompt for the model);

apply a beam search with sampling on the prompt to generate a set of output samples (Duan, see at least [0041] The retrieved source code segment context 224 is combined with the context and input into the beam search engine 228.
The beam search engine 228 uses the decoder 226 to predict candidates 210 to complete the partially-formed source code snippet; [0093] A beam search iteratively generates tokens/subtokens by invoking the neural decoder transformer model 226; [0062] The training dataset generator 134 generates a self-supervised training dataset from various source code files 132 from a source code repository 130. The training dataset includes numerous training samples; Note that the candidate sequences are output samples).

Duan does not explicitly teach that the beam search is nondeterministic. Luzhnica teaches a non-deterministic beam search (Luzhnica, see at least fig. 1 and associated texts, a sampling engine can use a random sampling technique, or a modification of such techniques, e.g., Top K or Top P sampling (also known as nucleus sampling) …. a sampling engine can employ stochastic beam search, which reportedly exhibits some of the best properties of both random sampling and beam search … wherein the sampling engine employs one or more methods comprising random sampling, beam search, stochastic beam search, typical sampling, or a combination thereof in the selection of semantic elements from semantic element options in a distribution (aspect 102); Note that random sampling such as nucleus sampling, Top K or Top P sampling employing stochastic beam search is non-deterministic because it introduces controlled randomness into the selection process, for exploring diverse solutions, unlike traditional deterministic beam search, which follows fixed rules).
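The stochastic beam search described by Luzhnica can be sketched as follows. This is a toy illustration under stated assumptions, not the reference's implementation: the vocabulary, the fixed next-token distribution, and all function names are invented for the example. The key contrast with deterministic beam search is that successors are drawn by nucleus (top-p) sampling rather than enumerated by rank.

```python
import math
import random

# Toy vocabulary and next-token model with assumed fixed probabilities;
# a real model would condition on the prefix.
VOCAB = ["a", "b", "<eos>"]

def next_token_probs(prefix):
    return {"a": 0.5, "b": 0.3, "<eos>": 0.2}

def top_p_sample(probs, p=0.9, rng=random):
    """Nucleus sampling: keep the smallest set of highest-probability
    tokens whose cumulative mass reaches p, then sample within it."""
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    nucleus, total = [], 0.0
    for tok, pr in items:
        nucleus.append((tok, pr))
        total += pr
        if total >= p:
            break
    r = rng.random() * total
    for tok, pr in nucleus:
        r -= pr
        if r <= 0:
            return tok
    return nucleus[-1][0]

def stochastic_beam_search(beam_width=2, max_len=4, seed=0):
    rng = random.Random(seed)
    beams = [([], 0.0)]  # (token list, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for toks, lp in beams:
            if toks and toks[-1] == "<eos>":
                candidates.append((toks, lp))  # finished sequence carries over
                continue
            probs = next_token_probs(toks)
            # Sample successors instead of enumerating them: this is the
            # nondeterministic step distinguishing stochastic beam search.
            for _ in range(beam_width):
                t = top_p_sample(probs, rng=rng)
                candidates.append((toks + [t], lp + math.log(probs[t])))
        beams = sorted(candidates, key=lambda c: -c[1])[:beam_width]
    return beams
```

With a fixed seed the run is reproducible, but different seeds can explore different candidate sequences, which is the controlled-randomness property the rejection attributes to nondeterministic search.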
It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Duan’s code completion with Luzhnica’s nondeterministic beam search, to modify Duan’s system to employ a nondeterministic beam search as taught by Luzhnica, with a reasonable expectation of success, since they are analogous art from the same field of endeavor related to neural networks. Combining Luzhnica’s functionality with that of Duan results in a system that allows the use of a nondeterministic beam search. The modification would be obvious because one having ordinary skill in the art would be motivated to make this combination to introduce a probabilistic element for better diversity (Luzhnica, see at least fig. 1 and associated texts, a sampling engine can use a random sampling technique, or a modification of such techniques, e.g., Top K or Top P sampling (also known as nucleus sampling) …. a sampling engine can employ stochastic beam search, which reportedly exhibits some of the best properties of both random sampling and beam search … wherein the sampling engine employs one or more methods comprising random sampling, beam search, stochastic beam search, typical sampling, or a combination thereof in the selection of semantic elements from semantic element options in a distribution (aspect 102); Note that random sampling such as nucleus sampling, Top K or Top P sampling employing stochastic beam search is non-deterministic because it introduces controlled randomness into the selection process, for exploring diverse solutions, unlike traditional deterministic beam search, which follows fixed rules).

Duan further teaches: apply a static analysis to the set of output samples to generate a set of samples (Duan, see at least [0018] The sparse retriever is a term-frequency based retrieval technique that captures lexical information and is sensitive to code identifiers.
The dense retriever captures syntactic and semantic information …that may come from lexical similarity and similar functionality; [0029]; [0031] In one aspect, identifier renaming and dead code insertion are used to create the positive code samples; [0032]; [0036] generates a corresponding syntax tree and semantic model that is used to extract the context of the partially-formed source code snippet 212. The parser 208 also updates the syntax tree and semantic model as the developer creates and edits the source code in the source code editor 202; [0045] using algorithms and statistical models to analyze and draw inferences from patterns in data; Note that the gathering of syntactic, semantic, and lexical information and generating a syntax tree are processes of static analysis); output the set of samples (Duan, see at least [0093], The output of the neural decoder transformer model 226 is a matrix of token probabilities for each position in a candidate sequence … to form a partial candidate sequence; [0095] Upon the completion of the beam search, the code completion engine 214 receives the top k candidates 210 likely to complete the partially-formed source code snippet which is sent back to the user interface; Note that the candidates output are the set of samples). Duan does not explicitly teach applying the static analysis based on identification and correction of syntax errors in the set of output samples. Logozzo teaches applying the static analysis based on identification and correction of syntax errors in the set of output samples (Logozzo, see at least [0104] the method 400 is also used to infer and repair syntactic errors within the code of the program; [0002], a static analyzer can be used to detect software bugs within a program. 
In some cases, the IDE suggests simple syntactic fixes to the user based on the detected syntactic errors within the program; [0005] statically analyze a code of a program, determine semantic errors within the code of the program, and, for each semantic error, generate suggested repairs to the code of the program based on a type of the semantic error; [0061] the modular program verifier 304 may be an abstract interpreter or any other type of static analyzer that is capable of performing an automatic program repair procedure).

It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Logozzo’s static analyzer with Duan’s code completion and Luzhnica’s nondeterministic beam search, to modify Duan’s system to detect and correct syntax errors as taught by Logozzo, with a reasonable expectation of success, since they are analogous art from the same field of endeavor related to code development or neural networks. Combining Logozzo’s functionality with that of Duan and Luzhnica results in a system that allows the use of static analysis with error discovery and repair. The modification would be obvious because one having ordinary skill in the art would be motivated to make this combination to perform an automatic program repair procedure by a static analyzer to “infer and repair syntactic errors within the code of the program (Logozzo, see at least [0104] the method 400 is also used to infer and repair syntactic errors within the code of the program; [0002], a static analyzer can be used to detect software bugs within a program.
In some cases, the IDE suggests simple syntactic fixes to the user based on the detected syntactic errors within the program; [0005] statically analyze a code of a program, determine semantic errors within the code of the program, and, for each semantic error, generate suggested repairs to the code of the program based on a type of the semantic error; [0061] the modular program verifier 304 may be an abstract interpreter or any other type of static analyzer that is capable of performing an automatic program repair procedure).”

Claim 2. The apparatus of claim 1, wherein the input data comprises at least one of a natural language description of a process or input-output examples (Duan, see at least [0025] The Bag-of-Words model 116 describes the frequency of the unique source code tokens used in the source code files that are included in the database 122. The Bag-of-Words model 116 is trained on the source code files 108 of the database in order to develop a vocabulary of unique source code tokens. In an aspect, the vocabulary includes n-grams or n-token sequences of source code tokens. The Bag-of-Words model 116 includes the frequency of each n-gram token over all the n-gram tokens in the database 122.
[0038] The code completion engine 214 receives the context of the partially-formed source code snippet 212… The code completion engine 214 transforms the context into a sequence of tokens that is input into the encoder 114 and the BoW model 116; [0078] The code completion engine 124 receives the context of the partially-formed source code snippet 212 … The context 212 includes the partially-formed source code snippet and a number of preceding tokens; [0092] The context 212 and the retrieved source code segment 224 are concatenated to form an input sequence that is applied to the neural decoder transformer model (block 706); Note that the input data for the model, the context concatenated with the code segment, includes a natural language description of a process or input-output examples; the Bag-of-Words model is a core feature in NLP for representing code, and the frequencies are input/output examples 204).

Claim 5. The apparatus of claim 1, wherein, to generate the second input data for the machine learning model, the at least one processor is configured to retrieve, from a codebase sample database, computer code to guide the machine learning model (Duan, see at least [0078] The code completion engine 124 receives the context of the partially-formed source code snippet; [0110], search for a semantically-similar source code snippet of the partially-formed source code snippet in a retrieval source code database, wherein the retrieval source code database includes a plurality of source code segments; [0092] The context 212 and the retrieved source code segment 224 are concatenated to form an input sequence that is applied to the neural decoder transformer model; Note that the code segments (snippets) are retrieved and passed to the model as additional samples).

Claim 6. The apparatus of claim 1, wherein the machine learning model is trained to generate computer code based on the second input data (Duan, see at least [0025]; [0061]; [0062]; [0073] Turning back to FIG.
4, the neural decoder transformer model is trained to predict the source code tokens to complete a partially-formed source code snippet. The neural decoder transformer model is trained on source code snippets from various source code files).

Claim 7. The apparatus of claim 1, wherein the beam search with sampling is performed by a stochastic beam search, by adding sampling to a beam search algorithm, or by a sampling method (Luzhnica, see at least fig. 1 and associated texts, a sampling engine can use a random sampling technique, or a modification of such techniques, e.g., Top K or Top P sampling (also known as nucleus sampling) …. a sampling engine can employ stochastic beam search, which reportedly exhibits some of the best properties of both random sampling and beam search … wherein the sampling engine employs one or more methods comprising random sampling, beam search, stochastic beam search, typical sampling, or a combination thereof in the selection of semantic elements from semantic element options in a distribution (aspect 102)).

Claim 8. The apparatus of claim 1, wherein, to generate the second input data for the machine learning model, the at least one processor is configured to: encode a query associated with the input data and keys from a code retrieval database into a dense vector; and include the query, the keys, and values obtained from the code retrieval database in the second input data for the machine learning model (Duan, see at least [0051] Attention is used to decide which parts of the input sequence are important for each token, especially when decoding long sequences since the encoder is limited to encoding a fixed-size vector. Attention mechanisms gather information about the relevant context of a given token and then encode that context into a vector which represents the token. It is used to identify the relationships between tokens in the long sequence while ignoring other tokens that do not have much bearing on a given prediction.
where the input consists of queries Q and keys K of dimension d.sub.k, and values V of dimension d.sub.V. Q is a matrix that contains the query or vector representation of one token in a sequence, K is the vector representations of all tokens in the sequence, and V is the vector representations of all the tokens in the sequence; [0053] The queries, keys and values are linearly projected h times in parallel with d.sub.V, output values which are concatenated to a final value; [0079]-[0081], Note that the context 212 is used by the encoder and BoW model to generate a corresponding embedding vector).

Per claim 9: Duan, Luzhnica, and Logozzo further disclose: analyzing the set of output samples for the syntax errors to generate a set of syntax wrong samples; and correcting the syntax errors in the set of syntax wrong samples. Logozzo teaches analyzing the set of output samples for syntax errors to generate a set of syntax wrong samples; and correcting the syntax errors in the set of syntax wrong samples (Logozzo, see at least [0104] the method 400 is also used to infer and repair syntactic errors within the code of the program; [0002], a static analyzer can be used to detect software bugs within a program. In some cases, the IDE suggests simple syntactic fixes to the user based on the detected syntactic errors within the program; [0005] statically analyze a code of a program, determine semantic errors within the code of the program, and, for each semantic error, generate suggested repairs to the code of the program based on a type of the semantic error; [0061] the modular program verifier 304 may be an abstract interpreter or any other type of static analyzer that is capable of performing an automatic program repair procedure).

Claim 10.
The apparatus of claim 1, wherein the apparatus is configured on one or more of an edge device and a cloud device associated with a cloud-based compute service (Duan, see at least [0097] a smart phone; Note that a smart phone is an edge device).

Claim 11. The apparatus of claim 1, wherein the at least one processor is configured to batch on a sample dimension such that multiple samples are generated at a same time by the apparatus (Duan, see at least [00001] where the input consists of queries Q and keys K of dimension d.sub.k, and values V of dimension d.sub.V. Q is a matrix that contains the query or vector representation of one token in a sequence, K is the vector representations of all tokens in the sequence, and V is the vector representations of all the tokens in the sequence; [0053] The queries, keys and values are linearly projected h times in parallel with d.sub.V, output values which are concatenated to a final value; [0060]; [0066]; Note that the training dataset is partitioned into batches with each batch of sequences running through the training process, and the queries, keys and values are linearly projected h times in parallel).

Per claims 12, 13, 16-20 and 22, they are the method versions of claims 1, 2, and 5-11, respectively, and are rejected for the same reasons set forth in connection with the rejection of claims 1, 2, and 5-11 above. Per claims 27, 28, 30, and 32-34, they are the medium versions of claims 1, 2, and 5-8, respectively, and are rejected for the same reasons set forth in connection with the rejection of claims 1, 2, and 5-8 above.

Claims 3, 4, 14, 15, 21, 29 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Duan in view of Luzhnica, Logozzo and Clement et al. (US 20230281318, hereafter Clement).

Per claim 3: Duan does not explicitly teach: apply an execution-based filter to the set of samples to determine that at least one syntax correct sample of the set of samples executes properly.
Clement teaches applying an execution-based filter to the set of samples to determine at least one syntax correct sample of the set of samples executes properly (Clement, see at least [0006] A constrained decoding technique incorporates token constraints into a beam search at each iteration of a decoding process in order to generate viable candidate sequences that are syntactically and semantically correct … The token constraints are generated from checking whether a partial solution predicted at each decoding step is feasible based on the production rules of the grammar of the programming language, the syntactic correctness of a partial solution, semantic constraints, and/or static type correctness… A post-processing engine tests the candidate solutions for syntactic correctness and error vulnerability and eliminates those candidate solutions which are not likely to be useful; [0055] At each time step, the beam search engine infers or predicts new token constraints in order to narrow or prune the search space to more viable candidates. The new token constraints are predicted from static analysis code tools that analyze the partial solutions as they are generated at each time step of the beam search. …The tools guide the beam search to select a next best token for a partial solution that is more likely to produce a viable candidate sequence instead of relying only on the decoder's output probabilities; [0021]; [0102]; [0078] The syntax and vulnerability analyzer runs a series of tests for each candidate sequence (block 708). The series of tests include compilation of a candidate sequence in its surrounding context through a compiler or compilation tool, testing that the compiled code works in a build environment, performing a static type check analysis on the candidate sequence to check if the code sequence contains any out-of-scope types, and analyzing the candidate sequence for error vulnerabilities using the separation-logic static code analyzer).
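An execution-based filter of the general kind Clement describes, testing each candidate and discarding those that fail, can be sketched minimally as below. The helper name and the restricted-builtins namespace are assumptions for illustration, not Clement's compiler-and-build-environment pipeline.

```python
def execution_based_filter(samples):
    """Keep only candidate programs that both compile (syntax gate)
    and execute without raising (run-time gate)."""
    runnable = []
    for src in samples:
        try:
            code = compile(src, "<candidate>", "exec")  # syntax check
            exec(code, {"__builtins__": {}})            # restricted execution
            runnable.append(src)
        except Exception:
            continue                                    # candidate filtered out
    return runnable

candidates = [
    "x = 1 + 1",           # compiles and runs
    "x = (1 + ",           # syntax error: rejected at compile time
    "y = undefined_name",  # NameError: rejected at run time
]
good = execution_based_filter(candidates)
```

A production filter would additionally sandbox execution and impose time limits, since generated candidates are untrusted code.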
It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Duan’s code completion with Luzhnica’s nondeterministic beam search, Logozzo’s static analyzer and Clement’s testing of syntax correctness by a static analysis, to modify Duan’s system to incorporate the code correctness testing function as taught by Clement, with a reasonable expectation of success, since they are analogous art from the same field of endeavor related to code development or neural networks. Combining Clement’s functionality with that of Duan, Logozzo and Luzhnica results in a system that allows testing for syntactically correct code samples. The modification would be obvious because one having ordinary skill in the art would be motivated to make this combination to generate viable candidate sequences that are syntactically and semantically correct to prevent error vulnerabilities (Clement, see at least [0006] A constrained decoding technique incorporates token constraints into a beam search at each iteration of a decoding process in order to generate viable candidate sequences … The token constraints are generated from checking whether a partial solution predicted at each decoding step is feasible based on the production rules of the grammar of the programming language, the syntactic correctness of a partial solution, semantic constraints, and/or static type correctness… A post-processing engine tests the candidate solutions for syntactic correctness and error vulnerability and eliminates those candidate solutions which are not likely to be useful; [0055] At each time step, the beam search engine infers or predicts new token constraints in order to narrow or prune the search space to more viable candidates. The new token constraints are predicted from static analysis code tools that analyze the partial solutions as they are generated at each time step of the beam search.
…The tools guide the beam search to select a next best token for a partial solution that is more likely to produce a viable candidate sequence instead of relying only on the decoder's output probabilities; [0021]; [0102]; [0078] The syntax and vulnerability analyzer runs a series of tests for each candidate sequence (block 708). The series of tests include compilation of a candidate sequence in its surrounding context through a compiler or compilation tool, testing that the compiled code works in a build environment, performing a static type check analysis on the candidate sequence to check if the code sequence contains any out-of-scope types, and analyzing the candidate sequence for error vulnerabilities using the separation-logic static code analyzer).

4. The apparatus of claim 3, wherein the execution-based filter is configured to run the set of samples as computer code to determine which of the set of samples executes properly (Clement, see at least [0006] A constrained decoding technique incorporates token constraints into a beam search at each iteration of a decoding process in order to generate viable candidate sequences that are syntactically and semantically correct … The token constraints are generated from checking whether a partial solution predicted at each decoding step is feasible based on the production rules of the grammar of the programming language, the syntactic correctness of a partial solution, semantic constraints, and/or static type correctness… A post-processing engine tests the candidate solutions for syntactic correctness and error vulnerability and eliminates those candidate solutions which are not likely to be useful; [0055] At each time step, the beam search engine infers or predicts new token constraints in order to narrow or prune the search space to more viable candidates.
The new token constraints are predicted from static analysis code tools that analyze the partial solutions as they are generated at each time step of the beam search. …The tools guide the beam search to select a next best token for a partial solution that is more likely to produce a viable candidate sequence instead of relying only on the decoder's output probabilities; [0021]; [0102]; [0078] The syntax and vulnerability analyzer runs a series of tests for each candidate sequence (block 708). The series of tests include compilation of a candidate sequence in its surrounding context through a compiler or compilation tool, testing that the compiled code works in a build environment, performing a static type check analysis on the candidate sequence to check if the code sequence contains any out-of-scope types, and analyzing the candidate sequence for error vulnerabilities using the separation-logic static code analyzer).

Per claims 14, 15, and 21, they are the product versions of claims 3, 4, and 10, respectively, and are rejected for the same reasons set forth in connection with the rejection of claims 3, 4, and 10 above. Per claim 29, it is the medium version of claim 3, and is rejected for the same reasons set forth in connection with the rejection of claim 3 above. Per claim 31, it is the medium version of claim 4, and is rejected for the same reasons set forth in connection with the rejection of claim 4 above.

Examiner’s Note

The Examiner has pointed out particular references contained in the prior art of record within the body of this action for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply.
Applicant, in preparing the response, should consider fully the entire reference as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Response to Arguments

Applicant’s arguments with respect to claims 1-22 and 27-34 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. In response to applicant’s statement regarding the 101 rejection that the pending claims are directed to statutory subject matter, the amended claims are still directed to an idea of itself, i.e., mental processes that can be performed in the human mind, or by a human using a pen and paper. Identification and correction of syntax errors are commonly performed by a developer/human. Such tasks were/are a primary part of a developer’s work (code review). In the context of code development, static analysis simply means examining code without actually executing it, and finding and fixing syntax errors in this way can absolutely be performed mentally. The static analysis includes checking syntax for misspelled keywords, missing semicolons, unmatched {}, etc., and a human can mentally trace the code logic flow to see if the syntax facilitates the intended result. This analysis often results in more accurate error detection and mitigation when performed by a human.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US20210357307 is related to automated program repair using a bug type annotation derived from a static code analysis; CN 110704297 is related to finding and repairing the syntax error before execution.
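The examiner's characterization of static analysis, examining code for syntax errors without ever executing it, is also what a parser does mechanically. As a minimal sketch of that distinction (hypothetical code, not drawn from the application or any cited reference), a syntax check can be performed by parsing alone, with no execution:

```python
import ast

# Static syntax analysis in the narrow sense the action describes:
# inspect code for syntax errors (malformed statements, unmatched
# brackets, etc.) without running it. Parsing builds an AST only;
# the code is never executed.

def static_syntax_check(src):
    """Return None if src parses cleanly, else a short error description."""
    try:
        ast.parse(src)  # parse only; no execution occurs
    except SyntaxError as err:
        return f"line {err.lineno}: {err.msg}"
    return None
```

Note that such a check catches syntax errors (e.g. an unclosed parenthesis) but deliberately says nothing about runtime behavior, which is the line the claims draw between static analysis and the execution-based filter.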
Any inquiry concerning this communication or earlier communications from the examiner should be directed to INSUN KANG whose telephone number is (571)272-3724. The examiner can normally be reached M-Th 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do, can be reached at 571-272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/INSUN KANG/
Primary Examiner, Art Unit 2193

Prosecution Timeline

Nov 29, 2023
Application Filed
Sep 06, 2025
Non-Final Rejection — §101, §103, §112
Nov 11, 2025
Interview Requested
Nov 18, 2025
Examiner Interview Summary
Nov 18, 2025
Applicant Interview (Telephonic)
Dec 02, 2025
Response Filed
Jan 08, 2026
Final Rejection — §101, §103, §112
Feb 17, 2026
Response after Non-Final Action
Mar 06, 2026
Request for Continued Examination
Mar 13, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596632
METHOD FOR TESTING A COMPUTER PROGRAM
2y 5m to grant Granted Apr 07, 2026
Patent 12578981
GAME TRANSLATION METHOD, AND ELECTRONIC DEVICE, AND COMPUTER READABLE MEDIUM THEREOF
2y 5m to grant Granted Mar 17, 2026
Patent 12578945
INSTANT INSTALLATION OF APPS
2y 5m to grant Granted Mar 17, 2026
Patent 12530211
SYSTEMS AND METHODS FOR DYNAMIC SERVER CONTROL BASED ON ESTIMATED SCRIPT COMPLEXITY
2y 5m to grant Granted Jan 20, 2026
Patent 12498906
INLINE CONVERSATION WITH ARTIFICIAL INTELLIGENCE WITHIN CODE EDITOR USER INTERFACE
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+40.2%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.
