Prosecution Insights
Last updated: April 19, 2026
Application No. 18/588,113

AUTOMATIC CURATION OF REUSABLE CODE SNIPPETS FOR LLM AGENTS

Non-Final OA (§101, §103)
Filed: Feb 27, 2024
Examiner: RAMPURIA, SATISH
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: Cisco Technology Inc.
OA Round: 1 (Non-Final)

Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (above average; 740 granted / 833 resolved; +33.8% vs TC avg)
Interview Lift: +25.2% on resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 21 applications currently pending
Career History: 854 total applications across all art units

Statute-Specific Performance

§101: 20.3% (-19.7% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)

Comparisons are against the Tech Center average estimate • Based on career data from 833 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the application filed on 02/27/2024. Claims 1-20 are pending.

Examiner's Note

Please note that the Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.

Claim 1: This claim is within at least one of the four categories of patent-eligible subject matter, as it is directed to a method claim under Step 1.
A method, comprising: storing, by a device, code-based functions in a database that is accessible to large language model agents for coding, wherein the code-based functions are associated with traces each relating to a respective question and a respective list of sequential code-based functions used to answer the respective question by a past agent run; determining, by the device, one or more sequential groupings of the code-based functions that are candidates for corresponding reduction into a merged function; determining, by the device, whether the merged function is acceptable; and adding, by the device and responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding.

Regarding claim 1, the limitations "determining, by the device, one or more sequential groupings of the code-based functions that are candidates for corresponding reduction into a merged function" and "determining, by the device, whether the merged function is acceptable," as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. For example, a person is capable of selecting a set of inputs and grouping them according to their similarities/differences for condensing and merging. In the same manner, a person is capable of determining, with the aid of pen and paper, whether the merged function is valid for acceptance and accessible to model agents, perhaps by comparison. Therefore, these limitations encompass a human mind carrying out the function through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite, and fall within, the "Mental Processes" grouping of abstract ideas under Prong 1.
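For readers without the application in hand, the overall shape of claim 1 (store traced functions, identify recurring sequential groupings, merge, gate on acceptability, add back to the database) can be illustrated with a minimal sketch. This is purely illustrative: the names (`find_merge_candidates`, `curate`, the `traces` field layout, `merge_fn`, `is_acceptable`) are hypothetical and are not drawn from the application's specification.

```python
from collections import Counter

def find_merge_candidates(traces, min_support=2):
    """Count adjacent (sequential) function pairs across all traces; pairs
    recurring in at least `min_support` traces are merge candidates."""
    pair_counts = Counter()
    for trace in traces:
        funcs = trace["functions"]  # ordered list of function names in the run
        for a, b in zip(funcs, funcs[1:]):
            pair_counts[(a, b)] += 1
    return [pair for pair, n in pair_counts.items() if n >= min_support]

def curate(database, traces, merge_fn, is_acceptable):
    """One pass of the claimed loop: propose a merged function for each
    candidate grouping and add it to the database only if acceptable."""
    for pair in find_merge_candidates(traces):
        merged = merge_fn(pair)  # e.g., an LLM-produced combined snippet
        if is_acceptable(merged):
            database[merged["name"]] = merged
    return database
```

For example, with two traces sharing the sequence fetch → parse, only that recurring pair becomes a candidate and (if accepted) a new database entry; a pair seen once is left alone.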
Under Prong 2, the additional element "by a device" is recited at a high level of generality such that it amounts to no more than mere instructions for executing/running code that is accessible to a large language model, merely using generic computing equipment to execute/run software tools to perform the abstract idea. See MPEP 2106.05(f).

The additional elements "storing,… code-based functions in a database that is accessible to large language model agents for coding, wherein the code-based functions are associated with traces each relating to a respective question and a respective list of sequential code-based functions used to answer the respective question by a past agent run" and "adding,… and responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding" do nothing more than add insignificant extra-solution activity to the judicial exception, namely merely storing/gathering data for automation. See MPEP § 2106.05(h).

Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element "by a device" amounts to no more than mere instructions, or generic computer and/or computer components, to carry out the exception, and thus cannot amount to an inventive concept. See MPEP 2106.05(f).
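Dependent claims 3-4 (treated below) summarize the traces as a directed graph whose nodes are questions, functions, and final answers, with edges weighted by the number of traces flowing through them. A minimal sketch, again with hypothetical names and an assumed trace layout:

```python
from collections import defaultdict

def build_trace_graph(traces):
    """Summarize traces as a directed graph: nodes are the question, each
    code-based function used (in order), and the final answer; each edge
    weight counts the traces flowing through that edge."""
    weights = defaultdict(int)  # (src_node, dst_node) -> number of traces
    for trace in traces:
        path = [("Q", trace["question"])]
        path += [("F", name) for name in trace["functions"]]
        path.append(("A", trace["answer"]))
        for src, dst in zip(path, path[1:]):
            weights[(src, dst)] += 1
    return dict(weights)
```

Two identical past runs would yield weight 2 on every edge along their shared path, which is the kind of recurring flow the claims treat as a merge signal.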
For the additional elements "storing,… code-based functions in a database that is accessible to large language model agents for coding, wherein the code-based functions are associated with traces each relating to a respective question and a respective list of sequential code-based functions used to answer the respective question by a past agent run" and "adding,… and responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding," the courts have recognized storing and retrieving information in memory as a well-understood, routine, and conventional function performed in a merely generic manner (e.g., at a high level of generality), or as insignificant extra-solution activity. See MPEP 2106.05(d)(II)(iv). Accordingly, the claims are not patent eligible under 35 USC 101.

2. The method of claim 1, wherein determining whether the merged function is acceptable comprises: evaluating the merged function for any execution errors; and pruning the merged function responsive to having execution errors.

The limitations for this claim further recite an additional mental process under Prong 1.

3. The method of claim 1, further comprising: building a directed graph summarizing how the code-based functions in the database have been used by the traces, wherein nodes of the directed graph correspond to questions, final answers to the questions, and particular code-based functions used to reach the final answers to the questions, and wherein edges between the nodes follow flows of past agent runs.

The limitations for this claim further recite an additional mental process under Prong 1.

4. The method of claim 3, further comprising: weighting the edges based on a number of traces going through each edge according to the flows of the past agent runs.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

5.
The method of claim 3, further comprising: filtering the directed graph from all of the traces to filtered traces that are either only successful traces or only failed traces for distinguished treatments of the filtered traces.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

6. The method of claim 1, further comprising: displaying the merged function for reviewer review; and receiving a response from the reviewer review regarding the merged function, the response being one of either: acceptance of the merged function, modification of the merged function for acceptance, or refusal of the merged function to prevent future use of the merged function.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

7. The method of claim 6, further comprising: providing contextual information for the reviewer review selected from a group consisting of: the merged function; sample runs performed on the merged function with corresponding inputs and respective outputs; and values of intermediate variables in the merged function.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

8. The method of claim 6, further comprising: providing an option for reviewer-initiated execution of the merged function with reviewer-supplied inputs.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

9. The method of claim 1, further comprising: updating the traces, responsive to the merged function being acceptable, with the merged function such that the respective list of sequential code-based functions used to answer the respective question passes through the merged function instead of the one or more sequential groupings of the code-based functions.
The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

10. The method of claim 1, further comprising: creating, responsive to the merged function being acceptable, a new trace for an original question by at least one of the large language model agents with the merged function available in the database.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

11. The method of claim 1, wherein the candidates for corresponding reduction into the merged function comprise two trace-connected code-based functions each with exactly one inbound edge and exactly one outbound edge.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

12. The method of claim 1, further comprising: wherein the candidates for corresponding reduction into the merged function comprise a first code-based function that forks to two or more code-based functions, wherein the merged function comprises a first step and two or more possible steps dependent on an outcome of the first step.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

13. The method of claim 1, further comprising: iteratively merging the merged function with one or more other functions into a further merged function.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

14.
The method of claim 1, further comprising: producing the merged function by prompting a large language model with i) the one or more sequential groupings of the code-based functions that are the candidates for corresponding reduction into the merged function and ii) instructions to the large language model to generalize, into one or more new code snippets, all of the one or more sequential groupings of the code-based functions that are the candidates for corresponding reduction into the merged function.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

15. The method of claim 1, wherein the large language model agents comprise troubleshooting agents.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

Claim 16: This claim is within at least one of the four categories of patent-eligible subject matter, as it is directed to an apparatus claim under Step 1.

16. An apparatus, comprising: one or more network interfaces to communicate with a network; a processor coupled to the one or more network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor, the process comprising: storing code-based functions in a database that is accessible to large language model agents for coding, wherein the code-based functions are associated with traces each relating to a respective question and a respective list of sequential code-based functions used to answer the respective question by a past agent run; determining one or more sequential groupings of the code-based functions that are candidates for corresponding reduction into a merged function; determining whether the merged function is acceptable; and adding, responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents
for coding.

Regarding claim 16, the limitations "determining one or more sequential groupings of the code-based functions that are candidates for corresponding reduction into a merged function" and "determining whether the merged function is acceptable," as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. For example, a person is capable of selecting a set of inputs and grouping them according to their similarities/differences for condensing and merging. In the same manner, a person is capable of determining, with the aid of pen and paper, whether the merged function is valid for acceptance and accessible to model agents, perhaps by comparison. Therefore, these limitations encompass a human mind carrying out the function through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite, and fall within, the "Mental Processes" grouping of abstract ideas under Prong 1.

Under Prong 2, the additional elements "An apparatus, comprising: one or more network interfaces to communicate with a network; a processor coupled to the one or more network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor" are recited at a high level of generality such that they amount to no more than mere instructions for executing/running code that is accessible to a large language model, merely using generic computing equipment to execute/run software tools to perform the abstract idea. See MPEP 2106.05(f).
The additional elements "storing code-based functions in a database that is accessible to large language model agents for coding, wherein the code-based functions are associated with traces each relating to a respective question and a respective list of sequential code-based functions used to answer the respective question by a past agent run" and "adding, responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding" do nothing more than add insignificant extra-solution activity to the judicial exception, namely merely storing/gathering data for automation. See MPEP § 2106.05(h).

Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements "An apparatus, comprising: one or more network interfaces to communicate with a network; a processor coupled to the one or more network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor" amount to no more than mere instructions, or generic computer and/or computer components, to carry out the exception, and thus cannot amount to an inventive concept. See MPEP 2106.05(f).
For the additional elements "storing code-based functions in a database that is accessible to large language model agents for coding, wherein the code-based functions are associated with traces each relating to a respective question and a respective list of sequential code-based functions used to answer the respective question by a past agent run" and "adding, responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding," the courts have recognized storing and retrieving information in memory as a well-understood, routine, and conventional function performed in a merely generic manner (e.g., at a high level of generality), or as insignificant extra-solution activity. See MPEP 2106.05(d)(II)(iv). Accordingly, the claims are not patent eligible under 35 USC 101.

17. The apparatus of claim 16, wherein determining whether the merged function is acceptable comprises: evaluating the merged function for any execution errors; and pruning the merged function responsive to having execution errors.

The limitations for this claim further recite an additional mental process under Prong 1.

18. The apparatus of claim 16, the process further comprising: building a directed graph summarizing how the code-based functions in the database have been used by the traces, wherein nodes of the directed graph correspond to questions, final answers to the questions, and particular code-based functions used to reach the final answers to the questions, and wherein edges between the nodes follow flows of past agent runs.

The limitations for this claim further recite an additional mental process under Prong 1.

19.
The apparatus of claim 16, the process further comprising: displaying the merged function for reviewer review; and receiving a response from the reviewer review regarding the merged function, the response being one of either: acceptance of the merged function, modification of the merged function for acceptance, or refusal of the merged function to prevent future use of the merged function.

The limitations for this claim further recite additional insignificant extra-solution activity under Prong 2.

Claim 20: This claim is within at least one of the four categories of patent-eligible subject matter, as it is directed to a medium claim under Step 1.

20. A tangible, non-transitory, computer-readable medium storing program instructions that cause a device to execute a process comprising: storing code-based functions in a database that is accessible to large language model agents for coding, wherein the code-based functions are associated with traces each relating to a respective question and a respective list of sequential code-based functions used to answer the respective question by a past agent run; determining one or more sequential groupings of the code-based functions that are candidates for corresponding reduction into a merged function; determining whether the merged function is acceptable; and adding, responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding.

Regarding claim 20, the limitations "determining one or more sequential groupings of the code-based functions that are candidates for corresponding reduction into a merged function" and "determining whether the merged function is acceptable," as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process.
For example, a person is capable of selecting a set of inputs and grouping them according to their similarities/differences for condensing and merging. In the same manner, a person is capable of determining, with the aid of pen and paper, whether the merged function is valid for acceptance and accessible to model agents, perhaps by comparison. Therefore, these limitations encompass a human mind carrying out the function through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite, and fall within, the "Mental Processes" grouping of abstract ideas under Prong 1.

Under Prong 2, the additional element "A tangible, non-transitory, computer-readable medium storing program instructions that cause a device to execute a process comprising" is recited at a high level of generality such that it amounts to no more than mere instructions for executing/running code that is accessible to a large language model, merely using generic computing equipment to execute/run software tools to perform the abstract idea. See MPEP 2106.05(f).

The additional elements "storing code-based functions in a database that is accessible to large language model agents for coding, wherein the code-based functions are associated with traces each relating to a respective question and a respective list of sequential code-based functions used to answer the respective question by a past agent run" and "adding, responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding" do nothing more than add insignificant extra-solution activity to the judicial exception, namely merely storing/gathering data for automation. See MPEP § 2106.05(h).

Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional element "A tangible, non-transitory, computer-readable medium storing program instructions that cause a device to execute a process comprising" amounts to no more than mere instructions, or generic computer and/or computer components, to carry out the exception, and thus cannot amount to an inventive concept. See MPEP 2106.05(f).

For the additional elements "storing code-based functions in a database that is accessible to large language model agents for coding, wherein the code-based functions are associated with traces each relating to a respective question and a respective list of sequential code-based functions used to answer the respective question by a past agent run" and "adding, responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding," the courts have recognized storing and retrieving information in memory as a well-understood, routine, and conventional function performed in a merely generic manner (e.g., at a high level of generality), or as insignificant extra-solution activity. See MPEP 2106.05(d)(II)(iv). Accordingly, the claims are not patent eligible under 35 USC 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6-13, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2024/0020096 to Chen et al. in view of US 2023/0259705 to Tunstall-Pedoe et al.

Per claim 1: Chen discloses: A method, comprising: storing, by a device, code-based functions in a database that is accessible to large language model agents for coding (Paragraphs [0056, 0139, 0051]: "performing at least one of… storing the at least one identified computer code sample (e.g., locally and/or remotely) … system memory for storing data and can utilize one or more I/O devices or network interfaces for transmitting or receiving data.
ML algorithms database 890 (or other data storage 708) … Training data may include, e.g., datasets collected from a variety of public software repositories (e.g., hundreds, thousands, millions, or even billions of datasets) (i.e., accessible to ML agents for coding/generation)");

wherein the code-based functions are associated with traces each relating to a respective question and a respective list of sequential code-based functions used to answer the respective question by a past agent run (Paragraphs [0139, 0053]: "a machine learning model may include multiple epochs, or passes of data (e.g., training data 802 a) through a machine learning model process (e.g., a training process)… Data into to a model to train the model may include input data (e.g., as described above) and/or data previously output from a model (e.g., forming recursive learning feedback)… such sources may provide problem (i.e., question) statements, function signatures, and solutions (i.e., answer), which may be collected and used as training data, using the problem description as the docstring… curated from open source projects utilizing continuous integration (e.g., by tracing (from past execution/run) and collecting inputs and outputs for functions called during integration testing in order to create unit tests for the functions)");

determining, by the device, one or more sequential groupings of the code-based functions that are candidates for corresponding reduction into a merged function (Paragraph [0094]: "docstring generation model may be trained using concatenated strings… A concatenated string, as used herein, may refer to a sequence of strings combined, merged, joined, or associated together to create a unified or cohesive data entity… a concatenated string may include two or more strings (e.g., a function signature and a reference solution, a reference solution and a docstring, a function signature and a docstring, or a function signature, reference solution").
Chen does not explicitly disclose determining, by the device, whether the merged function is acceptable; and adding, by the device and responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding.

However, Tunstall-Pedoe discloses, in an analogous computer system, determining, by the device, whether the merged function is acceptable (Paragraph [0336]: "check if any results can be found by executing computation units… The computation unit can then be called to get valid output values for the processed passage's unknowns" (it is obvious to evaluate the merged function for acceptability via execution/scores)); and adding, by the device and responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding (Paragraphs [0202, 0378]: "lookup in the passage store if there are any passages that can be directly mapped with the passage being processed… exactly the same structure as a passage in the passage store, with all nodes matching… match against are valid results… recursive approach means that all data fetching from our database systems… modifications allows as much data fetching and processing" (it is obvious to add the verified merged function for the LLM to access)).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of determining, by the device, whether the merged function is acceptable, and adding, by the device and responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding, as taught by Tunstall-Pedoe, into the method of generating code for fine-tuning the language model as taught by Chen.
The modification would be obvious because one of ordinary skill in the art would be motivated to add/incorporate the features of determining, by the device, whether the merged function is acceptable, and adding, by the device and responsive to the merged function being acceptable, the merged function to the database of the code-based functions that are accessible to the large language model agents for coding, to provide an efficient technique for merging/combining the functions so as to optimize the training process of LLMs for better LLM output.

Per claim 2: Chen discloses: 2. The method of claim 1, wherein determining whether the merged function is acceptable comprises: evaluating the merged function for any execution errors (Chen discloses verifying… evaluating each of the one or more generated computer samples based on at least one unit test… functional correctness score; see Paragraphs [0046, 0062, 0064]: "automatically evaluating the correctness of synthesized code, e.g., via unit testing or heuristic ranking instead of manual evaluation… verifying may include evaluating each of the one or more generated computer code samples… necessarily providing a functional correctness score… verifying may further include evaluating each of the one or more generated computer code samples based on a threshold associated with the at least one unit test").

Chen does not explicitly disclose pruning the merged function responsive to having execution errors. However, Tunstall-Pedoe discloses, in an analogous computer system, pruning the merged function responsive to having execution errors (Paragraph [0336]: "check if any results can be found by executing computation units… The computation unit can then be called to get valid output values for the processed passage's unknowns" (it is obvious to evaluate the merged function for acceptability via execution/scores)).
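Claim 2's acceptability gate (evaluating the merged function for execution errors and pruning on failure) can be sketched as a simple test-execution check. The entry-point name `merged` and the helper's signature are assumptions for illustration, not taken from the application or either reference, and a real system would sandbox this rather than use bare `exec`:

```python
def evaluate_merged_function(source, sample_inputs):
    """Execute a merged code snippet on sample inputs and prune (return
    False) if defining or running it raises any execution error."""
    namespace = {}
    try:
        exec(source, namespace)   # define the merged snippet
        fn = namespace["merged"]  # assumed entry-point name
        for args in sample_inputs:
            fn(*args)             # any exception triggers pruning
    except Exception:
        return False              # prune: execution errors observed
    return True                   # acceptable: all sample runs succeeded
```

A snippet that runs cleanly on its sample inputs is kept; one that raises (or fails to compile) is pruned, matching the claim's evaluate-then-prune structure.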
The feature of pruning the merged function responsive to having execution errors would be obvious for the reasons set forth in the rejection of claim 1.

Per claim 6: The rejection of claim 1 is incorporated and, further, Chen does not explicitly disclose displaying the merged function for reviewer review; and receiving a response from the reviewer review regarding the merged function, the response being one of either: acceptance of the merged function, modification of the merged function for acceptance, or refusal of the merged function to prevent future use of the merged function.

However, Tunstall-Pedoe discloses, in an analogous computer system, displaying the merged function for reviewer review; and receiving a response from the reviewer review regarding the merged function, the response being one of either: acceptance of the merged function, modification of the merged function for acceptance, or refusal of the merged function to prevent future use of the merged function (Paragraphs [0795-0821]: "human users will already accept them and so they can be safely not displayed to users. In some examples, a large language model (LLM) is used to provide a concise explanation derived from the detailed explanation by providing examples and concise summaries in the prompt, then providing the detailed explanation and then asking the LLM to provide a completion thus generating the concise explanation").

The feature of displaying the merged function for reviewer review, and receiving a response from the reviewer review regarding the merged function, the response being one of either acceptance of the merged function, modification of the merged function for acceptance, or refusal of the merged function to prevent future use of the merged function, would be obvious for the reasons set forth in the rejection of claim 1.
Per claim 7: The rejection of claim 6 is incorporated and, further, Chen does not explicitly disclose providing contextual information for the reviewer review selected from a group consisting of: the merged function; sample runs performed on the merged function with corresponding inputs and respective outputs; and values of intermediate variables in the merged function. However, Tunstall-Pedoe discloses, in an analogous computer system, providing such contextual information for the reviewer review (Paragraphs [0795-0821] "human users will already accept them and so they can be safely not displayed to users. In some examples, a large language model (LLM) is used to provide a concise explanation derived from the detailed explanation by providing examples and concise summaries in the prompt, then providing the detailed explanation and then asking the LLM to provide a completion thus generating the concise explanation").

The feature of providing contextual information for the reviewer review selected from this group would be obvious for the reasons set forth in the rejection of claim 1.

Per claim 8: The rejection of claim 6 is incorporated and, further, Chen does not explicitly disclose providing an option for reviewer-initiated execution of the merged function with reviewer-supplied inputs. However, Tunstall-Pedoe discloses, in an analogous computer system, providing an option for reviewer-initiated execution of the merged function with reviewer-supplied inputs (Paragraph [0023] "analyse the continuation output (e.g.
text output) generated by the LLM in response to a prompt to enable an improved version of that continuation output to be provided to a user"). The feature of providing an option for reviewer-initiated execution of the merged function with reviewer-supplied inputs would be obvious for the reasons set forth in the rejection of claim 1.

Per claim 9: Chen discloses: 9. The method of claim 1, further comprising: updating the traces, responsive to the merged function being acceptable, with the merged function such that the respective list of sequential code-based functions used to answer the respective question passes through the merged function instead of the one or more sequential groupings of the code-based functions (Paragraphs [0120, 0139] "docstring generation model may be trained… added, merged, divided, duplicated, repeated (e.g., as part of a machine learning process), modified, performed sequentially, performed in parallel, and/or deleted… Data source(s) 802 may include one or more of training data 802a (e.g., input data to feed a machine learning model as part of one or more training processes), validation (i.e., acceptable) data 802b (e.g., data against which at least one processor may compare model output with, such as to determine model output quality), and/or reference data 802c").

Per claim 10: Chen discloses: 10. The method of claim 1, further comprising: creating, responsive to the merged function being acceptable, a new trace for an original question by at least one of the large language model agents with the merged function available in the database (Paragraph [0121] "evaluating the trained docstring generation model. Step 650 may include, e.g., assessing the docstring generation model's performance based on a validation data set (e.g., computing evaluation metrics such as accuracy, precision, BLEU score, ROUGE score, and/or analyzing results and adjusting model architecture)").
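For illustration only, the trace-update step recited in claim 9 (rewrite each trace's list of sequential code-based functions so it passes through the accepted merged function instead of the original grouping) might be sketched as below. This is a hypothetical sketch; all names are invented and not from the application.

```python
# Hypothetical sketch of the claim 9 trace update: once a merged function
# is accepted, replace each occurrence of the original sequential grouping
# in a trace with the single merged function's name.

def update_trace(trace, grouping, merged_name):
    """Return a copy of trace with every run matching `grouping`
    replaced by `merged_name`."""
    out, i, k = [], 0, len(grouping)
    while i < len(trace):
        if trace[i:i + k] == grouping:
            out.append(merged_name)  # route through the merged function
            i += k
        else:
            out.append(trace[i])
            i += 1
    return out

trace = ["fetch_logs", "parse_json", "filter_errors", "summarize"]
update_trace(trace, ["parse_json", "filter_errors"], "parse_and_filter")
# -> ["fetch_logs", "parse_and_filter", "summarize"]
```

Traces that never used the grouping come back unchanged, which matches the claim's requirement that only the affected sequential groupings are rerouted.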
Per claim 11: The rejection of claim 1 is incorporated and, further, Chen does not explicitly disclose wherein the candidates for corresponding reduction into the merged function comprise two trace-connected code-based functions each with exactly one inbound edge and exactly one outbound edge. However, Tunstall-Pedoe discloses, in an analogous computer system, wherein the candidates for corresponding reduction into the merged function comprise two trace-connected code-based functions each with exactly one inbound edge and exactly one outbound edge (Paragraphs [0202, 0378] "lookup in the passage store if there are any passages that can be directly mapped with the passage being processed… exactly the same structure as a passage in the passage store, with all nodes matching… match against are valid results… recursive approach means that all data fetching from our database systems… modifications allows as much data fetching and processing" (it is obvious to add the verified merged function for the LLM to access)).

The feature of the candidates for corresponding reduction into the merged function comprising two trace-connected code-based functions, each with exactly one inbound edge and exactly one outbound edge, would be obvious for the reasons set forth in the rejection of claim 1.

Per claim 12: The rejection of claim 1 is incorporated and, further, Chen does not explicitly disclose wherein the candidates for corresponding reduction into the merged function comprise a first code-based function that forks to two or more code-based functions, wherein the merged function comprises a first step and two or more possible steps dependent on an outcome of the first step.
However, Tunstall-Pedoe discloses, in an analogous computer system, wherein the candidates for corresponding reduction into the merged function comprise a first code-based function that forks to two or more code-based functions, wherein the merged function comprises a first step and two or more possible steps dependent on an outcome of the first step (Paragraph [0203] "check if any results can be found by executing computation units; it is checked if this passage matches against any passages in a computation unit description; all non-unknown nodes in the passage being processed must match the same nodes in the corresponding position in the computation description or align with a computation input unknown"). The feature of candidates comprising a first code-based function that forks to two or more code-based functions, with the merged function comprising a first step and two or more possible steps dependent on an outcome of the first step, would be obvious for the reasons set forth in the rejection of claim 1.

Per claim 13: The rejection of claim 1 is incorporated and, further, Chen does not explicitly disclose iteratively merging the merged function with one or more other functions into a further merged function. However, Tunstall-Pedoe discloses, in an analogous computer system, iteratively merging the merged function with one or more other functions into a further merged function (Paragraph [0203] "mappings for unknowns used in the reasoning passage are found by mapping with the passage being processed; this mapping can then be applied to the front half of the reasoning passage to generate a list of passages that, if they can be matched with known or generated processing language and mappings found for them, will prove and find valid mappings for the focus passage; solutions for the list of passages can then be found recursively").
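For illustration only, the structural condition recited in claim 11 (two trace-connected code-based functions each with exactly one inbound edge and exactly one outbound edge) amounts to finding pure sequential links in the directed graph built from the traces. The sketch below is hypothetical; the edge-list representation and names are invented, not from the application.

```python
# Hypothetical sketch of the claim 11 candidate test: in a directed graph
# assembled from traces, an edge (src, dst) is a merge candidate only when
# both endpoints have in-degree 1 and out-degree 1, i.e. neither function
# forks, joins, starts, or terminates a trace.
from collections import defaultdict

def merge_candidates(edges):
    """edges: (src, dst) pairs collected from traces; return the pairs
    whose endpoints each have exactly one inbound and one outbound edge."""
    indeg, outdeg = defaultdict(int), defaultdict(int)
    for src, dst in edges:
        outdeg[src] += 1
        indeg[dst] += 1
    return [
        (src, dst) for src, dst in edges
        if indeg[src] == 1 and outdeg[src] == 1
        and indeg[dst] == 1 and outdeg[dst] == 1
    ]

# "a" -> "b" is the only pure sequential link: "q" forks and "r" is terminal.
edges = [("q", "a"), ("a", "b"), ("b", "r"), ("q", "x")]
merge_candidates(edges)  # -> [("a", "b")]
```

A forking node like "q" fails the out-degree test, which is exactly the situation claim 12 handles separately with a merged function whose later steps depend on the first step's outcome.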
The feature of iteratively merging the merged function with one or more other functions into a further merged function would be obvious for the reasons set forth in the rejection of claim 1.

Per claim 15: Chen discloses: 15. The method of claim 1, wherein the large language model agents comprise troubleshooting agents (Paragraph [0139] "System 800 may also include machine learning (ML) modeling engine 830, which may be configured to execute one or more operations on a machine learning model (e.g., model training, model re-configuration, model validation, model testing)").

Claims 16-17 and 19 are the apparatus/system claims corresponding to method claims 1-2 and 6, respectively, and are rejected under the same rationale set forth in connection with the rejection of claims 1-2 and 6, respectively, as noted above. Claim 20 is the medium claim corresponding to method claim 1 and is rejected under the same rationale set forth in connection with the rejection of claim 1, as noted above.

Allowable Subject Matter

Claims 3 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claims 4-5 are objected to by virtue of their respective dependencies on claim 3. Please note that applicant must overcome the outstanding § 101 rejections above in order for these claims to be allowed.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Related cited art:

Qian, Chen, et al. "Communicative agents for software development." arXiv preprint arXiv:2307.07924 (2023), pp. 1-29.
Babaei Giglou, Hamed, Jennifer D'Souza, and Sören Auer. "LLMs4OL: Large language models for ontology learning." International Semantic Web Conference. Cham: Springer Nature Switzerland, 2023, pp. 2-27.
Wan, Zhongwei, et al. "Efficient large language models: A survey." arXiv preprint arXiv:2312.03863 (2023),
pp. 1-67.

US20250045256 discloses a method to be executed on a computing device comprising (i) accessing and/or modifying a database to be automatically curated, (ii) optionally accessing additional data or information sources for further useful data or information, and (iii) using one or more pre-trained large language models (LLMs) accessed via API or other connections, issuing prompts and retrieving prompt-answers and executing database curation requests that specify database curation tasks to be performed on at least one sub-structure of the database, the tasks comprising (a) a database enrichment task to compute new data records to be inserted into the database sub-structure, (b) a database verification task to verify, using the one or more LLMs, data contained in the sub-structure and identify incorrect data, (c) a database update, and (d) a null-value or missing-value replacement task. The requested tasks are automatically performed via a computation comprising an adaptively generated prompt sequence.

US12475086 discloses a method for database constraint generation, executed by at least one processor on a computing device accessing one or more large language models (LLMs), comprising retrieving data and/or metadata from a database; generating prompts by parameterizing inputs with concrete values; interacting with LLMs through these prompts to obtain and analyze responses; and performing data intelligence processing to derive natural-language descriptions of structural database elements. The method enables generating database constraints from defined classes, such as attribute-domain restrictions and intra-relational and inter-relational constraints. Constraints include semantic, syntactic, and dependency-based types. Orchestration of constraint learning involves predefined or dynamic workflows incorporating tasks like database sampling, constraint testing, and refinement.
It employs LLM-based techniques to generate candidate rules and optimize constraints through iterative testing and scoring. The method further supports counterexample identification, score aggregation, and rule evaluation to ensure robust constraint generation and refinement.

US20200265060 discloses a system, method, and computer-readable medium to extract information from at least one of code and text documentation, the extracted information conforming to a base ontology and being extracted in the context of a knowledge graph; add the extracted information to the knowledge graph; generate, in a mixed interaction with a user selectively in communication with the system, computational models including scientific knowledge; and persist, in a memory, a record of the generated computational models.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Satish Rampuria, whose telephone number is 571-272-3732. The examiner can normally be reached Monday-Friday from 8:30 AM to 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chat Do, can be reached at 571-272-3721. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/Satish Rampuria/
Primary Examiner, Art Unit 2193

Prosecution Timeline

Feb 27, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §101, §103
Apr 16, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596353
Industrial Field Device Monitoring System
2y 5m to grant Granted Apr 07, 2026
Patent 12596630
PROCESSOR SUPPORT FOR USING MEMORY PAGE MARKINGS AS LOGGING CUES TO SIMULTANEOUSLY RECORD PLURAL EXECUTION CONTEXTS INTO INDEPENDENT EXECUTION TRACES
2y 5m to grant Granted Apr 07, 2026
Patent 12592302
SYSTEMS AND METHODS FOR INACCURACY DETECTION AND PREVENTION WITHIN PRESCRIPTION INFORMATION
2y 5m to grant Granted Mar 31, 2026
Patent 12585571
MULTIPLE MODES OF STORING AND QUERYING TRACE DATA IN A MICROSERVICES-BASED ARCHITECTURE
2y 5m to grant Granted Mar 24, 2026
Patent 12585437
SYSTEM AND METHOD FOR A MACHINE LEARNING SOURCE CODE GENERATION VIA A HOLOCHAIN NETWORK
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+25.2%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 833 resolved cases by this examiner. Grant probability derived from career allow rate.
