Prosecution Insights
Last updated: April 19, 2026
Application No. 18/606,201

VALIDATING REUSABLE CODE FOR A LANGUAGE MODEL-BASED NETWORK AGENT

Non-Final OA: §101, §103, §112

Filed: Mar 15, 2024
Examiner: TRAN, TRAVIS VIET
Art Unit: 2191
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Cisco Technology Inc.
OA Round: 1 (Non-Final)

Grant Probability: 93% (Favorable)
OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 93%, above average (13 granted / 14 resolved; +37.9% vs TC avg)
Interview Lift: +100.0% (allowance among resolved cases with vs. without an interview)
Typical Timeline: 2y 6m avg prosecution; 25 currently pending
Career History: 39 total applications across all art units

Statute-Specific Performance

§101: 25.7% (-14.3% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 20.6% (-19.4% vs TC avg)

TC averages are estimates. Based on career data from 14 resolved cases.
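The headline figures above follow from the raw counts. The dashboard's exact model is not disclosed, so the with/without-interview split below is a hypothetical illustration of how a +100% lift would arise:

```python
# Reproducing the dashboard arithmetic from the raw counts shown above.
granted, resolved = 13, 14
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 92.9%, displayed as 93%

# The +37.9% delta implies a Tech Center average near 55%.
tc_avg = allow_rate - 0.379
print(f"Implied TC average: {tc_avg:.1%}")

# A +100% interview lift means cases with an interview allow at twice the
# rate of cases without one; the 0.5 / 1.0 split here is hypothetical.
without_rate, with_rate = 0.5, 1.0
lift = (with_rate - without_rate) / without_rate
print(f"Interview lift: {lift:+.0%}")           # +100%
```

Note that 13/14 rounds to 93%, matching the displayed career allow rate.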

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

This Office Action is in response to the claims filed 3/15/2024. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 2, 11, and 20 are objected to because of the following informalities:

Claims 1, 11, and 20 recite the limitation "to assess whether it is able to perform actions of the particular type". It is unclear whether the term "it" refers to "the code" or something else entirely. For the purposes of compact prosecution, Examiner will interpret the claims as follows: "to assess whether the code is able to perform actions of the particular type".

Claims 2 and 20 recite the limitation "by storing it in a database of validated code". It is unclear whether the term "it" refers to "the code" or something else entirely. For the purposes of compact prosecution, Examiner will interpret the claims as follows: "by storing the code in a database of validated code".

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 10 recites the limitation "wherein language model-based agent" in line 1. It is unclear whether "language model-based agent" refers to the aforementioned "language model-based agent" in claim 1 or to something else entirely. Therefore, the claim is rendered vague and indefinite. For the purpose of compact prosecution, Examiner interprets the claim as follows: "wherein the language model-based agent".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claims 1, 11, and 20 as drafted recite a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind with the aid of pen and paper, but for the recitation of generic computer/computing components. Claims 1, 11, and 20 recite the limitations:

- determining…one or more parameters to execute the code in a testing environment;
- performing…a validation assessment of the code to assess whether it is able to perform actions of the particular type by executing it with the one or more parameters in the testing environment; and
- making,…and based on the validation assessment, the code available to the language model-based agent to perform a subsequent action of the particular type,

which can be done by a human mind carrying out these functions through observation, evaluation, judgment, and/or opinion with the aid of pen and paper. Thus, these limitations fall under the "Mental Processes" grouping of abstract ideas.

The judicial exception is not integrated into a practical application. Claim 1 recites the additional elements: a device; a language model-based agent; a computer network. Claim 11 recites the additional elements: one or more network interfaces; a processor coupled to the one or more network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor. Claim 20 recites the additional element: a tangible, non-transitory, computer-readable medium storing program instructions that cause a device to execute a process.

These additional elements are recited at a high level of generality such that they amount to no more than mere computer/computing components to apply the abstract idea (see MPEP 2106.05(f)). Furthermore, the claims recite the additional element "obtaining…code generated by a language model-based agent to perform an action of a particular type with respect to a computer network", which, under its broadest reasonable interpretation, is directed to the insignificant extra-solution activity of mere data transmission (see MPEP 2106.05(g)).
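Taken together, the recited limitations describe a sandbox-validation loop for agent-generated code. A minimal sketch of such a flow, with hypothetical names not drawn from the application itself (the exec-based "testing environment" here is illustrative, not real sandboxing):

```python
# Hypothetical sketch of the recited flow: obtain agent-generated code,
# determine parameters, run a validation assessment in a testing
# environment, and make validated code available for subsequent actions.
validated_code = {}  # stands in for a database of validated code

def validate_and_publish(action_type: str, code: str, params: tuple) -> bool:
    namespace = {}
    try:
        exec(code, namespace)            # load the generated function
        namespace[action_type](*params)  # execute with the parameters
    except Exception:
        return False                     # validation assessment failed
    validated_code[action_type] = code   # make available for reuse
    return True

# Usage: a trivial "action" the agent might have generated.
generated = "def check_reachability(host):\n    return {'host': host, 'ok': True}"
ok = validate_and_publish("check_reachability", generated, ("10.0.0.1",))
```

Code that raises on load or execution is simply not published, which is the gatekeeping behavior the claims recite.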
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits upon practicing the abstract idea.

Nor do the additional elements amount to significantly more than the abstract idea. The elements identified above are recited at a high level of generality such that they amount to no more than mere computer/computing components to apply the abstract idea (see MPEP 2106.05(f)), and the step of "obtaining…code generated by a language model-based agent to perform an action of a particular type with respect to a computer network" has been determined to be a well-known, routine, and/or conventional activity of receiving or transmitting data over a network (see MPEP 2106.05(d)(II)). Accordingly, the additional elements cannot provide an inventive concept nor amount to significantly more. Thus, the claims are not patent eligible.

Claims 2 and 12 recite the additional element "by storing it in a database of validated code accessible by the language model-based agent", which, under its broadest reasonable interpretation, is directed to the insignificant extra-solution activity of mere data gathering (see MPEP 2106.05(g)) and has been determined to be a well-known, routine, and/or conventional activity of electronic recordkeeping (see MPEP 2106.05(d)(II)). The element therefore neither integrates the abstract idea into a practical application nor provides an inventive concept. Thus, the claims are not patent eligible.

Claims 3 and 13 recite the limitation "wherein the device performs the validation assessment when the code is not similar to previous code validated for use…", which, under its broadest reasonable interpretation, can be performed by the human mind through observation, evaluation, judgment, and/or opinion with the aid of pen and paper, and thus falls under the "Mental Processes" grouping of abstract ideas. The additional element "by the language model-based agent" is recited at a high level of generality such that it amounts to no more than a generic computer/computing component to apply the abstract idea (see MPEP 2106.05(f)); it neither integrates the abstract idea into a practical application nor provides an inventive concept. Thus, the claims are not patent eligible.

Claims 4 and 14 recite the limitation "wherein the particular type of the action corresponds to one or more troubleshooting steps for a particular issue in the computer network", which, under its broadest reasonable interpretation, can reasonably be performed by the human mind through observation, evaluation, judgment, and/or opinion with the aid of pen and paper, and thus falls under the "Mental Processes" grouping of abstract ideas.

Claims 5 and 15 recite the limitation "using…an output of the code from its execution in the testing environment as input to additional code configured to perform a reverse of the action of the particular type; and determining…whether an output of the additional code matches the one or more parameters", which, under its broadest reasonable interpretation, can reasonably be performed by the human mind through observation, evaluation, judgment, and/or opinion with the aid of pen and paper, and thus falls under the "Mental Processes" grouping of abstract ideas. The additional element "by the device" is recited at a high level of generality such that it amounts to no more than a generic computer/computing component to apply the abstract idea (see MPEP 2106.05(f)); it neither integrates the abstract idea into a practical application nor provides an inventive concept. Thus, the claims are not patent eligible.

Claims 6 and 16 recite the limitation "…to perform the subsequent action in lieu of generating new code to do so", which, under its broadest reasonable interpretation, can reasonably be performed by the human mind through observation, evaluation, judgment, and/or opinion with the aid of pen and paper, and thus falls under the "Mental Processes" grouping of abstract ideas. The additional element "wherein the language model-based agent uses the code" is recited at a high level of generality such that it amounts to no more than a generic computer/computing component to apply the abstract idea (see MPEP 2106.05(f)); it neither integrates the abstract idea into a practical application nor provides an inventive concept. Thus, the claims are not patent eligible.

Claims 7 and 17 recite the additional elements "the device" and "a user interface", which are recited at a high level of generality such that they amount to no more than mere generic computer/computing components to apply the abstract idea (see MPEP 2106.05(f)). The claims further recite "providing … performance metrics regarding use of the code by the language model-based agent to … for review by a user", which, under its broadest reasonable interpretation, is directed to the insignificant extra-solution activity of mere data transmission (see MPEP 2106.05(g)) and has been determined to be a well-known, routine, and/or conventional activity of receiving or transmitting data over a network (see MPEP 2106.05(d)(II)). These elements neither integrate the abstract idea into a practical application nor provide an inventive concept. Thus, the claims are not patent eligible.

Claims 8 and 18 recite the limitation "performs the validation assessment in accordance with one or more constraints", which, under its broadest reasonable interpretation, can reasonably be performed by the human mind through observation, evaluation, judgment, and/or opinion with the aid of pen and paper, and thus falls under the "Mental Processes" grouping of abstract ideas. The additional elements "the device" and "a user interface" are recited at a high level of generality such that they amount to no more than mere generic computer/computing components to apply the abstract idea (see MPEP 2106.05(f)); they neither integrate the abstract idea into a practical application nor provide an inventive concept. Thus, the claims are not patent eligible.

Claims 9 and 19 recite "wherein the code when executed accesses an application programming interface of a network controller", which, under its broadest reasonable interpretation, is directed to the insignificant extra-solution activity of mere data transmission (see MPEP 2106.05(g)) and has been determined to be a well-known, routine, and/or conventional activity of receiving or transmitting data over a network (see MPEP 2106.05(d)(II)). The element neither integrates the abstract idea into a practical application nor provides an inventive concept. Thus, the claims are not patent eligible.

Claim 10 recites "wherein language model-based agent uses a large language model (LLM) to generate the code", which is recited at a high level of generality such that it amounts to no more than a mere generic computer/computing component to apply the abstract idea (see MPEP 2106.05(f)). The element neither integrates the abstract idea into a practical application nor provides an inventive concept. Thus, the claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4, 8-11, 14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20250007988 A1 (hereinafter "Letort") in view of US 20180113799 A1 (hereinafter "M.V.") and further in view of US 20240028312 A1 (hereinafter "Gillman").

With regards to claim 1, Letort teaches a method comprising: obtaining, by a device, code generated by a language model-based agent to perform an action of a particular type with respect to a computer network (Letort [0089-91], "Upon receiving the propagated data sets, the other system components may then utilize the same in order to 'compute' or perform one or more system services, operations, functions, etc. For example, propagated data sets may be utilized by the AI layer for training and/or inferencing (e.g., deploying one or more machine learning models to generate actionable output, suggestions, predictions, etc.) [generated by a language model-based agent] … Examples of such additional operations may include (without limitation) transmitting the output and/or results to client device(s) for rendering and/or display thereon, storing the output and/or results in a dedicated data repository, generating and transmitting notices, alerts and other communications to client device(s), triggering orchestration tasks, re-routing data transmissions within an SDN, transmitting to other system layers for processing (e.g., transmitting to AI layer for re-training machine learning models), initiating predictive maintenance routines, generating a digital twin of an SDN (e.g., to assess network performance and/or optimization scenarios), assessing energy efficiency optimization and forecasting, etc. … The AI layer may include one or more AI/ML modeling engines (collectively referred to as 'AI engine' or 'AI modeling engine') configured to generate, train, validate, test and/or deploy one or more combinations of AI/ML models or algorithms (collectively referred to as 'AI models'). The AI modeling engine, via the AI layer, may be operatively coupled to one or more components of the system, such as the SDN automation engine [with respect to a computer network], user devices, etc., and configured to receive, store and analyze data therefrom and in turn, generate instructions to cause the components of the system to initiate and execute one or more actions [code…to perform an action of a particular type]. In some embodiments, the AI modeling engine may also be configured to continually refine its AI models based on, for example, policies, user sentiment, network analytics, and so on.")

[Examiner's Note: An SDN is a Software Defined Network. Instructions generated by an AI model can be interpreted as code to configure the SDN in response to a specific action.]

Letort does not teach: determining, by the device, one or more parameters to execute the code in a testing environment; performing, by the device, a validation assessment of the code to assess whether it is able to perform actions of the particular type by executing it with the one or more parameters in the testing environment.

However, in an analogous art, M.V. teaches determining, by the device, one or more parameters (M.V. [0047], "The JAR file may be used to extract all the classes and functions of the API. The source code may be parsed using program splicing techniques to identify the code flow, identify the atomicity of the variables, and derive an approximate order of function execution. This may facilitate an understanding of the call flow for the application, including the starting points and control flows for the application's data. Based on the JAR and source code analysis, a partial model for the application may be generated. The partial model, for example, may be a file identifying each function call and providing certain 'annotations' or 'comments' specifying additional information about each function (e.g., the annotations of FIGS. 3B and 3C), as explained further throughout this disclosure [determining, by the device, one or more parameters].")

M.V. further teaches: to execute the code in a testing environment; performing, by the device, a validation assessment of the code to assess whether it is able to perform actions of the particular type by executing it with the one or more parameters in the testing environment (M.V. [0091-92], "Analogous to shipping containers, software containers may package a particular software component with all of its dependencies to ensure that it runs the same in any environment or infrastructure, out-of-the-box. For example, a software container may package everything required to run a particular software component, such as the code, software libraries, APIs, configuration, files, runtime environment, and any other associated tools or applications [to execute the code]. Software containers enable applications to be migrated across various infrastructures and environments without any modifications or environment-specific configurations [executing it with the one or more parameters]. For example, applications can be migrated to or from local workstations, development servers, test environments [in the testing environment], and/or production environments. Software containers also enable applications to be developed using the best programming languages and tools for each application, without any internal conflicts from the requirements of different applications. Many inefficiencies of software development and deployment are eliminated with software containers, such as time spent configuring development and production environments, concerns about inconsistencies between development and production environments, and so forth. Software containers also avoid locking developers into any particular platform, software technology, and/or vendor."; M.V. [0106-107], "For example, the partial model may identify the classes and functions of the API or application, along with the function control flow, parent functions, function input criteria, and/or function validation criteria. Accordingly, a model graph for the application may then be generated using the partial model. For example, the model graph may include nodes representing each function, with edges between nodes to represent the control-flow of the functions. The model graph may also identify the input criteria and validation criteria for each function. The flowchart may then proceed to block 710 to test the application using the model graph. For example, the model graph may enable model-based testing of the application, which may involve automated testing of the application using test cases generated from the model graph [performing, by the device, a validation assessment of the code]. For example, the model graph may be used to generate inputs to the functions (e.g., based on the function input criteria), execute the functions using the generated inputs and according to the identified control-flow, and determine whether the output of each function is valid (e.g., using the output validation criteria) [to assess whether it is able to perform actions of the particular type].")

[Examiner's Note: By testing the code with validation criteria that corresponds with input criteria, the passage defines a validation assessment. If a tested function meets the validation criteria, then the application or API would be able to perform an action of its specified type.]

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of M.V. into the teachings of Letort.
This combination of teachings would have resulted in a method configured to generate network configuration code from a language model for specified operations, as in Letort, while using the generated code to facilitate testing and validation in a designated environment, as in M.V. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of automating the generation of various tests under a test model that covers several use cases and testing environments (M.V. [0044]). The combination of Letort and M.V. does not teach: making, by the device and based on the validation assessment, the code available to the language model-based agent to perform a subsequent action of the particular type. However, in an analogous art Gillman teaches making, by the device and based on the validation assessment, the code available to the language model-based agent to perform a subsequent action of the particular type. (Gillman [0080], “the methods and systems described herein may combine the functionality of a machine learning engine that can encode and generate machine learning models with characteristics selected based on characteristics of a user-specified task [to perform a subsequent action of the particular type] and/or user-specified data with the functionality of a large learning model trained to generate executable code for use in performing additional tasks on the output of the generated machine learning models and with the functionality of the machine learning engine to evaluate and validate the generated code and then execute the generated code. 
A machine learning engine with functionality for interacting with a large language model to generate and execute validated computer code to perform user-specified task described in a natural language (in contrast to, for example, a computer language) provides a technical improvement over conventional systems [making, by the device and based on the validation assessment, the code available to the language model-based agent].”) [Examiner’s Note: if the particular task/action is called upon the LLM will execute previously validated software that has been provided when outputs are evaluated as in Gillman] Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Gillman into the teachings of Letort in view of M.V. This combination of teachings would have resulted in a method configured to generate network configuration code from a language model for specified operations, as in Letort, while using the generated code to facilitate testing and validation for further LLM use, as in M.V., and providing the validated code for future LLM code generation use, as in Gillman. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of analyzing user-specified data in order to identify useful features required in task completion for influencing future machine learning prediction accuracy (Gillman [0029]). With regards to claim 4, the rejection of claim 1 is incorporated. Letort further teaches wherein the particular type of the action corresponds to one or more troubleshooting steps for a particular issue in the computer network. 
(Letort [0087-88], “The types of data and information collected from these external sources may include, for example, customer order data, customer billing data, user relationship and interaction data, images, user personal identifiable information (PII) (e.g., customer name, location, employer, etc.) and other network authoritative data, configuration data relating to hardware, software, systems, facilities, personnel, etc. such as cloud onramp configuration information [for a particular issue in the computer network], routing tables (e.g., locations, latency between locations, price of connection, etc.), as well as unstructured data (e.g., provided via user devices or external systems) such as user-created documents, vendor scripting documentation, training information (e.g., service descriptions, training videos, presentations, etc.), user guides, network service requests, resolution/troubleshooting guides [to one or more troubleshooting steps] and documentation, communication service provider (CSP) onramp information, user configuration data, telemetry data, and so on. The external data may also include user feedback, responses, sentiments (e.g., positive or negative), input, etc. collected, for example, via user devices… The operations may include any number of pre-processing functions such as, for example, labeling, annotating, filtering, formatting, normalizing, cleansing (e.g., removing noise, discarding corrupt, duplicate or incomplete data, etc.), scaling, resolving missing data values, performing “ETL” operations (i.e., extracting, transforming and/or loading the data from any number of sources to a data warehouse or other unified data repository), and so on, to prepare the data for use by other components or services of the system. In some aspects, the pre-processing may include monitoring the data collection layer and/or any of the pre-processing functions (e.g., labeling, annotating, etc.) [wherein the particular type of the action corresponds] to confirm compliance with business rules and metrics, to generate reports (e.g., notating non-compliance), to generate alerts (e.g., responsive to interruptions to data streams or other processing issues), etc.”)

With regards to claim 8, the rejection of claim 1 is incorporated. Letort further teaches wherein the device performs the validation assessment in accordance with one or more constraints specified via a user interface. (Letort [0123-125], “As noted above, the portal 1206 may interact with the API 1205 to provide an interface for the user 1203 to configure a customized SDN. The portal 1206 may also be configured to provide an interactive GUI 1206a through which the user 1203 may submit information to configure the customized SDN. It is through this interactive GUI 1206a that the user 1203 may access and interact with the chat-bot generated by the AI layer 1220 … In some embodiments, the policies may be user-defined, system-defined or a combination thereof. For example, a user-defined policy may be based on user-defined parameters, such that adjustments to model weights may be initiated to comply with the user-defined policy. In another example, the system may infer a policy based on prior user interactions or tendencies. Output generated by the AL models (adjusted according to optimization policies) may then be used in topology discovery, routing calculations, bandwidth allocation, intelligent allocation of services, service re-routing, service distribution predictions, fault predictions, resource adjustments, etc.”)

With regards to claim 9, the rejection of claim 1 is incorporated. Letort further teaches wherein the code when executed accesses an application programming interface of a network controller. (Letort [0074], “FIG. 10 schematically illustrates an example of the schematics of a software-defined network (SDN) automation engine.
The SDN automation engine facilitates a customer to create a software-defined network of virtual network devices (e.g., one or more virtual controllers 800 and/or one or more virtual gateways 900) on the physical infrastructure of system 300. In some implementations, the SDN automation engine 1001 makes a Representational State Transfer (REST) Application Programming Interface (API) 1002 available to a customer's application to configure a software-defined network. In some implementations, the SDN automation engine 1001 may provide a portal 1004 for a customer 1003 to configure the software-defined network. In an embodiment, the portal 1004 communicates with the SDN automation engine via the REST API 1002. When a customer configures the software-defined network, the SDN automation engine may interact with SDN resource managers 1005a, 1005b, 1005c, . . . . SDN resource managers 1005a, 1005b, 1005c, . . . run as a process on an operating system installed on the “bare-metal” hardware resources 1007a, 1007b, 1007c, . . . of a sub-zone. The sub-zone may be within a Cloud Exchange or a Cloud PoP. SDN resource managers 1005a, 1005b, 1005c, . . . (see also, e.g., FIGS. 8 and 9) instantiate and/or deploy virtual network devices 1006a”)

With regards to claim 10, the rejection of claim 1 is incorporated. Letort further teaches wherein the language model-based agent uses a large language model (LLM) to generate the code. (Letort [0100-0101], “To do this, the conversational layer may invoke natural language processing (NLP) to interpret the input, and a converter to convert the interpreted input into the one or more commands. In some embodiments, the one or more commands may include network device commands, orchestration layer commands, commands for external (e.g., third party) systems or tools, and the like. In some embodiments, the one or more commands may be used to gather data or initiate changes, whether based on user/customer input or automatically by orchestration features of the system. The NLP may itself comprise executing one or more of the ensemble LLMs discussed above, for example … For example, a command to return and display information to which the conversational layer has access may be processed directly by the conversational layer. On the other hand, a command for creating or modifying a software defined network (SDN) may be transmitted to the system's SDN automation engine, for example, for further processing and execution. Notably, while the conversational layer may be utilized to invoke certain actions, such as generating notices, triggering orchestration, running scripts, etc., execution of such actions may occur within other system layers (e.g., the AI layer).”)

Claims 11, 14, and 18-19 are directed to an apparatus, comprising: one or more network interfaces; (Letort FIG. 4) a processor coupled to the one or more network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor (Letort FIG. 8) corresponding to the method limitations as disclosed in claims 1, 4, and 8-9 respectively. Thus, claims 11, 14, and 18-19 are rejected for the same reasons set forth in claims 1, 4, and 8-9.

Claim 20 is directed to a tangible, non-transitory, computer-readable medium storing program instructions corresponding to the method limitations as disclosed in claim 1. Thus, claim 20 is rejected for the same reasons set forth in claim 1.

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Letort in view of M.V. in view of Gillman, as applied to claims 1 and 11 respectively, and further in view of US 20240338306 A1 hereinafter “Waxman”.

With regards to claim 2, the rejection of claim 1 is incorporated.
The combination of Letort, M.V., and Gillman teaches wherein the device makes the code available to the language model-based agent but does not teach: [wherein the device makes the code available to the language model-based agent] by storing it in a database of validated code accessible by the language model-based agent.

However, in an analogous art Waxman teaches […] by storing it in a database of validated code accessible by the language model-based agent. (Waxman [0043], “At 312, the source code of the API is validated using the test(s). For example, a test script may be automatically executed to verify each function of the API and the compatibility of the API with one or more selected environments … Each newly solved error can be stored in a database [by storing it in a database of validated code] for future use by the machine learning model to enable future automatic solution implementations [accessible by the language model-based agent].”)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Waxman into the teachings of Letort in view of M.V. and further in view of Gillman. This combination of teachings would have resulted in a method configured to generate network configuration code from a language model for specified operations, as in Letort, while using the generated code to facilitate testing and validation for further LLM use, as in M.V., and providing the validated code for future LLM code generation use, as in Gillman, by storing the generated and validated code into a database for future reference, as in Waxman. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of providing a deployment mechanism that can store API versions and transmittal of validated APIs specified to particular environments (Waxman [0029]).
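For orientation only, the validate-store-reuse flow that the rejection maps across the references (validate generated code, store it in a database of validated code, and reuse it for a subsequent action of the same type in lieu of generating new code) can be sketched as follows. This is a minimal illustration; all names (`ValidatedCodeStore`, `handle_action`, the `"vlan-config"` action type) are hypothetical and are not code from the application or from any cited reference:

```python
class ValidatedCodeStore:
    """Sketch of a database of validated code, keyed by action type."""

    def __init__(self):
        self._store = {}  # action_type -> list of validated code strings

    def add(self, action_type, code):
        # Store code only after it has passed the validation assessment.
        self._store.setdefault(action_type, []).append(code)

    def lookup(self, action_type):
        # Return previously validated code for this action type, if any.
        entries = self._store.get(action_type, [])
        return entries[-1] if entries else None


def handle_action(store, action_type, generate_code, validate):
    """Reuse validated code when available; otherwise generate and validate."""
    code = store.lookup(action_type)
    if code is not None:
        return code  # subsequent action of the particular type: reuse as-is
    code = generate_code(action_type)  # e.g., a call out to the language model
    if validate(code):                 # the validation assessment
        store.add(action_type, code)   # make it available for future reuse
    return code
```

On this sketch, the second request for the same action type is served from the store without invoking `generate_code` again, which is the reuse behavior the rejection attributes to the combination.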
Claim 12 is directed to an apparatus corresponding to the method limitations as disclosed in claim 2. Thus, claim 12 is rejected for the same reasons set forth in claim 2.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Letort in view of M.V. in view of Gillman, as applied to claims 1 and 11 respectively, and further in view of US 20200371902 A1 hereinafter “Mangione-Tran”.

With regards to claim 3, the rejection of claim 1 is incorporated. The combination of Letort, M.V., and Gillman teaches the validation assessment but does not teach: wherein the device performs the validation assessment when the code is not similar to previous code validated for use by the language model-based agent.

However, in an analogous art Mangione-Tran teaches wherein the device performs the validation assessment when the code is not similar to previous code validated for use by the language model-based agent. (Mangione-Tran [0054-56], “The illustrated process 450 also involves the TCP server 352 identifying (block 460) a set of software files 462 that have been modified since the previous software testing. In some embodiments, identifying the set of modified software files 462 may include comparing two versions of a software file, such as a recent version of the software file and a previously validated version of the software file [previous code validated for use]. In certain embodiments, to determine whether a software file has been modified, a comparison may be made between the value of the last modified field 382 of a software file listed in the software table 372 and the value of the timestamp field 396 of related test results in the test results table 376. For the embodiment illustrated in FIG. 5, identifying the set of modified software files may involve creating a new record in the modified software table 402, and for each software file that is determined to have been modified since the previous preflight verification, a Boolean value of true may be stored in the corresponding software changed field (e.g., software ID#1 changed field 412, software ID#2 changed field 414, etc.) of the modified software table 402 [when the code is not similar]… In certain embodiments, correlating may involve the use of a ML component, such as an ANN, a SVM, a restricted Boltzmann machine, Bayesian networks, a genetic algorithms, or another suitable ML component. For the embodiment illustrated in FIG. 5, correlating involves populating the test case cluster ID field 416 for the record of the modified software table 402 created in block 460, which establishes a relationship between certain clusters of test cases and certain groups of modified software files [by the language model-based agent] … Additionally, prior to a software merge, the software developer may desire extensive software testing that provides a high level of confidence that the software modifications during development did not introduce any regressions.”)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Mangione-Tran into the teachings of Letort in view of M.V. and further in view of Gillman. This combination of teachings would have resulted in a method configured to generate network configuration code from a language model for specified operations, as in Letort, while using the generated code to facilitate testing and validation for further LLM use, as in M.V., and providing the validated code for future LLM code generation use, as in Gillman, and evaluating the software with the language model for validation upon determination of differences, as in Mangione-Tran.
One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of prioritizing test cases when there is a higher likelihood of regression as a result of modified software files (Mangione-Tran [0027]).

Claim 13 is directed to an apparatus corresponding to the method limitations as disclosed in claim 3. Thus, claim 13 is rejected for the same reasons set forth in claim 3.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Letort in view of M.V. in view of Gillman, as applied to claims 1 and 11 respectively, and further in view of US 20250238629 A1 hereinafter “Malkiel”.

With regards to claim 5, the rejection of claim 1 is incorporated. The combination of Letort, M.V., and Gillman teaches the validation assessment but does not teach: [wherein performing the validation assessment comprises:] using, by the device, an output of the code from its execution in the testing environment as input to additional code configured to perform a reverse of the action of the particular type; and determining, by the device, whether an output of the additional code matches the one or more parameters.

However, in an analogous art Malkiel teaches […] using, by the device, an output of the code from its execution in the testing environment as input to additional code configured to perform a reverse of the action of the particular type; and determining, by the device, whether an output of the additional code matches the one or more parameters. (Malkiel [0291-293], “204 language model hallucination detection functionality (also referred to as hallucination detection functionality 204 or functionality 204), e.g., software or specialized hardware which performs or is configured to perform steps 304 forward, 304 backward [using, by the device, an output of the code from its execution in the testing environment as input to additional code configured to perform a reverse of the action of the particular type], 306, and 312, or steps 802, 804, 806, 808, 902, 904, and 812, or steps 802, 804, 806, 808, 810, and 812, or any software or hardware which performs or is configured to perform a novel method 1000 or a computational machine learning model 132 hallucination detection functionality activity first disclosed herein 206 computationally detect hallucinated content in output 136 of a machine learning model 132, e.g., by performing at least a forward traversal, a backward traversal (a.k.a. back traversal), and comparing a distance based on question vectors resulting from the back traversal to a threshold 208 hallucinated content in output 136 of a machine learning model 132, i.e., content which is contrary to validated retrieved context 448, or is internally logically inconsistent with itself, or states as a fact the existence of an event or item or being which does not actually exist (or did not exist at the relevant time), or is deemed false by a court or other legal authority or by society generally or by peer-reviewed scientific consensus, or is described as false or misleading by a majority of the parties who rely on it; also referred to as hallucination or fabrication or as content that is false, that lacks truth, that lacks veracity, or lacks fidelity [and determining, by the device, whether an output of the additional code matches the one or more parameters]”) [Examiner’s Note: Upon backwards traversal Malkiel teaches a detection of hallucinated/invalid content if the code does NOT match the one or more parameters]

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Malkiel into the teachings of Letort in view of M.V. and further in view of Gillman. This combination of teachings would have resulted in a method configured to generate network configuration code from a language model for specified operations, as in Letort, while using the generated code to facilitate testing and validation for further LLM use, as in M.V., and providing the validated code for future LLM code generation use, as in Gillman, and executing a backwards traversal to confirm the logic/output for validation detection, as in Malkiel. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of providing hallucination detection technology based on internal testing using backwards traversal in response to a prompt to do so (Malkiel [0032]).

Claim 15 is directed to an apparatus corresponding to the method limitations as disclosed in claim 5. Thus, claim 15 is rejected for the same reasons set forth in claim 5.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Letort in view of M.V. in view of Gillman, as applied to claims 1 and 11 respectively, and further in view of US 20250138502 A1 hereinafter “Patel”.

With regards to claim 6, the rejection of claim 1 is incorporated. The combination of Letort, M.V., and Gillman does not teach: wherein the language model-based agent uses the code to perform the subsequent action in lieu of generating new code to do so.

However, in an analogous art Patel teaches wherein the language model-based agent uses the code to perform the subsequent action in lieu of generating new code to do so.
(Patel [0076], “The generative AI component model 226 can reference these customer-specific libraries in connection with generating control code recommendations (or recommendations for control code edits) so that all recommended control code conforms to the customer's in-house standards in terms of control program formatting, program documentation standards, variable naming conventions, AOIs or instructions used, UDTs, etc. The generative AI component 210 can also reuse prewritten code stored in these libraries 306 where appropriate to satisfy the functional requirements specified by the user's prompt.”)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Patel into the teachings of Letort in view of M.V. and further in view of Gillman. This combination of teachings would have resulted in a method configured to generate network configuration code from a language model for specified operations, as in Letort, while using the generated code to facilitate testing and validation for further LLM use, as in M.V., and providing the validated code for future LLM code generation use, as in Gillman, and reusing validated software instead of generating new software to meet user specifications, as in Patel. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of ascertaining the requirements of the system in order to generate appropriate responses accordingly from an AI component (Patel [0044]).

Claim 16 is directed to an apparatus corresponding to the method limitations as disclosed in claim 6. Thus, claim 16 is rejected for the same reasons set forth in claim 6.

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Letort in view of M.V. in view of Gillman, as applied to claims 1 and 11 respectively, and further in view of US 12056255 B1 hereinafter “Mystetskyi”.

With regards to claim 7, the rejection of claim 1 is incorporated. The combination of Letort, M.V., and Gillman does not teach: providing, by the device, performance metrics regarding use of the code by the language model-based agent to a user interface for review by a user.

However, in an analogous art Mystetskyi teaches providing, by the device, performance metrics regarding use of the code by the language model-based agent to a user interface for review by a user. (Mystetskyi Column 38 Lines 1-10, “In some disclosed embodiments, the intermediate platform is configured to validate code modifications. Code modifications may include updates and/or patches for a code (e.g., associated with performance, security, privacy, compatibility, and/or any other interest for changing a code), added and/or removed features, changes (e.g., improvements) to a user interface, and/or any other change and/or revision made to a code. For example, code modifications may include code changes that were performed during the wrapping thereof by the intermediate platform. Validating code modifications may include examining, testing, emulating, and/or simulating an altered code (or a portion thereof) to confirm and/or verify that changes made to the code avoid violating one or more rules included in a configuration for a serverless environment.”)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Mystetskyi into the teachings of Letort in view of M.V., and further in view of Gillman.
This combination of teachings would have resulted in a method configured to generate network configuration code from a language model for specified operations, as in Letort, while using the generated code to facilitate testing and validation for further LLM use, as in M.V., and providing the validated code for future LLM code generation use, as in Gillman, and providing performance metrics to an interface for user review, as in Mystetskyi. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of comparing estimated outputs to their trained desired outputs to evaluate and train a machine learning algorithm to the user’s desired parameters (Mystetskyi Column 10 Lines 25-45).

Claim 17 is directed to an apparatus corresponding to the method limitations as disclosed in claim 7. Thus, claim 17 is rejected for the same reasons set forth in claim 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRAVIS VIET TRAN whose telephone number is (571)272-3720. The examiner can normally be reached Monday-Friday 8:30AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Mui, can be reached at 571-272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/T.V.T./ Examiner, Art Unit 2191
/WEI Y MUI/ Supervisory Patent Examiner, Art Unit 2191

Prosecution Timeline

Mar 15, 2024
Application Filed
Mar 13, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572351
INTEGRATION OF MACHINE LEARNING MODELS INTO SOFTWARE SYSTEMS USING SOFTWARE LIBRARY
2y 5m to grant Granted Mar 10, 2026
Patent 12541353
DEPLOYING AND UPDATING APPLICATIONS EXECUTED ON CONTROL SYSTEMS CONNECTED TO EDGE COMPUTE MODULES VIA A BACKPLANE
2y 5m to grant Granted Feb 03, 2026
Patent 12528429
ELECTRONIC CONTROL UNIT, VEHICLE CONTROL SYSTEM, AND VEHICLE CONTROL METHOD
2y 5m to grant Granted Jan 20, 2026
Patent 12524329
WEB APPLICATION OBSERVABILITY WITH DISTRIBUTED TRACKING AND CUSTOM HEADER
2y 5m to grant Granted Jan 13, 2026
Patent 12505026
OBJECT HISTORY TRACKING
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
93%
Grant Probability
99%
With Interview (+100.0%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
