Prosecution Insights
Last updated: April 19, 2026
Application No. 18/437,941

PLC PROGRAM GENERATOR/COPILOT USING GENERATIVE AI

Final Rejection — §103, Double Patenting

Filed: Feb 09, 2024
Examiner: SOLTANZADEH, AMIR
Art Unit: 2191
Tech Center: 2100 — Computer Architecture & Software
Assignee: Rockwell Automation Technologies Inc.
OA Round: 2 (Final)

Grant Probability: 81% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 6m
Grant Probability with Interview: 98%

Examiner Intelligence

Grants 81% — above average
Career Allow Rate: 81% (340 granted / 421 resolved; +25.8% vs TC avg)
Interview Lift: +16.9% — strong; allowance among resolved cases with vs. without an interview
Typical Timeline: 2y 6m avg prosecution; 35 applications currently pending
Career History: 456 total applications across all art units
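As a quick arithmetic check, the headline allow rate follows directly from the grant counts above (a minimal sketch; variable names are illustrative, not from the analytics tool):

```python
# Career statistics as reported above
granted = 340
resolved = 421

allow_rate = granted / resolved  # fraction of resolved cases allowed
assert round(allow_rate * 100) == 81  # matches the reported 81% career allow rate
print(f"Career allow rate: {allow_rate:.1%}")  # → Career allow rate: 80.8%
```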

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 60.4% (+20.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 421 resolved cases

Office Action

Rejections: §103 (Obviousness), Nonstatutory Double Patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-2, 4-21 are presented for examination.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-2, 4-21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of copending Application No. 18/991,211 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because:

Comparison of Claims: Claim 1 of the instant application (18/437,941) recites a system comprising a memory, processor, user interface for receiving a natural language prompt specifying industrial control code requirements, and an AI component to generate industrial control code based on the prompt, sample code in a code repository, and text-based documents in a document repository, with embedded functional documentation.
Claim 1 of the reference application (18/991,211) recites a substantially similar system with a user interface for receiving a functional specification document (comprising natural language text), and an AI component to generate documented control code based on analysis of the natural language text, with embedded documentation. The reference claim's "functional specification document comprising natural language text" is an obvious variation of the instant claim's "prompt" as natural language input, as both involve natural language descriptions of control requirements; the generation of "documented control code" with embedded documentation in the reference directly corresponds to the instant's "industrial control code" with embedded documentation; and the reference's analysis using a generative AI model renders obvious the instant's inference-based generation.

Claim 2 of the instant application recites documents in the repository including manuals or specifications. Claim 5 of the reference recites a substantially identical limitation.

Claim 4 of the instant application recites sample control code types. Claim 6 of the reference recites a substantially identical limitation.

Claim 5 of the instant application recites generating code in a specified format (e.g., ladder logic). Claim 8 of the reference recites a substantially identical limitation, with the format specified via user interface input (obvious variation of prompt).

Claim 6 of the instant application recites generating documented control code from a functional specification document. Claim 1 of the reference directly teaches this core limitation, with embedded documentation based on the text.

Claim 7 of the instant application recites generating a text-based functional specification from a control code sample. Claim 7 of the reference recites a substantially identical limitation.

Claim 8 of the instant application recites hyperparameter tuning for multiple code versions and storing selected hyperparameters.
Claim 9 of the reference recites a substantially identical limitation (applied to documented control code).

Claim 9 of the instant application recites iterative hyperparameter adjustment based on selection. Claim 10 of the reference recites a substantially identical limitation.

Claim 10 of the instant application recites UI rendering questions for prompt refinement based on output type. While not identically recited in the reference, this is an obvious variation of the reference's user interface for receiving functional specifications (Claim 1), as interactive refinement of natural language inputs is routine in AI systems to clarify requirements.

Claim 11 of the instant application recites output types (routine or add-on instruction). This is an obvious variation of the reference's formats in Claim 8.

Claims 12-18 of the instant application (method claims) parallel the system claims 1-9 and 11 above, and are not patentably distinct from reference method claims 11-18 for the same reasons. Claims 19-20 of the instant application (medium claims) parallel instant claims 1-2, and are not patentably distinct from reference medium claims 19-20 for the same reasons.

The differences between the claims (e.g., “prompt” vs. “functional specification document” as input; explicit repositories in the instant application vs. included in reference Claim 3) are obvious variations as evidenced by US 20240020096 A1 (priority June 15, 2023), which teaches that a natural-language docstring/functional description serves as the prompt for generative AI code creation (Para [0005]) and that the model is improved by training/retrieval on sample code repositories and text documents (Para [0054]–[0056]).
One of ordinary skill would have found it obvious to apply the docstring-as-prompt technique of US 20240020096 to a functional specification document in the industrial control domain, with the repositories of the reference application providing the predictable benefit of domain-specific accuracy. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 4-7 and 10-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20240020096A1) in view of Noetzelmann (US 20180136910A1) further in view of Achin (US 9489630B2).

Regarding Claim 1, Chen (US 20240020096A1) teaches A system, comprising: a memory that stores executable components and a generative artificial intelligence (AI) model (Para [0044], "The embodiments discussed herein involve or relate to artificial intelligence (AI). AI may involve perceiving, synthesizing, inferring, predicting and/or generating information using computerized tools and techniques (e.g., machine learning).
For example, AI systems may use a combination of hardware and software as a foundation for rapidly performing complex operation to perceive, synthesize, infer, predict, and/or generate information")

and a processor, operatively coupled to the memory, that executes the executable components, the executable components comprising: a user interface component configured to receive, as natural language input, a prompt specifying [industrial control code] design requirements (Para [0076], "receiving a docstring representing natural language text specifying a digital programming result"; Para [0056], "In some embodiments, a method may comprise performing at least one of outputting, via a user interface"; Para [0057], "Outputting, as used herein, may refer to sending, transmitting, producing, or providing")

Examiner Comments: The prior art teaches receiving natural language input (docstring) via a system that implies a user interface for specifying programming requirements.

and an AI component configured to generate, using the generative AI model, [industrial control code] inferred to satisfy the [industrial control code] design requirements based on analysis of the prompt, sample control code stored in a code repository, and text-based documents stored in a document repository, (Para [0076], "generating, using a trained machine-learning model and based on the docstring, one or more computer code samples configured to produce respective candidate results"; Para [0011], "the trained machine learning model may be fine-tuned based on at least one of a public web source or software repository"; Para [0011], "the trained machine learning model may be fine-tuned based on a set of training problems constructed from examples within the at least one public web source or software repository")

Examiner Comments: The prior art describes generating code using a generative model based on natural language prompt analysis, fine-tuned on sample code from repositories and web sources including
text documents.

wherein the AI component is further configured to embed functional documentation in the [industrial control code] based on the prompt, the text-based documents, and the control code samples. (Para [0014], "outputting, via the user interface, a definition of a function, method, class, or module associated with the outputted at least one identified computer code sample")

Examiner Comments: The prior art teaches outputting code with associated functional definitions, which embeds documentation derived from the input prompt and training data including documents and samples.

Chen did not specifically teach industrial control code. However, Noetzelmann (US 20180136910A1) teaches industrial control code (Para [0002], "generating programmable logic controller (PLC) code based on a connectivity model in a multidisciplinary engineering system")

Examiner Comments: The prior art explicitly teaches generation of PLC code, which is industrial control code, using models and rules.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen with Noetzelmann in order to integrate the natural language-based generative AI code generation capabilities of Chen with the specific domain of industrial control and PLC programming as taught by Noetzelmann. This combination would enable engineers in industrial settings to specify control requirements in natural language and automatically generate verified PLC code, thereby reducing programming errors, shortening development cycles, and leveraging multidisciplinary engineering data for more robust automation systems, ultimately improving efficiency and reliability in manufacturing and process control environments where specialized knowledge of PLC languages is often required (Noetzelmann [Summary], [0022]).
Chen and Noetzelmann did not specifically teach store the industrial control code in the code repository in association with the prompt, store the functional documentation in the document repository, create a link between the industrial control code in the code repository and the functional documentation in the document repository, and refine a training of the generative AI model based on the link between the industrial control code and the functional documentation.

However, Achin (US 9489630B2) teaches store the industrial control code in the code repository in association with the prompt, (Col 36, lines 24-44, "For each model, exploration engine 110 may store a record of the modeling technique used to generate the model and the state of model after fitting, including coefficient and hyper-parameter values.")

Examiner Comments: The prior art teaches storing the fitted model (code) with a record of the technique and state, associated with the input dataset or problem prompt; this teaches the limitation because the model is stored linked to its generation context, analogous to associating with the prompt.

store the functional documentation in the document repository, (Col 24, lines 47-67, "The deployment engine 140 may construct human-readable rules for tuning the model's parameters based on a representation (e.g., a mathematical representation) of the predictive model, and provide the human-readable rules to the user.")

Examiner Comments: The prior art teaches constructing and providing human-readable rules (functional documentation) which can be stored; this maps as the rules are derived and stored as part of deployment, in a repository-like structure.
create a link between the industrial control code in the code repository and the functional documentation in the document repository, (Col 38, lines 9-26, "The deployment engine may then extract the particular operations for a complete model and encode them using the meta-model.")

Examiner Comments: The prior art teaches extracting and encoding operations to link the model code to its meta-model representation, which includes rules/documentation; this teaches creating a link as the encoding associates the code with its descriptive meta-elements.

and refine a training of the generative AI model based on the link between the industrial control code and the functional documentation. (Col 39, lines 1-30, "Some models may be refreshed (e.g., refitted) by applying the corresponding modeling techniques to the new data and combining the resulting new model with the existing model, while others may be refreshed by applying the corresponding modeling techniques to a combination of original and new data … Alternatively or in addition, new models may be generated exploring the modeling search space, in part or in full, with the new data included in the dataset")

Examiner Comments: The prior art teaches refining/retraining models using new data or performance feedback, based on stored associations/links in metadata; this maps because the refinement uses the linked state and rules to update training.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen and Noetzelmann with Achin in order to incorporate the model storage, linking, and refinement capabilities of Achin into the AI code generation system of the combination.
This would allow for systematic management of generated industrial code and documentation, enabling iterative improvement of the generative AI model through linked feedback, enhancing accuracy and adaptability in complex predictive tasks, as taught by Achin's metadata-driven retraining, thereby facilitating better model evolution in industrial environments where ongoing refinement based on documented associations is crucial for reliability and performance.

Regarding Claim 2, Chen, Noetzelmann and Achin teach The system of claim 1. Chen did not specifically teach wherein the documents stored in the document repository comprises at least one of industrial programming manuals, industrial device manuals, or functional specification documents.

However, Noetzelmann teaches wherein the documents stored in the document repository comprises at least one of industrial programming manuals, industrial device manuals, or functional specification documents. (Para [0028], "The connectivity model provides interfaces and connections between various aspects of the multidisciplinary engineering system to provide engineering data, code scripts, executables, calls and other information that is used to generate PLC code")

Examiner Comments: Noetzelmann describes engineering data including specifications and manuals from multidisciplinary sources, applicable to industrial programming and device manuals.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen with Noetzelmann in order to integrate the natural language-based generative AI code generation capabilities of Chen with the specific domain of industrial control and PLC programming as taught by Noetzelmann.
This combination would enable engineers in industrial settings to specify control requirements in natural language and automatically generate verified PLC code, thereby reducing programming errors, shortening development cycles, and leveraging multidisciplinary engineering data for more robust automation systems, ultimately improving efficiency and reliability in manufacturing and process control environments where specialized knowledge of PLC languages is often required (Noetzelmann [Summary], [0022]).

Regarding Claim 4, Chen, Noetzelmann and Achin teach The system of claim 1, wherein the sample control code stored on the code repository comprises at least one of control code submitted by a device or software vendor, customer-specific control code samples, or add-on instructions. (Chen, Para [0007], "the trained machine learning model may be fine-tuned based on at least one of a public web source or software repository"; Noetzelmann, Para [0044], "code from another code entity can be reused by inserting the PLC code in the target code entity")

Examiner Comments: The prior art references teach fine-tuning on sample code from repositories including reusable vendor or customer-specific code entities.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen with Noetzelmann in order to integrate the natural language-based generative AI code generation capabilities of Chen with the specific domain of industrial control and PLC programming as taught by Noetzelmann.
This combination would enable engineers in industrial settings to specify control requirements in natural language and automatically generate verified PLC code, thereby reducing programming errors, shortening development cycles, and leveraging multidisciplinary engineering data for more robust automation systems, ultimately improving efficiency and reliability in manufacturing and process control environments where specialized knowledge of PLC languages is often required (Noetzelmann [Summary], [0022]).

Regarding Claim 5, Chen, Noetzelmann and Achin teach The system of claim 1, wherein the AI component is configured to generate the industrial control code in a format specified as part of the prompt, and the format is at least one of a ladder logic routine, a structured text routine, or an add-on instruction (Chen, Para [0047], "Practical application examples of the present disclosure may include converting comments into computer code, providing predictive code suggestions based on user comments, auto-filling computer code (e.g., repetitive code, routine coding tasks)").

Regarding Claim 6, Chen, Noetzelmann and Achin teach The system of claim 1, wherein the AI component is further configured to generate, based on analysis using the generative AI model of text contained in a functional specification document stored in the document repository, documented control code capable of performing a control function defined in the functional specification, the documented control code comprising embedded documentation generated based on the text of the functional specification document, and store the documented control code in the code repository (Chen, Para [0018], "generating, using the docstring generation model and based on the received one or more computer code samples, one or more candidate docstrings representing natural language text... outputting...
the at least one identified docstring with the at least a portion of the one or more computer code samples"; Para [0107], "the docstring generation model may be fine-tuned based on at least one of a public web source or software repository"; Para [0056], "storing the at least one identified computer code sample (e.g., locally and/or remotely)")

Examiner Comments: Chen teaches generating and embedding docstrings (documentation) into code based on analysis, with fine-tuning on documents from repositories, though in the reverse direction; the combination applies bidirectionally.

Regarding Claim 7, Chen, Noetzelmann and Achin teach The system of claim 1, wherein the AI component is further configured to generate, based on analysis using the generative AI model of a control code sample stored in the code repository, a text-based functional specification document describing a function of the control code sample, and store the text-based functional specification document in the document repository (Chen, Para [0018], "generating... one or more candidate docstrings... identifying... that provides an intent... outputting... the at least one identified docstring"; Para [0107], "the docstring generation model may be fine-tuned based on at least one of a public web source or software repository"; Para [0056], "storing the at least one identified computer code sample (e.g., locally and/or remotely)")

Examiner Comments: Chen directly teaches generating text-based docstrings (functional specs) from code samples, storing in associated repositories.
Regarding Claim 10, Chen, Noetzelmann and Achin teach The system of claim 1, wherein the user interface component is configured to render a user interface configured to receive the prompt, in response to selection, via interaction with the user interface, of a type of output to be generated, the user interface component renders one or more questions relevant to the type of output, and the prompt comprises answers to the questions submitted via interaction with the user interface. (Chen, Para [0113], "the steps may further be configured for outputting, via a user interface, the identified docstring(s) with one or more associated computer code sample"; Para [0114], "The present disclosure may be used to perform a range of natural language processing tasks related to code. Example tasks include summarization (e.g., generating summaries of code to provide a high-level overview of its functionality), translation (e.g., translating code comments and documentation into multiple languages, which may be useful for developers who are working with international teams or developing applications for users who speak different languages), code documentation (e.g., generating documentation for code, including descriptions of functions, variables, and classes, which may be used to help other developers understand the code and how it works), question-answering (e.g., answering questions about code, such as "What is this function doing?" or "How is this variable used?", which may be useful for developers who are trying to understand code written by others or developers who are working with legacy code), and code completion (e.g., suggesting code completions based on natural language descriptions).")

Regarding Claim 11, Chen, Noetzelmann and Achin teach The system of claim 10, wherein the type of output is at least one of a routine or an add-on instruction.
(Chen, Para [0047], "Practical application examples of the present disclosure may include converting comments into computer code, providing predictive code suggestions based on user comments, auto-filling computer code (e.g., repetitive code, routine coding tasks)").

Claim(s) 8-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20240020096A1) in view of Noetzelmann (US 20180136910A1) and Achin (US 9489630B2) further in view of Rudenko (US 20250004915).

Regarding Claim 8, Chen, Noetzelmann and Achin teach The system of claim 1, [and in response to receipt of a selection of a version of the industrial control code, from among the multiple versions of the industrial control code], store one of the sets of values of the hyperparameters corresponding to the version of the industrial control code in a prompt repository (Chen, Para [0056], "In some embodiments, a method may comprise performing at least one of outputting, via a user interface, the at least one identified computer code sample, compiling the at least one identified computer code sample, transmitting the at least one identified computer code sample to a recipient device, storing the at least one identified computer code sample").

Chen, Noetzelmann and Achin did not specifically teach wherein the AI component is further configured to set multiple sets of values of hyperparameters for the generative AI model, generate, using the generative AI model, multiple versions of the industrial control code using the respective multiple sets of values of hyperparameters for the generative AI model, and in response to receipt of a selection of a version of the industrial control code, from among the multiple versions of the industrial control code, store one of the sets of values of the hyperparameters corresponding to the version of the industrial control code in a prompt repository.
However, Rudenko (US 20250004915) teaches wherein the AI component is further configured to set multiple sets of values of hyperparameters for the generative AI model, generate, using the generative AI model, multiple versions of the industrial control code using the respective multiple sets of values of hyperparameters for the generative AI model, (Para [0110], "the pipeline batch inputs the formed prompts to the code fix model, which is a generative AI model that has been trained to predict a token/text sequence based on the prompt, the predicted token/text sequence being a modification of the flawed code fragment. The code fix model will generate responses according to a configuration or hyperparameter that specifies responses to generate per prompt. Assuming the code fix model has been configured to generate n responses per prompt and the batch includes m prompts, the code fix model will generate m×n responses. The batch of prompts yields generated responses which this description sometimes refers to as a pool of generated candidate patches or pool of candidate patches. The candidate patches are candidates for possibly being selected to be used to fix the flaw"; Para [0065], "the trainer inserts the filtered training examples into the training dataset. The training examples with aggregated quality metrics values that satisfy the inclusion criterion are stored in a repository of training examples")

Examiner Comments: Rudenko directly teaches hyperparameter-controlled generation of multiple candidate versions.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen, Noetzelmann and Achin's teachings with Rudenko's in order to enable performing fine-tuning of the training of the pre-trained large scale language model (LLM) to a particular task in an effective manner by ranking multiple generated responses according to predicted quality measures, allowing the highest ranked generated responses to be selected (Rudenko [Summary]).
Regarding Claim 9, Chen, Noetzelmann, Achin and Rudenko teach The system of claim 8.

Chen, Noetzelmann and Achin did not specifically teach wherein the AI component is further configured to, in response to the receipt of the selection of the version of the industrial control code: set multiple sets of new values of the hyperparameters based on the one of the sets of values of the hyperparameters corresponding to the version of the industrial control code, and generate, using the generative AI model and based on the version of the industrial control code, multiple new versions of the industrial control code using the respective multiple new sets of values of hyperparameters for the generative AI model.

However, Rudenko teaches wherein the AI component is further configured to, in response to the receipt of the selection of the version of the industrial control code: set multiple sets of new values of the hyperparameters based on the one of the sets of values of the hyperparameters corresponding to the version of the industrial control code, and generate, using the generative AI model and based on the version of the industrial control code, multiple new versions of the industrial control code using the respective multiple new sets of values of hyperparameters for the generative AI model (Para [0110], "The code fix model will generate responses according to a configuration or hyperparameter that specifies responses to generate per prompt. Assuming the code fix model has been configured to generate n responses per prompt and the batch includes m prompts, the code fix model will generate m×n responses. The batch of prompts yields generated responses which this description sometimes refers to as a pool of generated candidate patches or pool of candidate patches").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen, Noetzelmann and Achin's teachings with Rudenko's in order to enable performing fine-tuning of the training of the pre-trained large scale language model (LLM) to a particular task in an effective manner by ranking multiple generated responses according to predicted quality measures, allowing the highest ranked generated responses to be selected (Rudenko [Summary]).

Claim(s) 12-13, 15-17 and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20240020096A1) in view of Noetzelmann (US 20180136910A1) and further in view of Dunn (US 20210089278A1) and Achin (US 9489630B2).

Regarding Claim 12, Chen (US 20240020096A1) teaches A method, comprising: receiving, [by an industrial integrated development environment (IDE)] system comprising a processor, a prompt requesting [industrial control code] that performs a specified control function, wherein the prompt is formatted as a natural language input (Para [0076], "receiving a docstring representing natural language text specifying a digital programming result"; Para [0056], "In some embodiments, a method may comprise performing at least one of outputting, via a user interface"; Para [0057], "Outputting, as used herein, may refer to sending, transmitting, producing, or providing"; Para [0086], one or more computer code samples may be received when a user inputs at least a portion of computer code into a prompt field.
In some embodiments, a user may input at least a portion of computer code and natural language text into a prompt field) Examiner Comments: The prior art teaches receiving natural language input (docstring) via a system that implies a user interface for specifying programming requirements and generating, by the industrial IDE system using a generative artificial intelligence (AI) model, the [industrial control code] based on analysis of the prompt, sample control code stored in a code repository, and text-based documents stored in a document repository (Para [0076], "generating, using a trained machine-learning model and based on the docstring, one or more computer code samples configured to produce respective candidate results"; Para [0011], "the trained machine learning model may be fine-tuned based on at least one of a public web source or software repository"; Para [0011], "the trained machine learning model may be fine-tuned based on a set of training problems constructed from examples within the at least one public web source or software repository") Examiner Comments: The prior art describes generating code using a generative model based on natural language prompt analysis, fine-tuned on sample code from repositories and web sources including text documents. Chen did not specifically teach industrial control code or an industrial integrated development environment (IDE). However, Noetzelmann (US 20180136910A1) teaches industrial control code (Para [0002], "generating programmable logic controller (PLC) code based on a connectivity model in a multidisciplinary engineering system") Examiner Comments: The prior art explicitly teaches generation of PLC code, which is industrial control code, using models and rules.
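The prompt-assembly step implied by this combination (a natural-language request augmented with sample control code and text-based documents drawn from repositories) can be illustrated with a minimal sketch; every identifier and example string below is hypothetical, not drawn from Chen or Noetzelmann:

```python
# Hedged sketch of the combination: a natural-language request plus
# retrieved repository material forms the prompt from which a generative
# model would produce PLC code. All names here are illustrative only.

def build_prompt(request: str, code_samples: list[str], docs: list[str]) -> str:
    """Assemble a generation prompt from the user's natural-language
    request, sample control code, and text-based documents."""
    parts = ["# Request:", request, "# Reference code samples:"]
    parts += code_samples
    parts += ["# Reference documents:"] + docs
    return "\n".join(parts)

prompt = build_prompt(
    "Start the conveyor when the photo-eye clears and stop on jam",
    ["IF PhotoEye AND NOT Jam THEN Motor := TRUE; END_IF"],
    ["Device manual: photo-eye output is active-high"],
)
```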
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen with Noetzelmann in order to integrate the natural language-based generative AI code generation capabilities of Chen with the specific domain of industrial control and PLC programming as taught by Noetzelmann. This combination would enable engineers in industrial settings to specify control requirements in natural language and automatically generate verified PLC code, thereby reducing programming errors, shortening development cycles, and leveraging multidisciplinary engineering data for more robust automation systems, ultimately improving efficiency and reliability in manufacturing and process control environments where specialized knowledge of PLC languages is often required (Noetzelmann [Summary], [0022]). Chen and Noetzelmann did not specifically teach an industrial integrated development environment (IDE). However, Dunn (US20210089278A1) teaches an industrial integrated development environment (IDE) (Para [0004], one or more embodiments provide a method for developing industrial applications, comprising rendering, by a system comprising a processor, integrated development environment (IDE) interfaces on a client device). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen and Noetzelmann’s teachings with Dunn’s in order to develop industrial applications by providing an editor definition component configured to receive interface definition data that specifies customization of an integrated development environment interface and to reconfigure the IDE editor to implement the customization on the IDE interface (Dunn [Summary]).
Chen, Noetzelmann and Dunn did not specifically teach generating, by the industrial IDE system, functional documentation for the industrial control code based on the prompt, the sample control code, and the text-based documents; storing, by the industrial IDE system, the industrial control code in the code repository in association with the prompt; storing, by the industrial IDE system, the functional documentation in the document repository; creating, by the industrial IDE system, a link between the industrial control code and the functional documentation; and refining, by the industrial IDE system, a training of the generative AI model based on the link between the industrial control code and the functional documentation. However, Achin (US 9489630B2) teaches generating, by the industrial IDE system, functional documentation for the industrial control code based on the prompt, the sample control code, and the text-based documents (Col 24, lines 47-67, "The deployment engine 140 may construct human-readable rules for tuning the model's parameters based on a representation (e.g., a mathematical representation) of the predictive model, and provide the human-readable rules to the user.") storing, by the industrial IDE system, the industrial control code in the code repository in association with the prompt, (Col 36, lines 24-44, "For each model, exploration engine 110 may store a record of the modeling technique used to generate the model and the state of model after fitting, including coefficient and hyper-parameter values.") Examiner Comments: The prior art teaches storing the fitted model (code) with a record of the technique and state, associated with the input dataset or problem prompt; this teaches the limitation because the model is stored linked to its generation context, analogous to associating with the prompt. 
storing, by the industrial IDE system, the functional documentation in the document repository, (Col 24, lines 47-67, "The deployment engine 140 may construct human-readable rules for tuning the model's parameters based on a representation (e.g., a mathematical representation) of the predictive model, and provide the human-readable rules to the user.") Examiner Comments: The prior art teaches constructing and providing human-readable rules (functional documentation) which can be stored; this maps as the rules are derived and stored as part of deployment, in a repository-like structure. creating, by the industrial IDE system, a link between the industrial control code and the functional documentation, (Col 38, lines 9-26, "The deployment engine may then extract the particular operations for a complete model and encode them using the meta-model.") Examiner Comments: The prior art teaches extracting and encoding operations to link the model code to its meta-model representation, which includes rules/documentation; this teaches creating a link as the encoding associates the code with its descriptive meta-elements. and refining, by the industrial IDE system, a training of the generative AI model based on the link between the industrial control code and the functional documentation. 
(Col 39, lines 1-30, "Some models may be refreshed (e.g., refitted) by applying the corresponding modeling techniques to the new data and combining the resulting new model with the existing model, while others may be refreshed by applying the corresponding modeling techniques to a combination of original and new data … Alternatively or in addition, new models may be generated exploring the modeling search space, in part or in full, with the new data included in the dataset") Examiner Comments: The prior art teaches refining/retraining models using new data or performance feedback, based on stored associations/links in metadata; this maps because the refinement uses the linked state and rules to update training. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen, Noetzelmann, and Dunn with Achin in order to incorporate the model storage, linking, and refinement capabilities of Achin into the AI code generation system of the combination. This would allow for systematic management of generated industrial code and documentation, enabling iterative improvement of the generative AI model through linked feedback, enhancing accuracy and adaptability in complex predictive tasks, as taught by Achin's metadata-driven retraining, thereby facilitating better model evolution in industrial environments where ongoing refinement based on documented associations is crucial for reliability and performance. Regarding Claim 13, Chen, Noetzelmann, Dunn and Achin teach The method of claim 12. Chen did not specifically teach wherein the documents stored in the document repository comprises at least one of industrial programming manuals, industrial device manuals, or functional specification documents. 
However, Noetzelmann teaches wherein the documents stored in the document repository comprises at least one of industrial programming manuals, industrial device manuals, or functional specification documents. (Para [0028], "The connectivity model provides interfaces and connections between various aspects of the multidisciplinary engineering system to provide engineering data, code scripts, executables, calls and other information that is used to generate PLC code") Examiner Comments: Noetzelmann describes engineering data including specifications and manuals from multidisciplinary sources, applicable to industrial programming and device manuals. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen with Noetzelmann in order to integrate the natural language-based generative AI code generation capabilities of Chen with the specific domain of industrial control and PLC programming as taught by Noetzelmann. This combination would enable engineers in industrial settings to specify control requirements in natural language and automatically generate verified PLC code, thereby reducing programming errors, shortening development cycles, and leveraging multidisciplinary engineering data for more robust automation systems, ultimately improving efficiency and reliability in manufacturing and process control environments where specialized knowledge of PLC languages is often required (Noetzelmann [Summary], [0022]). Regarding Claim 15, Chen, Noetzelmann, Dunn and Achin teach The method of claim 12, wherein the receiving comprises receiving, as part of the prompt, an indication of a requested format for the industrial control code (Chen, Para [0086], “receiving one or more computer code samples. Receiving, as used herein, may refer to requesting, accessing, obtaining, acquiring, accepting, identifying, selecting, highlighting, and/or collecting. 
For example, one or more computer code samples may be received when a user highlights (or otherwise selects) at least a portion of computer code, such as by providing an input to a user interface for assessing, executing, and/or modifying code. As another example, one or more computer code samples may be received when a user inputs at least a portion of computer code into a prompt field”) the generating comprises generating the industrial control code in a format specified as part of the prompt, and the format is at least one of a ladder logic routine, a structured text routine, or an add-on instruction (Chen, Para [0047], " Practical application examples of the present disclosure may include converting comments into computer code, providing predictive code suggestions based on user comments, auto-filling computer code (e.g., repetitive code, routine coding tasks)") . Regarding Claim 16, Chen, Noetzelmann, Dunn and Achin teach The method of claim 12, further comprising: translating, by the industrial IDE system using the generative AI model, text of a document contained in the document repository to documented control code capable of performing a control function defined in the document; and storing the documented control code in the code repository, wherein the documented control code comprises embedded documentation generated based on the text of the document (Chen, Para [00114], “The present disclosure may be used to perform a range of natural language processing tasks related to code. 
Example tasks include summarization (e.g., generating summaries of code to provide a high-level overview of its functionality), translation (e.g., translating code comments and documentation into multiple languages, which may be useful for developers who are working with international teams or developing applications for users who speak different languages), code documentation (e.g., generating documentation for code, including descriptions of functions, variables, and classes, which may be used to help other developers understand the code and how it works), question-answering (e.g., answering questions about code, such as “What is this function doing?” or “How is this variable used?”, which may be useful for developers who are trying to understand code written by others or developers who are working with legacy code), and code completion (e.g., suggesting code completions based on natural language descriptions).”; Para [0056], “storing the at least one identified computer code sample (e.g., locally and/or remotely”). Regarding Claim 17, Chen, Noetzelmann, Dunn and Achin teach The method of claim 12, further comprising: translating, by the industrial IDE system using the generative AI model, a control code sample stored in the code repository to a text-based functional specification document describing a function of the control code sample; and storing by the industrial IDE system, the text-based functional specification document in the document repository (Chen, Para [00114], “The present disclosure may be used to perform a range of natural language processing tasks related to code. 
Example tasks include summarization (e.g., generating summaries of code to provide a high-level overview of its functionality), translation (e.g., translating code comments and documentation into multiple languages, which may be useful for developers who are working with international teams or developing applications for users who speak different languages), code documentation (e.g., generating documentation for code, including descriptions of functions, variables, and classes, which may be used to help other developers understand the code and how it works), question-answering (e.g., answering questions about code, such as “What is this function doing?” or “How is this variable used?”, which may be useful for developers who are trying to understand code written by others or developers who are working with legacy code), and code completion (e.g., suggesting code completions based on natural language descriptions).”; Para [0056], “storing the at least one identified computer code sample (e.g., locally and/or remotely)”). Claim 19 is a non-transitory claim corresponding to the method claim above (Claim 12) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 12. Claim 20 is a non-transitory claim corresponding to the method claim above (Claim 13) and, therefore, is rejected for the same reasons set forth in the rejection of Claim 13. Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20240020096A1) in view of Noetzelmann (US 20180136910A1), Dunn (US20210089278A1) and Achin (US 9489630B2) further in view of de Seabra (US7735062B2). Regarding Claim 14, Chen, Noetzelmann, Dunn and Achin teach The method of claim 12.
Chen, Noetzelmann, Dunn and Achin did not teach wherein the AI component is further configured to store the industrial control code in the code repository, to store the functional documentation in the document repository, and to create a link between the industrial control code and the functional documentation. However, de Seabra (US7735062B2) teaches wherein the AI component is further configured to store the industrial control code in the code repository, to store the functional documentation in the document repository, and to create a link between the industrial control code and the functional documentation (Col 3: ln 30-43, In one embodiment, the method includes, prior to the comparing step, the step of storing the modified computer design model in a source control repository. This can be accomplished by one or more of the following steps: creating a new version record in the source control repository; storing an XML file in the source control repository such that it is attached or related to the new version record; extracting the design model interface specifications in XML format; storing the design model interface specification in the source control repository attached to the new version record; extracting a list of other design models to which the design model includes references to; and storing the list of other design models to which the design model includes references to, attached to the new version record) Examiner Comments: This prior art teaches creation of a link by attaching reference lists and XML documentation to the version record of the control code model in the repository, teaching claim 14's requirement for storing code/documentation in repositories and creating a link between them, unlike the previous references, which only implied such associations.
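The store-and-link pattern the examiner reads onto de Seabra (code and documentation stored in their respective repositories with a record tying them together) can be sketched as a minimal data model; the structure below is illustrative only and does not reproduce de Seabra's XML version-record mechanism:

```python
# Minimal sketch of claim 14's store-and-link limitation: store the
# control code, store its documentation, and record a link between the
# two. The data model and identifiers below are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Repositories:
    code: dict = field(default_factory=dict)   # code_id -> code text
    docs: dict = field(default_factory=dict)   # doc_id  -> document text
    links: list = field(default_factory=list)  # (code_id, doc_id) pairs

    def store_linked(self, code_id, code_text, doc_id, doc_text):
        self.code[code_id] = code_text
        self.docs[doc_id] = doc_text
        self.links.append((code_id, doc_id))   # the claimed "link"

repos = Repositories()
repos.store_linked("rung_17", "XIC Start OTE Motor",
                   "spec_17", "Energize Motor while Start is held")
```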
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen, Noetzelmann, Dunn and Achin’s teachings with de Seabra’s in order to enable modification of several inter-related computer software design models of any degree of complexity by receiving a specification for a file containing a computer design model and determining whether the computer design model in the specified file is the version most recently stored in the repository (de Seabra [Summary]). Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20240020096A1) in view of Noetzelmann (US 20180136910A1), Dunn (US20210089278A1) and Achin (US 9489630B2) further in view of Rudenko (US20250004915). Regarding Claim 18, Chen, Noetzelmann, Dunn and Achin teach The system of claim 1, [and in response to receiving a selection of a version of the industrial control code, from among the multiple versions of the industrial control code], storing, by the industrial IDE system, one of the sets of values of the hyperparameters corresponding to the version of the industrial control code in a prompt repository (Chen, Para [0056], “In some embodiments, a method may comprise performing at least one of outputting, via a user interface, the at least one identified computer code sample, compiling the at least one identified computer code sample, transmitting the at least one identified computer code sample to a recipient device, storing the at least one identified computer code sample”).
Chen, Noetzelmann, Dunn and Achin did not specifically teach further comprising: setting, by the industrial IDE system, multiple sets of values of hyperparameters for the generative AI model, generating, by the industrial IDE system using the generative AI model, multiple versions of the industrial control code using the respective multiple sets of values of hyperparameters for the generative AI model, and in response to receiving a selection of a version of the industrial control code, from among the multiple versions of the industrial control code, storing, by the industrial IDE system, one of the sets of values of the hyperparameters corresponding to the version of the industrial control code in a prompt repository. However, Rudenko (US20250004915) teaches further comprising: setting, by the industrial IDE system, multiple sets of values of hyperparameters for the generative AI model, generating, by the industrial IDE system using the generative AI model, multiple versions of the industrial control code using the respective multiple sets of values of hyperparameters for the generative AI model, and in response to receiving a selection of a version of the industrial control code, from among the multiple versions of the industrial control code, storing, by the industrial IDE system, one of the sets of values of the hyperparameters corresponding to the version of the industrial control code in a prompt repository (Para [0110], "the pipeline batch inputs the formed prompts to the code fix model, which is a generative AI model that has been trained to predict a token/text sequence based on the prompt, the predicted token/text sequence being a modification of the flawed code fragment. The code fix model will generate responses according to a configuration or hyperparameter that specifies responses to generate per prompt. 
Assuming the code fix model has been configured to generate n responses per prompt and the batch includes m prompts, the code fix model will generate m×n responses. The batch of prompts yields generated responses which this description sometimes refers to as a pool of generated candidate patches or pool of candidate patches. The candidate patches are candidates for possibly being selected to be used to fix the flaw"; Para [0065], “the trainer inserts the filtered training examples into the training dataset. The training examples with aggregated quality metrics values that satisfy the inclusion criterion are stored in a repository of training examples”) Examiner Comments: Rudenko directly teaches hyperparameters. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen, Noetzelmann, Dunn and Achin’s teachings with Rudenko’s in order to fine-tune a pre-trained large language model (LLM) to a particular task effectively by ranking multiple generated responses according to predicted quality measures, allowing the highest-ranked generated responses to be selected (Rudenko [Summary]). Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20240020096A1) in view of Noetzelmann (US 20180136910A1) further in view of Achin (US 9489630B2) and Dickman (US6411863B1). Regarding Claim 21, Chen, Noetzelmann and Achin teach The system of claim 1. Chen, Noetzelmann and Achin did not specifically teach wherein the industrial control code is configured to, in response to execution on an industrial controller, configure the industrial controller to monitor and control an industrial automation system in accordance with the industrial control code design requirements.
However, Dickman (US6411863B1) teaches wherein the industrial control code is configured to, in response to execution on an industrial controller, configure the industrial controller to monitor and control an industrial automation system in accordance with the industrial control code design requirements. (Col 1, lines 40-67, "The PLC processor will issue control commands to the clutch control circuit based upon a logic operations analysis of a group of performance measurements obtained from sensors which detect certain performance indicia (e.g., crankshaft angle) representing the operating condition of the press machine.") Examiner Comments: The prior art teaches the PLC processor executing code to analyze sensor data (monitor) and issue control commands (control) to the clutch circuit of the press machine (industrial automation system) based on logic operations (in accordance with design requirements); this maps one-to-one as the executed code configures the controller to perform monitoring via sensors and control the system per the programmed logic. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen, Noetzelmann and Achin with Dickman in order to incorporate the execution of control code on a PLC to monitor sensors and control an industrial system as taught by Dickman into the code generation system of the combination. This would ensure that the generated industrial control code is deployable on actual controllers for real-world operation, enabling reliable monitoring and control of automation processes as per specified requirements, thereby bridging the gap between code generation and practical implementation in manufacturing environments where sensor-based feedback and command issuance are essential for safe and efficient machine operation. 
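The monitor-and-control behavior Dickman describes (reading a performance indicium such as crankshaft angle from a sensor and issuing a command per the programmed logic) reduces to a simple per-scan decision; the threshold and function below are hypothetical stand-ins for illustration, not Dickman's disclosed control law:

```python
# Hedged sketch of the monitor-and-control pattern: each scan, the
# controller evaluates a monitored sensor value against its programmed
# logic and issues a command. The 90-degree threshold is hypothetical.

def control_scan(crank_angle_deg: float, engage_below: float = 90.0) -> bool:
    """Return True (engage clutch) when the monitored crankshaft angle
    satisfies the programmed condition, else False (disengage)."""
    return crank_angle_deg < engage_below

assert control_scan(45.0) is True     # within engagement window -> command on
assert control_scan(180.0) is False   # outside window -> command off
```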
Response to Arguments Applicant’s arguments with respect to claims 1-2, 4-21 have been considered but are moot because the arguments do not apply to the previously cited sections of the references used in the previous office action. The current office action is now citing additional references to address the newly added claim limitations. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR SOLTANZADEH whose telephone number is (571)272-3451. The examiner can normally be reached M-F, 9am - 5pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Mui, can be reached at (571) 272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AMIR SOLTANZADEH/Examiner, Art Unit 2191 /WEI Y MUI/Supervisory Patent Examiner, Art Unit 2191

Prosecution Timeline

Feb 09, 2024
Application Filed
Nov 22, 2025
Non-Final Rejection — §103, §DP
Feb 13, 2026
Response Filed
Mar 03, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602225
IDENTIFYING THE TRANLATABILITY OF HARD-CODED STRINGS IN SOURCE CODE VIA POS TAGGING
2y 5m to grant Granted Apr 14, 2026
Patent 12591414
CENTRALIZED INTAKE AND CAPACITY ASSESSMENT PLATFORM FOR PROJECT PROCESSES, SUCH AS WITH PRODUCT DEVELOPMENT IN TELECOMMUNICATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12561134
Function Code Extraction
2y 5m to grant Granted Feb 24, 2026
Patent 12561136
METHOD, APPARATUS, AND SYSTEM FOR OUTPUTTING SOFTWARE DEVELOPMENT INSIGHT COMPONENTS IN A MULTI-RESOURCE SOFTWARE DEVELOPMENT ENVIRONMENT
2y 5m to grant Granted Feb 24, 2026
Patent 12561118
SYSTEM AND METHOD FOR AUTOMATED TECHNOLOGY MIGRATION
2y 5m to grant Granted Feb 24, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
98%
With Interview (+16.9%)
2y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 421 resolved cases by this examiner. Grant probability derived from career allow rate.
