Prosecution Insights
Last updated: April 19, 2026
Application No. 18/984,498

DYNAMIC ARTIFICIAL INTELLIGENCE-BASED BLUEPRINTING GENERATION AND EXECUTION PLATFORM

Non-Final OA: §101, §102, §103

Filed: Dec 17, 2024
Examiner: PHILLIPS, III, ALBERT M
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: C3 AI Inc.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 82% (above average) — 583 granted / 712 resolved; +26.9% vs TC avg
Interview Lift: +12.9% (moderate), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 1m average prosecution; 18 applications currently pending
Career History: 730 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      17.8%    -22.2%
§102      19.8%    -20.2%
§103      37.4%    -2.6%
§112      15.3%    -24.7%

Tech Center averages are estimates; based on career data from 712 resolved cases.
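
A quick consistency check on the table: each "vs TC avg" figure reads as the examiner's rate minus the Tech Center baseline, so the baseline can be recovered by subtraction. A minimal Python sketch under that assumption (the page does not define the metric):

# Recovering the implied Tech Center baseline from the table above,
# assuming each "vs TC avg" delta is (examiner rate - TC average).
examiner_rate = {"§101": 17.8, "§102": 19.8, "§103": 37.4, "§112": 15.3}
delta_vs_tc = {"§101": -22.2, "§102": -20.2, "§103": -2.6, "§112": -24.7}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: implied TC average = {tc_avg:.1f}%")

All four rows imply the same 40.0% Tech Center baseline, consistent with a single average estimate across statutes.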

Office Action

Rejections under §101, §102, and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-11 and 18-20 are rejected under 35 USC 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites:

1. A computer-implemented method, comprising: receiving a first natural language query identifying a target procedure in a particular context; determining, based on the first natural language query, a reference schema for the target procedure from among a plurality of stored schemas, wherein each schema of the plurality of stored schemas defines a respective deterministic action sequence for performing a respective procedure in a respective context; constructing a second natural language query based on the first natural language query and the reference schema; sending, to a multimodal model, the second natural language query; receiving, from the multimodal model, an initial output schema based on the second natural language query, the initial output schema defining an initial action sequence for performing the target procedure; verifying a construction of the initial action sequence; and generating, based on the verifying, a final output schema that defines a deterministic action sequence for performing the target procedure, wherein the final output schema provides a deterministic result when executed.

Examiner finds that the emphasized portions of claim 1 above recite an abstract idea—namely, mental processes. See MPEP 2106.04(a)(2) and (III): “Accordingly, the ‘mental processes’ abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” When read as a whole, the recited limitations are directed to using mental steps to observe, evaluate, and make judgments about electronic data. Taking each element individually, Examiner provides the following analysis:

The element “determining, based on the first natural language query, a reference schema for the target procedure from among a plurality of stored schemas, wherein each schema of the plurality of stored schemas defines a respective deterministic action sequence for performing a respective procedure in a respective context” merely requires observation and evaluation of the first NL query and an evaluation/judgment with respect to which reference schema to choose.

The element “constructing a second natural language query based on the first natural language query and the reference schema” merely requires observation and evaluation of the first NL query and reference schema and an evaluation and/or judgment as to how to construct the second NL query.

The element “verifying a construction of the initial action sequence” merely requires human judgment as to the validity of the construction of the initial action sequence.

The element “generating, based on the verifying, a final output schema that defines a deterministic action sequence for performing the target procedure” merely requires an evaluation and/or judgment as to how to generate the final output schema.
Examiner finds this element can be practically performed in the human mind with the aid of pen and paper.

Turning to the additional elements, the element “1. A computer-implemented method, comprising: receiving a first natural language query identifying a target procedure in a particular context” recites insignificant extra solution activity in the form of mere data gathering. See MPEP 2106.05(g). As such, this element does not integrate the exception. This element also recites receiving or transmitting data over a network and thus recites a well-understood, routine, and conventional (WURC) computer function. As such, it fails to recite an inventive concept. See 2106.05(d)(II).

The element “sending, to a multimodal model, the second natural language query” recites “mere data gathering” and thus recites insignificant extra solution activity and does not integrate the exception. See MPEP 2106.05(g). This element also recites receiving or transmitting data over a network and thus recites a WURC computer function. See 2106.05(d)(II). As such, it does not recite an inventive concept. See id.

The element “receiving, from the multimodal model, an initial output schema based on the second natural language query, the initial output schema defining an initial action sequence for performing the target procedure” recites “mere data gathering” and thus recites insignificant extra solution activity and does not integrate the exception. See MPEP 2106.05(g). This element also recites receiving or transmitting data over a network and/or storing and retrieving information in memory and thus recites a WURC computer function. See 2106.05(d)(II). As such, it does not recite an inventive concept. See id.

Examiner finds “wherein the final output schema provides a deterministic result when executed” has no patentable weight because it recites an intended result.[1]

The additional elements above “‘[a]dd nothing … that is not already present when the steps are considered separately.’” MPEP 2106.05(I)(B) (quoting Alice). As such, claim 1 recites an abstract idea without significantly more.

Turning to the dependent claims, Examiner provides the following analysis:

Claim 2 recites “2. The computer-implemented method of claim 1, wherein the determining comprises: performing at least one of a Boolean database search, a natural language search, or a vector similarity algorithm on the plurality of stored schemas against the first natural language query.” This element recites insignificant extra solution activity (i.e., mere data gathering) and thus does not integrate the exception. See MPEP 2106.05(g). This element also recites storing and retrieving information in memory and thus recites a WURC computer function. See 2106.05(d)(II). As such, it does not recite an inventive concept. See id.

Claim 3 recites “3. The computer-implemented method of claim 1, wherein the generating comprises: updating, in response to a failed construction verification, the initial action sequence of the initial output schema to the deterministic action sequence.” This element merely requires evaluation of whether a failed construction verification occurs and an evaluation and/or judgment of how to update the initial action sequence based on this failure. Examiner finds this element can be practically performed in the human mind with the aid of pen and paper.

Claim 4 recites “4. The computer-implemented method of claim 1, wherein the generating comprises: using, in response to a positive construction verification, the initial action sequence as the deterministic action sequence.” Examiner finds this element generally links the abstract idea to the field of use of Large Language Models and thus does not integrate the exception or recite an inventive concept. See MPEP 2106.05(h).

Claim 5 recites “5. The computer-implemented method of claim 1, wherein the second natural language query further comprises information related to the particular context.” This element merely recites results of the evaluation and/or judgment being performed.

Claim 6 recites “6. The computer-implemented method of claim 5, wherein the information related to the particular context comprises at least one of a user manual, an enterprise repository, or expert information.” This element merely recites results of the evaluation and/or judgment being performed.

Claim 7 recites “7. The computer-implemented method of claim 1, further comprising: executing the final output schema to obtain the deterministic result for the target procedure.” This element recites mere instructions to apply the exception (i.e., it is equivalent to “apply it”). See MPEP 2106.05(f). As such, this element fails to integrate the exception and fails to recite an inventive concept. See id.

Claim 8 recites “8. The computer-implemented method of claim 1, wherein the target procedure comprises diagnosing or troubleshooting an issue with an industrial machine.” This element merely describes what is being observed and/or evaluated.

Claim 9 recites “9. The computer-implemented method of claim 1, wherein the receiving the query comprises: mining information regarding the particular context to obtain the target procedure.” This element generally links the abstract idea to the field of use of data mining. As such, it fails to integrate the exception and fails to recite an inventive concept. See MPEP 2106.05(h). Claim 9 further recites “and constructing the natural language query based on the mining.” This element merely requires observation and evaluation of the mined data and an evaluation and/or judgment as to how to construct the NL query based on the data.

Claim 10 recites “10. The computer-implemented method of claim 9, wherein the mining comprises mining at least one of conversation data or video data related to the particular context.” This element generally links the abstract idea to the field of use of data mining. As such, it fails to integrate the exception and fails to recite an inventive concept. See MPEP 2106.05(h).

Claim 11 recites “11. The computer-implemented method of claim 1, further comprising: assigning, . . . a plurality of agents to execute the final output schema.” This element merely requires judgment on the part of a human as to which agent is assigned to execute the final output schema. Examiner finds this element can be practically performed in the human mind with the aid of pen and paper. Examiner finds “by an orchestrator” generally links the abstract idea to the field of use of agents/LLMs/generative AI and thus does not integrate the exception or recite an inventive concept. See MPEP 2106.05(h).

The element “instructing a first agent to perform a first action within the deterministic action sequence to obtain an agent decision” merely requires human judgment as to the instructions that are needed for a first agent to perform a first action.
Examiner finds this element can be practically performed in the human mind with the aid of pen and paper.

The element “determining, based on the agent decision, one or more additional actions within the deterministic action sequence to perform” merely requires observation and evaluation of the decision and an evaluation/judgment as to what additional action to take. Examiner finds this element can be practically performed in the human mind with the aid of pen and paper.

The element “instructing one or more additional agents to perform the one or more additional actions to obtain a final schema result; and” merely requires human evaluation/judgment as to what instructions are required to “perform the one or more additional actions to obtain a final schema result. . .” Examiner finds this element can be practically performed in the human mind with the aid of pen and paper.

The element “completing the target procedure based on the final schema result” recites mere instructions to apply the exception. As such, it fails to integrate the exception and fails to recite an inventive concept. The additional elements “‘[a]dd nothing … that is not already present when the steps are considered separately.’” MPEP 2106.05(I)(B) (quoting Alice). Thus, the dependent claims above are directed to an abstract idea without significantly more.

Claim 18 recites (emphasis added):

18. A computer-implemented method, comprising: receiving, from a schema management platform, a natural language query identifying a first procedure in a first context and a reference schema for the first procedure, wherein the reference schema defines a deterministic action sequence for performing a second procedure in a second context; generating an initial output schema based on the natural language query, the initial output schema defining an initial action sequence for performing the first procedure; and sending, to the schema management platform, the initial output schema, wherein the initial output schema is used to generate a final output schema that defines a deterministic action sequence for performing the first procedure and the final output schema provides a deterministic result when executed.

Examiner finds that the emphasized portions of claim 18 above recite an abstract idea—namely, mental processes. See MPEP 2106.04(a)(2) and (III): “Accordingly, the ‘mental processes’ abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” When read as a whole, the recited limitations are directed to using mental steps to observe, evaluate, and make judgments about electronic data. Taking each element individually, Examiner provides the following analysis:

The element “generating an initial output schema based on the natural language query, the initial output schema defining an initial action sequence for performing the first procedure” merely requires observation and evaluation of the NL query and an evaluation/judgment as to how to generate the initial output schema based on the evaluation of the NL query. As such, claim 18 recites an abstract idea.

Turning to the additional elements, the element “18. A computer-implemented method, comprising: receiving, from a schema management platform, a natural language query identifying a first procedure in a first context and a reference schema for the first procedure, wherein the reference schema defines a deterministic action sequence for performing a second procedure in a second context” recites insignificant extra solution activity in the form of mere data gathering. See MPEP 2106.05(g). As such, it does not integrate the exception. This element recites receiving or transmitting data over a network and thus recites a WURC computer function. See MPEP 2106.05(d)(II). As such, it does not recite an inventive concept.

The element “sending, to the schema management platform, the initial output schema, wherein the initial output schema is used to generate a final output schema that defines a deterministic action sequence for performing the first procedure and the final output schema provides a deterministic result when executed” recites insignificant extra solution activity in the form of mere data gathering. See MPEP 2106.05(g). As such, it does not integrate the exception. This element recites receiving or transmitting data over a network and thus recites a WURC computer function. See MPEP 2106.05(d)(II). As such, it does not recite an inventive concept.

Examiner finds “wherein the initial output schema is used to generate a final output schema that defines a deterministic action sequence for performing the first procedure and the final output schema provides a deterministic result when executed” recites an intended result and thus has no patentable weight.[2]

The additional elements “‘[a]dd nothing … that is not already present when the steps are considered separately.’” MPEP 2106.05(I)(B) (quoting Alice). Thus, claim 18 is directed to an abstract idea without significantly more.

Claim 19 recites “19. The computer-implemented method of claim 18, wherein the generating comprises: employing one or more agents to action the natural language query.” This element generally links the abstract idea to the field of use of LLMs/generative AI/agents and thus does not integrate the exception and does not recite an inventive concept. See MPEP 2106.05(h).

Claim 20 recites “20. The computer-implemented method of claim 19, wherein the generating further comprises: receiving, from the one or more agents, an additional prompt for iterative processing, wherein the initial output schema is based on the natural language query and the additional prompt.” This element recites insignificant extra solution activity in the form of mere data gathering. See MPEP 2106.05(g). As such, it does not integrate the exception. This element recites receiving or transmitting data over a network and thus recites a WURC computer function. See MPEP 2106.05(d)(II). As such, it does not recite an inventive concept. The additional elements “‘[a]dd nothing … that is not already present when the steps are considered separately.’” MPEP 2106.05(I)(B) (quoting Alice). Thus, claims 19-20 are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-4, 7, and 18-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zeng, FlowMind: Automatic Workflow Generation with LLMs, Nov. 2023.

With respect to claim 1, Zeng teaches “A computer-implemented method, comprising receiving a first natural language query identifying a target procedure in a particular context” on p. 80, Fig. 7 (the first NL query is, for example, “What is the total purchase sale for the FT CBOE VEST U.S. EQUITY DEEP BUFFER ETF - FEBRUARY?”); p. 75, section 3.1, first paragraph (“Specifically, the proposed lecture prompt recipe covers: • Context: First we introduce the context which covers the domain of the expected tasks/queries from the user. For example, in our experiments, we set up the context as handling information queries from user, as shown in Figure 3. . . . The crafted prompt following the lecture recipe enables the LLM to gain the necessary understanding of the context and available APIs, to utilize them in the subsequent stage of workflow generation effectively.”) (emphasis added).

[image omitted]

Zeng teaches “determining, based on the first natural language query, a reference schema for the target procedure from among a plurality of stored schemas, wherein each schema of the plurality of stored schemas defines a respective deterministic action sequence for performing a respective procedure in a respective context” on p. 76, Fig. 3 (reference schemas are any one of the functions listed in Fig. 3; they are “deterministic sequence[s] of action that can be used for performing . . . procedure[s] in a particular context[s].” Applicant’s spec at para. 17; the lecture in Fig. 3 provides respective contexts (e.g., document bot, reports, etc.)); p. 75, section 3.1, first paragraph (quoting the same lecture-recipe passage as above) (emphasis added).

Zeng teaches “constructing a second natural language query based on the first natural language query and the reference schema” on p. 75, section 3.2, 2nd paragraph (“We prompt LLM with ‘Could you provide a concise high-level summary of the flow of code?’” (second NL query); the flow of code is the result of the first NL query and schema (the functions in Fig. 3, for example)); “sending, to a multimodal model, the second natural language query” on p. 75, section 3.2, 2nd paragraph (“We prompt LLM with ‘Could you provide a concise high-level summary of the flow of code?’”) (emphasis added); and “receiving, from the multimodal model, an initial output schema based on the second natural language query, the initial output schema defining an initial action sequence for performing the target procedure” on p. 80, Fig. 7 (the initial output schema is the workflow in the Fig. 7 left panel).

[image omitted]

Zeng teaches “generating, based on the verifying, a final output schema that defines a deterministic action sequence for performing the target procedure, wherein the final output schema provides a deterministic result when executed” on p. 80, Fig. 7 (the final output schema is the workflow in the right panel of Fig. 7; the returned output is deterministic by definition; see also p. 77, section 4.2, first paragraph).

[image omitted]

With respect to claim 2, Zeng teaches “2. The computer-implemented method of claim 1, wherein the determining comprises: performing at least one of a Boolean database search, a natural language search, or a vector similarity algorithm on the plurality of stored schemas against the first natural language query” on p. 75, section 3.1 (“Note that the function names, input arguments, and output descriptions must be semantically meaningful and relevant to the context above such that the LLM can comprehend to make good use of the functions”); and on p. 80 (“In the future, it’s worth investigating crowdsourcing user feedback to refine workflows in FlowMind at scale, as well as life-long learning over past user-approved examples to evolve its performance over time. In addition, FlowMind can be expanded in the future to handle big libraries of APIs by retrieving the most relevant APIs for a given task given embedding similarity”) (Examiner finds the LLM determining which tool to use necessarily involves the LLM using NL search or a vector similarity algorithm to determine which of the functions in Fig. 3 are similar to the NL query).

With respect to claim 3, Zeng teaches “The computer-implemented method of claim 1, wherein the generating comprises: updating, in response to a failed construction verification, the initial action sequence of the initial output schema to the deterministic action sequence” on p. 80 (“In the future, it’s worth investigating crowdsourcing user feedback to refine workflows in FlowMind at scale, as well as life-long learning over past user-approved examples to evolve its performance over time. In addition, FlowMind can be expanded in the future to handle big libraries of APIs by retrieving the most relevant APIs for a given task given embedding similarity”).

With respect to claim 4, Zeng teaches “4. The computer-implemented method of claim 1, wherein the generating comprises: using, in response to a positive construction verification, the initial action sequence as the deterministic action sequence” in the abstract; p. 74 (“Understanding the necessity for human oversight, our system also integrates user feedback. Without assuming the programming experiences of the user, the system provides a high-level description of the auto-generated workflow, allowing novice users to inspect and provide feedback. FlowMind then takes the user feedback and adjusts the generated workflow if needed”) (the initial action sequence (initial workflow) is used if adjustment is not needed).

With respect to claim 7, Zeng teaches “7. The computer-implemented method of claim 1, further comprising: executing the final output schema to obtain the deterministic result for the target procedure” on p. 80, Fig. 7 (the deterministic result is “The total purchase sale for the FT CBOE VEST U.S. EQUITY DEEP BUFFER ETF - FEBRUARY is 6.33e07”; the final output schema is the workflow on the right-hand side in Fig. 7).

[image omitted]

With respect to claim 18, Zeng teaches “18. A computer-implemented method, comprising: receiving, from a schema management platform, a natural language query identifying a first procedure in a first context and a reference schema for the first procedure” on p. 76, Fig. 3 lecture prompt; p. 80, Fig. 7 (the first NL query is, for example, “What is the total purchase sale for the FT CBOE VEST U.S. EQUITY DEEP BUFFER ETF - FEBRUARY?”); p. 75, section 3.1, first paragraph (“Specifically, the proposed lecture prompt recipe covers: • Context: First we introduce the context which covers the domain of the expected tasks/queries from the user. For example, in our experiments, we set up the context as handling information queries from user, as shown in Figure 3. . . . The crafted prompt following the lecture recipe enables the LLM to gain the necessary understanding of the context and available APIs, to utilize them in the subsequent stage of workflow generation effectively. • APIs: Then we provide a list of structured descriptions of the available APIs to use for the LLM. Importantly, we introduce the name of the function, the input arguments, and the output variables. Note that the function names, input arguments, and output descriptions must be semantically meaningful and relevant to the context above such that the LLM can comprehend to make good use of the functions”) (the first procedure is “total purchase sale for the FT CBOE VEST U.S. EQUITY DEEP BUFFER ETF - FEBRUARY?”; the first context is total purchase sale; the reference schema is the lecture prompt in Fig. 3).

Zeng teaches “wherein the reference schema defines a deterministic action sequence for performing a second procedure in a second context” on p. 76, Fig. 3, lecture prompt (Examiner finds the second context is “Imagine we are working with a document bot. The job of this bot is to respond to information queries from user”; Examiner finds “Wait for user queries, then write python code (with modularization) and use these functions to respond” teaches “a deterministic action sequence for performing a second procedure”); p. 77 (“In FlowMind, we ground the ability of LLMs to reason with reliable Application Programming Interfaces (APIs), as discussed in Section 3. The strength of APIs lies in their robustness, having been designed by domain experts capable of handling vast amounts of data in a structured, parallelized, and deterministic manner”).

Zeng teaches “generating an initial output schema based on the natural language query, the initial output schema defining an initial action sequence for performing the first procedure” in the Fig. 7 workflow, left panel (the initial output schema comprises the functions described in the Fig. 7 workflow left panel); “and sending, to the schema management platform, the initial output schema” in Fig. 7, left panel (the high-level workflow description is a result of the initial output schema being sent to an LLM—see p. 75, Fig. 2, stage 2); and “wherein the initial output schema is used to generate a final output schema that defines a deterministic action sequence for performing the first procedure and the final output schema provides a deterministic result when executed” in the Fig. 7 workflow, right panel; p. 75, Fig. 2, stage 2 (Execute results); p. 77 (quoting the same API-robustness passage as above) (Examiner finds the final output schema is the function taught in the Fig. 7 workflow right panel).

With respect to claim 19, Zeng teaches “19. The computer-implemented method of claim 18, wherein the generating comprises: employing one or more agents to action the natural language query” on p. 75, Fig. 2 (Examiner finds “LLM” teaches at least one agent).

With respect to claim 20, Zeng teaches “20. The computer-implemented method of claim 19, wherein the generating further comprises: receiving, from the one or more agents, an additional prompt for iterative processing” in Fig. 7 (user feedback is used in an additional prompt), and “wherein the initial output schema is based on the natural language query and the additional prompt” in Fig. 2, stage 2 (“During stage 2, we enable a feedback loop between FlowMind and the user, where FlowMind provides high-level description of the generated workflow in plain-language, and the user inputs feedback to FlowMind to approve or refine the workflow if needed.”) (the initial output schema can go through any number of iterations (loops) before the LLM “gets it right” and, based on user feedback, provides the correct workflow code).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zeng as applied to claim 1 above, and further in view of Schafer, An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation, 11 Dec 2023.

With respect to claim 9, Zeng teaches “receiving the query.” See above. Zeng fails to explicitly teach “mining information regarding the particular context to obtain the target procedure” and “constructing the natural language query based on the mining.”

However, Schafer teaches “9. The computer-implemented method of claim 1, wherein the receiving the query comprises: mining information regarding the particular context to obtain the target procedure” on p. 2, left column, 3rd full paragraph (“(3) usage examples for the function mined from documentation, if available”); p. 3 (“Documentation Miner: This component extracts code snippets and comments from documentation included with the PUT, and associates them with the API functions they pertain to. The aim is to collect, for each API function, comments and examples describing its purpose and intended usage”); p. 19 (“In addition, the mining of documentation and usage examples would need to be adapted to match the documentation format used for that language”); and “constructing the natural language query based on the mining” on p. 3 (“Figure 2 presents the high-level architecture of TESTPILOT, which consists of five main components: Given a PUT as input, the API explorer identifies functions to test; the documentation miner extracts metadata about them; and the prompt generator, test validator, and prompt refiner collaborate to construct prompts for test generation, assemble complete tests from the LLM’s response, run them to determine whether they pass, and construct further prompts to generate more tests. We now discuss each of these components in more detail.”); Fig. 2 (circles added by Examiner).

[image omitted]

Zeng and Schafer are analogous art because they are from the same field of endeavor as the claimed invention. It would have been obvious to one skilled in the art before the effective filing date of the invention to modify “receiving the query” taught in Zeng to include “wherein the receiving the query comprises: mining information regarding the particular context to obtain the target procedure and constructing the natural language query based on the mining” as taught by Schafer. The motivation would have been to produce unit tests that are readable, understandable, and produce non-spurious assertions. See Schafer p. 1, section 1, second and third paragraphs.

Allowable Subject Matter

Claims 12-17 are allowed.

Reasons for Indicating Allowable Subject Matter

Patent Eligibility

Claim 12 recites the following:

sending, to a language processing system having one or more multimodal models, a first query for generating a machine-executable instruction for performing an action within a deterministic action sequence defined in a schema, wherein the deterministic action sequence defined in the schema performs a respective procedure in a particular context; receiving, from the language processing system, a generated machine-executable instruction based on the first query; sending, to the language processing system, a second query for validating whether the generated instruction performs the action when executed; receiving, from the language processing system, a positive validation result indicating that the generated instruction performs the action when executed; sending, to the language processing system and based on the positive validation result, a third query for generating a unit test for the generated instruction, wherein the unit test specifies a pass condition; receiving, from the language processing system, a generated unit test for the generated instruction; obtaining, by executing the unit test, a positive unit test result indicating that the generated instruction fulfills the pass condition; and

Examiner finds these additional elements recite “specific limitation[s] other than what is well-understood, routine, conventional activity in the field” and are “unconventional steps that confine the claim to a particular useful application.” MPEP 2106.05(I)(A). As such, claim 12 recites an inventive concept. See id.

Prior Art

Zeng teaches “12. A computer-implemented method, comprising: sending, to a language processing system having one or more multimodal models, a first query” on p. 80, Fig. 7 (the first NL query is, for example, “What is the total purchase sale for the FT CBOE VEST U.S. EQUITY DEEP BUFFER ETF - FEBRUARY?”; the first query is sent to an LLM (multimodal model)—see p. 75, section 3.2, first paragraph); “for generating a machine-executable instruction for performing an action within a deterministic action sequence defined in a schema, wherein the deterministic action sequence defined in the schema performs a respective procedure in a particular context” on p. 76, Fig. 3 (schemas are any one of the functions (procedures) listed in Fig. 3 because they are “deterministic sequence[s] of action that can be used for performing . . . procedure[s] in a particular context[s].” Applicant’s spec at para. 17; the lecture in Fig. 3 provides respective contexts (e.g., document bot, reports, etc.)); see p. 75, section 3.1 (quoting the lecture-recipe passage above) (emphasis added).

Zeng teaches “receiving, from the language processing system, a generated machine-executable instruction based on the first query” in Fig. 7 (the “workflow” left panel is the generated machine-executable instruction based on the first query); “sending, to the language processing system, a second query for validating whether the generated instruction performs the action when executed” (the second query is the query that produces the output of the high-level workflow description in Fig. 7); and “receiving, from the language processing system, a positive validation result indicating that the generated instruction performs the action when executed” on p. 75, Fig. 2 (“During stage 2, we enable a feedback loop between FlowMind and the user, where FlowMind provides high-level description of the generated workflow in plain-language, and the user inputs feedback to FlowMind to approve or refine the workflow if needed”) (Examiner finds “approve” is a positive validation).

However, Schafer, An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation, Dec 11, 2023, teaches “a third query for generating a unit test for the generated instruction, wherein the unit test specifies a pass condition” on p. 2 (“A simple test generation technique where unit tests are generated by iteratively querying an LLM with a prompt containing signatures of API functions under test and, optionally, the bodies, documentation, and usage examples associated with such functions.”) (the third query is the querying of the LLM with the prompt); “receiving, from the language processing system, a generated unit test for the generated instruction” on p. 2 (same passage) and p. 3 (“Figure 2 presents the high-level architecture of TESTPILOT, which consists of five main components: Given a PUT as input, the API explorer identifies functions to test; the documentation miner extracts metadata about them; and the prompt generator, test validator, and prompt refiner collaborate to construct prompts for test generation, assemble complete tests from the LLM’s response, run them to determine whether they pass”); and “obtaining, by executing the unit test, a positive unit test result indicating that the generated instruction fulfills the pass condition; and” on p. 3 (same passage); p. 4, Fig. 3 (“Prompt(b) contains one snippet and the generated test passes”).
Prior art of record fails to teach or suggest “. . . sending, to the language processing system and based on the positive validation result, . . . assigning, based on the positive unit test result, the generated instruction to the action, wherein an execution of the action comprises executing the generated instruction.”

Conclusion

The following prior art is relevant to Applicant’s specification: US 20180232443 A1.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALBERT M PHILLIPS, III, whose telephone number is (571) 270-3256. The examiner can normally be reached 10am-6:30pm EST M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann J Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALBERT M PHILLIPS, III/
Primary Examiner, Art Unit 2159

[1] See MPEP 2111.04: Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure. However, examples of claim language, although not exhaustive, that may raise a question as to the limiting effect of the language in a claim are: (A) “adapted to” or “adapted for” clauses; (B) “wherein” clauses; and (C) “whereby” clauses. The determination of whether each of these clauses is a limitation in a claim depends on the specific facts of the case. See, e.g., Griffin v. Bertina, 283 F.3d 1029, 1034, 62 USPQ2d 1431 (Fed. Cir. 2002) (finding that a “wherein” clause limited a process claim where the clause gave “meaning and purpose to the manipulative steps”). In Hoffer v. Microsoft Corp., 405 F.3d 1326, 1329, 74 USPQ2d 1481, 1483 (Fed. Cir. 2005), the court held that when a “‘whereby’ clause states a condition that is material to patentability, it cannot be ignored in order to change the substance of the invention.” Id. However, the court noted (quoting Minton v. Nat’l Ass’n of Securities Dealers, Inc., 336 F.3d 1373, 1381, 67 USPQ2d 1614, 1620 (Fed. Cir. 2003)) that a “‘whereby clause in a method claim is not given weight when it simply expresses the intended result of a process step positively recited.’” Id. (emphasis added).

[2] See id.
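
To make the rejected independent claim easier to follow against the element-by-element analysis above, here is a minimal Python sketch of the claim 1 pipeline, written only from the claim language quoted in the Office Action. It is illustrative, not the applicant's implementation and not FlowMind's code: Schema, embed, model, and verify_construction are hypothetical stand-ins, and the vector-similarity retrieval is simply the option claim 2 recites.

# Illustrative sketch of the claim 1 method; all helpers are
# hypothetical stand-ins written from the claim language alone.
import math
from dataclasses import dataclass

@dataclass
class Schema:
    context: str                  # "a respective context"
    action_sequence: list[str]    # "a respective deterministic action sequence"

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.hypot(*a), math.hypot(*b)
    return dot / (na * nb) if na and nb else 0.0

def verify_construction(actions: list[str]) -> bool:
    # Placeholder verifier (assumption): a real system might type-check
    # each action or dry-run it against the execution environment.
    return all(isinstance(a, str) and a.strip() for a in actions)

def generate_final_schema(first_query, stored_schemas, embed, model) -> Schema:
    # "determining ... a reference schema ... from among a plurality of
    # stored schemas" -- claim 2 permits vector similarity, sketched here
    q = embed(first_query)
    reference = max(stored_schemas, key=lambda s: cosine(q, embed(s.context)))

    # "constructing a second natural language query based on the first
    # natural language query and the reference schema"
    second_query = (f"{first_query}\n"
                    f"Reference procedure: {reference.action_sequence}")

    # "sending, to a multimodal model, the second natural language query"
    # and "receiving ... an initial output schema"
    initial: Schema = model(second_query)

    # "verifying a construction of the initial action sequence", then
    # "generating, based on the verifying, a final output schema"
    if verify_construction(initial.action_sequence):
        return initial  # claim 4: positive verification, use as-is
    # claim 3: failed verification -> update the initial action sequence
    repaired = [a for a in initial.action_sequence
                if isinstance(a, str) and a.strip()]
    return Schema(initial.context, repaired)

The verify-then-branch at the end mirrors the claim 3 / claim 4 pair: a failed construction verification updates the initial action sequence, while a positive one adopts it as the deterministic action sequence.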
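The allowed claim 12 is, by contrast, a three-query loop that ends in an executable check. A sketch of that flow, with lps as a hypothetical client for the recited "language processing system" (none of these method names come from the application or the cited art):

# Illustrative sketch of the allowed claim 12 loop (generate ->
# validate -> unit-test -> assign); lps and its methods are assumed.
def bind_instruction_to_action(action: str, lps) -> str:
    # "a first query for generating a machine-executable instruction
    # for performing an action" / "receiving ... a generated
    # machine-executable instruction based on the first query"
    instruction = lps.generate(f"Write an instruction that performs: {action}")

    # "a second query for validating whether the generated instruction
    # performs the action when executed"
    if not lps.validate(instruction, action):
        raise ValueError("negative validation result")

    # "based on the positive validation result, a third query for
    # generating a unit test ... wherein the unit test specifies a
    # pass condition"
    unit_test = lps.generate_unit_test(instruction)

    # "obtaining, by executing the unit test, a positive unit test result"
    if not unit_test.run().passed:
        raise ValueError("unit test failed its pass condition")

    # "assigning, based on the positive unit test result, the generated
    # instruction to the action" -- the limitation the prior art lacked
    return instruction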

Prosecution Timeline

Dec 17, 2024 — Application Filed
Feb 18, 2026 — Non-Final Rejection: §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596919 — NEURAL NETWORK ACCELERATOR WITH A CONFIGURABLE PIPELINE
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12585918 — ML MODEL DRIFT DETECTION USING MODIFIED GAN
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12585646 — INFORMATION PROVISION DEVICE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579154 — SYSTEM AND METHOD OF INFORMATION EXTRACTION, SEARCH AND SUMMARIZATION FOR SERVICE ACTION RECOMMENDATION
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12572810 — System and Method For Generating Improved Prescriptors
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 95% (+12.9%)
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 712 resolved cases by this examiner. Grant probability derived from career allow rate.
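
The headline projections are consistent with simple arithmetic on the career data. A sketch assuming the tool rounds the raw allow rate and adds the interview lift as percentage points (its actual model is not disclosed):

# Assumed relationships only; the page does not publish its model.
granted, resolved = 583, 712
allow_rate = granted / resolved               # 0.8188... -> shown as "82%"
interview_lift = 12.9                         # percentage points
with_interview = allow_rate * 100 + interview_lift
print(f"base {allow_rate:.1%}, with interview {with_interview:.0f}%")
# -> base 81.9%, with interview 95%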
