Prosecution Insights
Last updated: April 19, 2026
Application No. 18/662,960

SYSTEMS AND METHODS FOR TRAINING LANGUAGE MODELS TO PERFORM ACTION CHAINS

Non-Final OA §103
Filed: May 13, 2024
Examiner: GAY, SONIA L
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Zeta Global Corp.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82%, above average (701 granted / 855 resolved; +20.0% vs TC avg)
Interview Lift: +11.4%, moderate (resolved cases with interview)
Typical Timeline: 3y 0m avg prosecution; 33 currently pending
Career History: 888 total applications across all art units

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Comparisons are against the Tech Center average estimate. Based on career data from 855 resolved cases.

Office Action

§103
DETAILED ACTION

This action is in response to the initial filing of application no. 18/662,960 on 05/13/2024. Claims 1-20 are pending in this application, with claims 1 and 11 being independent.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1 and 11 are objected to because of the following informalities: claims 1 and 11 recite the limitation "the response generated using at an output from the at least one …". The language is unclear. To further prosecution, the limitation has been interpreted as follows: "the response generated at an output from the at least one resource obtained using the at least one tool." However, advisement or correction is requested.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5, 6, 9, 11, 13, 15, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Radkoff et al. (US 2023/0342167) ("Radkoff") in view of Shapira (US 12,216,713) and further in view of Alvarez et al. (US 2021/0241038) ("Alvarez").
For claims 1 and 11, Radkoff discloses a system and method (Abstract), comprising: one or more processors (Fig.9, 914; [0100] [0101]); and a memory (Fig.9, 925) storing instructions that, when executed by at least one processor in the one or more processors, cause the at least one processor to perform operations ([0105]) comprising: identifying a task included in a user request (A task/policy embedding, T2’, is generated based on a received natural language command.) received from a conversational user interface (microphone of client device to receive a spoken utterance, Fig.9, 922; [0053] [0100] [0102]) ([0057] [0081 – 0083]); determining an action chain for generating a response for the task (The action embedding A’, which represents an action chain, is determined for the task/policy embedding T2’, [0024 – 0027] [0058 – 0060] [0084]); and training an agent model (Domain Model C, [0034] [0035]) to generate a response for the action chain ([0060] [0063] [0064] [0087 – 0093]), the response generated using at an output (client device) ([0062] [0070]), the training including: determining training data ([0031 – 0033] [0060] [0063 – 0064] [0093]). 
Yet, Radkoff fails to teach the following: identifying, for the action chain, at least one resource and at least one tool that provides an interface for the at least one resource; generating the response for the action chain at the output from the at least one resource obtained using the at least one tool; determining training data including a training sample and a training prompt, the training prompt including the user request, training instructions, and one or more configurations for at least one of the at least one resource and the at least one tool; displaying the training prompt to a pre-trained model to generate the agent model; generating, by the agent model, agent responses for multiple test cases included in the training sample, each of the multiple test cases including a sample user request and a test response for the sample user request; comparing the agent response to the test response for each of the multiple test cases to generate a composite similarity score for the agent responses; updating the training prompt based on the composite similarity score; and re-training the agent model using the updated training prompt.

However, Shapira discloses a system and method for using a machine learning model to provide data corresponding to a query (Abstract), comprising the following: identifying, for an action chain (multiple actions, e.g. database queries, wherein the database queries are chained, column 28 lines 27 – 51), at least one resource (database, column 23 lines 12 – 27, 52 – 55) and at least one tool (database language) that provides an interface for the at least one resource (A database language enables interaction with a database, column 4 lines 13 – 16 and column 23 lines 26 – 43) (A domain is determined for a query/action chain. The domain is used to identify relevant prompt files, wherein the prompt files comprise resource/database and tool/database language information., column 23 line 44 – column 24 line 47; column 26 lines 28 – 55, 65 – column 27 line 29); determining training data including a training sample (example query pairs in a prompt file, Fig.14, 1422; column 24 lines 28 – 43, column 27 lines 10 – 30, 50 – column 28 line 4, column 28 line 61 – column 29 line 20, column 30 line 59 – column 31 line 9) and a training prompt including a user query (column 27 lines 51 – 65; column 29 lines 1 and 2; column 30 line 59 – column 31 line 3), training instructions ("Translate these English statements to the cypher graph query language", which is provided in the prompt file. The prompt file is provided as input for a machine learning model during training/fine-tuning, Fig.14; column 26 lines 28 – 42, column 29 lines 1 – 15, column 30 line 59 – column 31 line 8), and one or more configurations (schema/parameters) for at least one of the at least one resource (database) (The database schema, e.g. nodes, is provided in the prompt file. The prompt file is provided as input for a machine learning model during training/fine-tuning, Fig.14, 1412; column 26 line 65 – column 27 line 10, column 29 lines 1 – 15, column 30 line 59 – column 31 line 8) and at least one tool (database query language) (The accepted parameters/properties, e.g. Person.name, for the database query language are provided in the prompt file. The prompt file is provided as input for a machine learning model during training/fine-tuning, Fig.14, 1412; column 26 line 65 – column 27 line 10, column 29 lines 1 – 15, column 30 line 59 – column 31 line 8); displaying the training data to a pre-trained machine model to generate a fine-tuned model (The pre-trained machine learning model is fine-tuned with the user query and prompt file., column 25 lines 3 – 13; column 27 line 42 – column 28 lines 5, 20 – column 29 lines 2, 15 – 35; column 30 line 59 – column 31 line 8); generating, by the pre-trained machine model, model responses for multiple cases included in the training sample (Batching is used in training the machine learning model, column 23 lines 11 – 27; column 24 lines 48 – 60; column 25 lines 3 – 13; column 29 lines 57 – 63; column 30 line 59 – column 31 line 3), each of the multiple cases including a sample user request and a test response for the sample user request (query pairs in the prompt file, Fig.14, 1422; column 27 lines 10 – 30, column 29 lines 15 – 20); comparing the model response to the response for each of the multiple cases to generate a similarity score for the model responses (column 25 line 28 – column 26 line 5; column 29 line 57 – column 30 lines 16, 59 – column 31 line 8); updating the training prompt based on the similarity score (The training prompt is updated by generating and adding a new second query represented in the database language to the prompt file. This second query provides the interface tool and related parameters, column 30 lines 17 – 33, 35 – column 31 line 3); re-training the pre-trained machine model using the updated training prompt (column 30 lines 17 – 33, 35 – column 31 line 3); and generating the response for the action chain at the output from the at least one resource obtained using the at least one tool (Fig.13, 1345 and Fig.16, 1610; column 26 lines 19 – 28, column 31 line 55 – column 32 line 17).
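The training loop as mapped above (generate agent responses for the test cases in the training sample, score them against the test responses to produce a composite similarity score, update the training prompt, and re-train) can be sketched roughly as follows. This is an illustrative reading of the claim language only; every name here (`run_agent`, `similarity`, `train_loop`) is hypothetical and not drawn from Radkoff, Shapira, or Alvarez, and the string-overlap metric merely stands in for whatever scoring the claims contemplate.

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    # Crude string-overlap ratio standing in for a semantic metric.
    return SequenceMatcher(None, a, b).ratio()


def run_agent(prompt: str, request: str) -> str:
    # Placeholder for the agent model's generation step.
    return f"{prompt}:{request}"


def train_loop(prompt, test_cases, threshold=0.8, max_rounds=3):
    """For each (sample user request, test response) pair, generate an
    agent response, compute a composite similarity score, and update
    the training prompt until the score satisfies the threshold."""
    composite = 0.0
    for _ in range(max_rounds):
        scores = [
            similarity(run_agent(prompt, req), expected)
            for req, expected in test_cases
        ]
        composite = sum(scores) / len(scores)  # composite similarity score
        if composite >= threshold:
            return prompt, composite           # threshold satisfied: deploy
        prompt = prompt + " +example"          # update the training prompt
    return prompt, composite
```

The prompt update here is a token placeholder; in Shapira's scheme, as characterized by the rejection, the update adds a new example query pair to the prompt file.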
Moreover, Alvarez discloses a system and method for training and testing a neural network model (Abstract), comprising the following: training samples comprise both training and testing samples ([0032 – 0035]); and a composite score generated based on training and testing the model is used to assess model performance ([0039 – 0042]).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve Radkoff's invention in the same way that Shapira's invention has been improved to achieve the following, predictable results for the purpose of training the agent model to receive and process natural language input to reliably and accurately query a database using database query language (Shapira, column 1 line 35 – column 2 line 10): further identifying, for the action chain, at least one resource and at least one tool that provides an interface for the at least one resource; generating the response for the action chain at the output from the at least one resource obtained using the at least one tool; further determining training data including a training sample and a training prompt, the training prompt including the user request, training instructions, and one or more configurations for at least one of the at least one resource and the at least one tool; further displaying the training prompt to a pre-trained model to generate the agent model; further generating, by the agent model, agent responses for multiple cases included in the training sample, each of the multiple cases including a sample user request and a response for the sample user request; further comparing the agent response to the response for each of the multiple cases to generate a similarity score for the agent responses; updating the training prompt based on the similarity score; and re-training the agent model using the updated training prompt.

Additionally, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Radkoff and Shapira in the same way that Alvarez's invention has been improved to achieve the following, predictable results for the purpose of training and testing the agent model to receive and process natural language input to reliably and accurately query a database using database query language (Shapira, column 1 line 35 – column 2 line 10): the training samples further comprise test cases, wherein the multiple cases comprise multiple test cases; and the similarity score is further a composite similarity score.

For claims 3 and 13, Shapira further discloses, wherein the training instructions (Shapira, Training instructions include instructions to translate the English statement to the database query language, Fig.14) include tool instructions (Shapira, The second query of an example query pair instructs the machine learning model on what database query language query, tool, should be output given a natural language query, column 24 lines 28 – 47; column 28 line 52 – column 29 line 20) comprising one or more configurable aspects (Shapira, different example query pairs can be added to the prompt file, column 25 lines 39 – 59) including tools (Shapira, Tools include the functions of database queries specified in the prompt file, including SQL, Cypher, etc., Fig.14, 1422b; column 3 line 65 – column 4 line 16, column 24 lines 28 – 47, column 27 lines 11 – 30), tool descriptions (Shapira, Tool descriptions include natural language representations of the database query language queries specified in the prompt file, including SQL, Cypher, etc., Fig.14, 1424; column 3 line 65 – column 4 line 16, column 24 lines 28 – 47, column 26 lines 28 – 43, column 27 lines 10 – 30), and tool parameters (Shapira, Tool parameters include properties of the database schema specified in the prompt file and used in the database queries, Fig.14, 1412; column 3 line 65 – column 4 line 16, column 24 lines 28 – 47, column 26 line 28 – column 27 line 30).

For claims 5 and 15, Radkoff, Shapira and Alvarez further disclose, determining a value for the composite similarity score that is below a similarity threshold (Shapira, column 29 line 56 – column 30 line 33) (Alvarez, [0042]); and using machine learning to optimize a configuration of at least one of the agent model (Radkoff, [0034] [0035]) (Shapira, column 30 lines 16 – 33), the at least one tool (Shapira, A new prompt comprising a new database query/tool is generated using the prompt file engine. The new prompt is generated based on output predicted by a machine learning model., column 25 lines 39 – 61; column 30 lines 16 – 33), and the at least one resource (Shapira, A new prompt comprising a new database query/resource task is generated using the prompt file engine. The new prompt is generated based on output predicted by a machine learning model., column 25 lines 39 – 61; column 30 lines 16 – 33).

For claims 6 and 16, Radkoff, Shapira and Alvarez further disclose, wherein the operations comprise determining a value for the composite similarity score that satisfies a similarity threshold (Radkoff, [0063] [0064]) (Shapira, column 29 line 57 – column 30 line 15) (Alvarez, [0042]); and deploying the agent model to a production environment (Radkoff, [0063] [0064]) (Shapira, Providing the predicted database query for fetching data interpreted as deploying the model to a production environment, column 30 lines 33 – 37) (Alvarez, [0042]).

For claims 9 and 19, Radkoff and Shapira further disclose, wherein the operations comprise generating, with the re-trained agent model, the response for the action (Radkoff, [0057 – 0062]) (Shapira, column 30 lines 17 – 58); and displaying the response in the conversational UI (Radkoff, [0061] [0062]) (Shapira, column 23 lines 12 – 26).

Claims 2, 7, 8, 12, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Radkoff et al. (US 2023/0342167) ("Radkoff") in view of Shapira (US 12,216,713), and further in view of Alvarez et al. (US 2021/0241038) ("Alvarez"), and further in view of Bui et al. (US 2023/0306061) ("Bui") and further in view of Dettinger et al. (US 2007/0136262) ("Dettinger").

For claims 2 and 12, the combination of Radkoff, Shapira and Alvarez further discloses, wherein the training instructions (Shapira, Training instructions include instructions to translate the English statement to the database query language, Fig.14) include resource instructions (Radkoff, [0069] [0070]) (Shapira, The second query of an example query pair, database language query, instructs the machine learning model on what resource, database, should receive the database language query, column 24 lines 28 – 47; column 25 lines 3 – 12 and 20 – 27, column 28 line 52 – column 29 line 20) comprising one or more configurable aspects including resources (Shapira, databases, column 24 lines 28 – 46; column 26 lines 28 – 43); input formats (Shapira, column 26 line 65 – column 27 line 10); resource tasks (Shapira, Tasks are embodied in database queries, column 27 lines 11 – 29), task parameters (Shapira, properties of nodes as task parameters, column 27 lines 11 – 29).

Yet, the combination of Radkoff, Shapira and Alvarez fails to teach, wherein the aspects comprise output formats and validation instructions. However, Bui discloses a system and method for the purpose of generating a database query (Abstract), wherein a system comprises instructions to validate a query's syntax and logical correctness before being executed on a database ([0067]). Moreover, Dettinger discloses a system and method for processing a database query (Abstract), wherein database queries comprise an indication of an output format ([0039]).
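The validation step attributed to Bui ([0067]), checking a generated query's syntax and logical correctness before it is executed on a database, can be approximated with a dry-run query plan. The sketch below uses SQLite's EXPLAIN as a stand-in mechanism; Bui's actual implementation is not described here, and `validate_query` is a hypothetical name.

```python
import sqlite3


def validate_query(conn: sqlite3.Connection, sql: str) -> bool:
    """Compile (but do not run) a query against the live schema,
    rejecting both syntax errors and references to missing
    tables or columns before the query can touch the database."""
    try:
        conn.execute(f"EXPLAIN {sql}")  # plans the query without executing it
        return True
    except sqlite3.Error:
        return False
```

For example, against a database with a `person(name)` table, `validate_query(conn, "SELECT name FROM person")` passes, while a misspelled keyword or an unknown column is rejected before execution.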
Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Radkoff, Shapira and Alvarez in the same way that Bui's invention has been improved to achieve the predictable results of the resource instructions further comprising instructions to validate the database query, for the purpose of preventing an adverse impact on database performance caused by a logically incorrect query (Bui, [0067]).

Additionally, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Radkoff, Shapira, Alvarez and Bui in the same way that Dettinger's invention has been improved to achieve the predictable results of the resource instructions further comprising an output format for the database query, for the purpose of increasing user satisfaction by enabling a given output format to be defined for a database query, wherein the results of the query may need to be presented in multiple presentation styles, formats or arrangements (Dettinger, [0003]).

For claims 7 and 17, Shapira further discloses, wherein the updating the training prompt comprises modifying at least one of the one or more configurable aspects of the resource instructions (Shapira, Adding a new example query pair as configuring the resource instructions, column 29 line 56 – column 30 line 32).

For claims 8 and 18, Shapira further discloses wherein the updating the training prompt comprises modifying at least one of the one or more configurable aspects of the tool instructions (Shapira, Changing a database query of an example query pair as configuring the tool instructions, column 25 lines 39 – 60).

Claims 4, 10, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Radkoff et al. (US 2023/0342167) ("Radkoff") in view of Shapira (US 12,216,713), and further in view of Alvarez et al. (US 2021/0241038) ("Alvarez") and further in view of Leary et al. (US 2023/0342670).

For claims 4 and 14, the combination of Radkoff, Shapira and Alvarez further discloses, wherein the training prompt includes a chain history for a completed action chain (Radkoff, A user generates a completed action chain, [0057 – 0064]) (Shapira, A user generates database queries which are stored in the prompt file. The database queries can indicate database chaining., column 26 lines 28 – 43; column 28 lines 20 – 50, 60 – 65), the chain history including an action in the completed action chain (Radkoff, [0057 – 0064]) (Shapira, column 26 lines 28 – 43; column 28 lines 20 – 50, 60 – 65), and a prompt for generating a response for the action (Radkoff, [0057 – 0064]) (Shapira, natural language query as prompt, column 26 lines 28 – 43; column 28 lines 20 – 50, 60 – 65).

Yet, the combination of Radkoff, Shapira and Alvarez fails to teach that the history comprises a completion including the response for the action. However, Leary discloses a system and method for providing a pipeline to develop and deploy machine learning models (Abstract), wherein queries and responses generated by a machine learning model for the queries are stored as training data ([0029]).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Radkoff, Shapira and Alvarez in the same way that Leary's invention has been improved to achieve the following, predictable results for the purpose of training and testing the agent model to receive and process natural language input to reliably and accurately query a database using database query language (Shapira, column 1 line 35 – column 2 line 10): the training data comprising the history further comprises a completion including the response for the action.
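The chain history at issue in claims 4 and 14 pairs each action in a completed chain with the prompt used for it and, per the element mapped to Leary ([0029]), the completion. A minimal sketch of such a record follows; all names (`ChainHistory`, `record`, `as_training_prompt`) are hypothetical illustrations, not structures from any cited reference.

```python
from dataclasses import dataclass, field


@dataclass
class ChainHistory:
    """Record of a completed action chain: each entry holds the
    action, the prompt used to generate its response, and the
    completion (the response itself)."""
    entries: list = field(default_factory=list)

    def record(self, action: str, prompt: str, completion: str) -> None:
        self.entries.append(
            {"action": action, "prompt": prompt, "completion": completion}
        )

    def as_training_prompt(self) -> str:
        # Serialize the history for inclusion in a later training prompt,
        # along the lines of claims 10 and 20 as mapped.
        return "\n".join(
            f"{e['action']} | {e['prompt']} -> {e['completion']}"
            for e in self.entries
        )
```

The serialization format here is arbitrary; the point is only that the completion travels with the action and prompt as training data.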
For claims 10 and 20, the combination of Radkoff, Shapira and Alvarez further discloses, wherein the operations comprise generating, with the re-trained agent model, a response for a first action in the action chain (Radkoff, [0057 – 0062]) (Shapira, column 30 lines 17 – 58); generating a second training prompt including the user request (column 27 lines 51 – 65; column 29 lines 1 and 2; column 30 line 59 – column 31 line 3), new training instructions ("Translate these English statements to the cypher graph query language", which is provided in the prompt file. The prompt file is provided as input for a machine learning model during training/fine-tuning, Fig.14; column 26 lines 28 – 42, column 29 lines 1 – 15, column 30 line 59 – column 31 line 8), and one or more configurations (schema/parameters) for at least one of the at least one new resource (database) (The database schema, e.g. nodes, is provided in the prompt file. The prompt file is provided as input for a machine learning model during training/fine-tuning, Fig.14, 1412; column 26 line 65 – column 27 line 10, column 29 lines 1 – 15, column 30 line 59 – column 31 line 8) and at least one new tool (database query language) (The accepted parameters/properties, e.g. Person.name, for the database query language are provided in the prompt file. The prompt file is provided as input for a machine learning model during training/fine-tuning, Fig.14, 1412; column 26 line 65 – column 27 line 10, column 29 lines 1 – 15, column 30 line 59 – column 31 line 8); training a second agent model using the second training prompt (Radkoff, [0057 – 0062]) (Shapira, column 29 line 50 – column 30 line 58); and generating, with the second agent model, a response for a second action in the action chain (Radkoff, [0057 – 0062]) (Shapira, column 30 lines 33 – 58). Yet, the combination of Radkoff, Shapira and Alvarez fails to teach that the training prompt includes a chain history including a record of the first action.
However, Leary discloses a system and method for providing a pipeline to develop and deploy machine learning models (Abstract), wherein queries and responses generated by a machine learning model for the queries are stored as training data ([0029]). Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Radkoff, Shapira and Alvarez in the same way that Leary's invention has been improved to achieve the following, predictable results for the purpose of training and testing the agent model to receive and process natural language input to reliably and accurately query a database using database query language (Shapira, column 1 line 35 – column 2 line 10): the training data/prompt further comprises a chain history including a record (response) of a first action.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SONIA L GAY, whose telephone number is (571) 270-1951. The examiner can normally be reached Monday-Friday, 9-5 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SONIA L GAY/
Primary Examiner, Art Unit 2657

Prosecution Timeline

May 13, 2024: Application Filed
Dec 27, 2025: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602617: DATA MANUFACTURING FRAMEWORKS FOR SYNTHESIZING SYNTHETIC TRAINING DATA TO FACILITATE TRAINING A NATURAL LANGUAGE TO LOGICAL FORM MODEL (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602408: STREAMING OF NATURAL LANGUAGE (NL) BASED OUTPUT GENERATED USING A LARGE LANGUAGE MODEL (LLM) TO REDUCE LATENCY IN RENDERING THEREOF (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602539: PROACTIVE ASSISTANCE VIA A CASCADE OF LLMS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596708: SYSTEMS AND METHODS FOR AUTOMATED CODE GENERATION FOR CALCULATION BASED ON ASSOCIATED FORMAL SPECIFICATIONS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591604: INTELLIGENT ASSISTANT (granted Mar 31, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+11.4%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 855 resolved cases by this examiner. Grant probability derived from career allow rate.
