Prosecution Insights
Last updated: April 19, 2026
Application No. 18/539,067

SYSTEM AND METHOD OF LLM TRAINING FOR NETWORK STATE CONVERSATIONAL ANALYTICS

Final Rejection: §103, §112
Filed: Dec 13, 2023
Examiner: LE, JESSICA N
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: Aviz Networks, Inc.
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (366 granted / 504 resolved; +17.6% vs TC avg)
Interview Lift: +28.6% among resolved cases with interview
Typical Timeline: 3y 11m average prosecution; 21 applications currently pending
Career History: 525 total applications across all art units

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 504 resolved cases
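Incidentally, the four deltas are mutually consistent: subtracting each delta from its rate yields the same 40% Tech Center average for every statute. A quick check; note the 40% figure is inferred from the listed deltas, not stated explicitly above:

```python
# Per-statute rates and deltas vs. the Tech Center average, as listed above.
rates = {"101": 18.0, "103": 48.8, "102": 12.8, "112": 12.2}
deltas = {"101": -22.0, "103": 8.8, "102": -27.2, "112": -27.8}

# Each statute implies the same Tech Center average estimate.
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every value is 40.0
```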

Office Action

§103, §112
DETAILED ACTION

This communication is responsive to the claim amendment filed on 09/25/2025. This application is a continuation-in-part (CIP) of Application No. 18/504,991, filed on 11/08/2023. Claims 1, 6, and 11 are independent claims. Claims 1, 4, 6, 8, 11, and 13 are amended. Claims 1-15 are pending in this application. This Action has been made FINAL.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding independent claims 1, 6, and 11, the claims recite the amended feature "the prompt" in line 5 of claim 1, line 8 of claim 6, and line 6 of claim 11. There is insufficient antecedent basis for this limitation in the claims. Claims 2-5, 7-10, and 12-15 are also rejected due to their dependency on claims 1, 6, and 11, respectively.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 6, 8, 11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al., US Patent No. 10,546,001 (hereinafter "Nguyen"), in view of Mengke et al., US Pub. No. 2025/0148400 A1 (hereinafter "Mengke").

Regarding claim 1, Nguyen teaches a method, comprising:

receiving, by one or more agent applications, a query from a user (see Fig. 6, elements 140 and 620; Col. 9, lines 50-55: "The user interaction module 240 allows a user to interact with the big data analysis system using natural language queries. The user interaction module 240 may provide a user interface to a user via a web browser or via some custom client applications. The user interaction module 240 receives natural language queries provided by users"; and Col. 19, lines 50-56: "The user interface module 240 receives the natural language query from the client application 140. The user interface module 240 generates 650 an execution plan for executing the natural language query. In an embodiment, the execution plan comprises calls to one or more APIs (application programming interface) of the analytics framework 230 and/or distributed data framework 200…");

providing a dataframe to the one or more agent applications (Col. 8, lines 44-60, where "different vendors and support different times of interfaces" is interpreted as the agent application(s), and which discloses providing a "distributed data-frame (DDF)" by the distributed data framework; and Col. 9, lines 15-21: "the distributed data framework 200 supports SQL (structured query language) queries, data table filtering, projections, group by, and join operations based on distributed data-frames. The distributed data framework 200 provides transparent handling of missing data, APIs for transformation of data, and APIs providing machine-learning features based on distributed data-frames…");

appending a predetermined number of initial entries from the dataframe (Col. 29, lines 21-29, where the "predefined list '10', '20', '30' and so on" is interpreted as the predetermined number of initial entries) to a suffix of the prompt (Col. 18, line 11, e.g., "a suffix of the nature language query"); and

constructing a standardized prompt template, wherein the query is embedded within the standardized prompt template (Fig. 9, showing the prompt template with the query/queries embedded in the template; Col. 25, lines 50-58; and further Col. 28, lines 43-67, and Col. 29, lines 1-20: "…The natural language query processor 310 supports query template that match queries that request statistics of a particular attribute of the dataset. For example, the natural language query processor 310 supports queries of the form 'what is the standard deviation of X?' The query template for such queries comprises the following components: (1) the phrase 'what is the', (2) a phrase pattern of a statistical operation belonging from a predefined list, for example, 'max', 'min', 'average', 'median', 'standard deviation', '25th-75th quartiles', or 'distribution.' (3) the keyword 'of', and (4) an attribute (or user defined metric.) The query template allows an optional extension of a filter clause").

Nguyen does not explicitly teach: "configured to instruct one or more Large Language Models (LLMs) to generate executable code snippet for analyzing the dataframe to resolve the user's query;" "channeling the standardized prompt template to one or more Large Language Models (LLMs); utilizing a Generative Pre-trained Transformer (GPT) Application Programming Interface (API) to generate a generated code snippet; executing the generated code snippet on the dataframe to create a resulting data analysis output; and relaying the resulting data analysis output from the execution of the generated code snippet back to the user as a direct response to the query."

In the same field of endeavor (i.e., data processing), Mengke teaches:

configured to instruct one or more Large Language Models (LLMs) to generate executable code snippet for analyzing the dataframe to resolve the user's query (see Fig. 1, element 103, the Large Language Model (LLM); Fig. 3, element 308; and Abstract: "generate a prompt for a Large Language Model (LLM) comprising the user activity data and instructing the LLM to generate a report based on the user activity data, and submit the prompt to the LLM and receive the report generated by the LLM");

channeling the standardized prompt template to one or more Large Language Models (LLMs) (see Fig. 3, element 310, e.g., submitting the prompt to the LLM (e.g., a GPT model); Abstract: "…submit the prompt to the LLM and receive the report generated by the LLM"; and par. [0099]: "a prompt template for the prompt to the LLM, wherein the API is to access the prompt template to generate the prompt for the LLM");

utilizing a Generative Pre-trained Transformer (GPT) Application Programming Interface (API) to generate a generated code snippet (see Fig. 5, element 152, showing the GPT API with the generated code snippet; par. [0036]: "the API submits the prompt to the LLM 310, as described above. In various examples, the LLM may be a GPT model."; par. [0060]: "The grid service 153 will also include a cloud provisioning service 153 to deploy or utilize a GPT model on the cloud computing platform 156. The grid service 153 thus securely handles the prompt to the GPT model and the response from the GPT model back to the browser 150 of the administrator."; and par. [0057], e.g., "a summary icon or similarly labeled button to initiate the AI assistant. This invokes an administrator portal 141 and the API configured to obtain an LLM or GPT summary…");

executing the generated code snippet on the dataframe to create a resulting data analysis output (see Fig. 5, showing the dataframe, and Fig. 6, showing the resulting data analysis output; pars. [0049-51], teaching data analysis; par. [0076]: "the frameworks 718 (also sometimes referred to as middleware) provide a higher-level common infrastructure that may be used by the applications 720 and/or other software modules. For example, the frameworks 718 may provide various graphic user interface (GUI) functions, high-level resource management, or high-level location services. The frameworks 718 may provide a broad spectrum of other APIs for applications 720 and/or other software modules"; and par. [0077], wherein the framework 718 embeds the API dataframe); and

relaying the resulting data analysis output from the execution of the generated code snippet back to the user as a direct response to the query (see Figs. 3 and 5; par. [0032], e.g., "The result is a report in real time that provides the administrator with an accurate view of what is happening on the collaboration system and what actions might be needed to better administer the system. Alternatively, collection of the activity data and the prompt to the LLM or a summarization report could be performed automatically, for example, on a periodic basis. In such an example the resulting report could be offered to the administrator when available without the administrator having to initiate the AI assistant feature to produce the report as described herein"; and further pars. [0035] and [0073]).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of the cited references, because the teachings of Mengke would have provided Nguyen with the above-indicated limitations and given a skilled artisan motivation to construct and use the prompt template to generate the code snippet via a GPT API and deliver the resulting data analysis output to the user (Mengke: Figs. 1-4; Abstract; and pars. [0029], [0035], and [0057]).

Regarding claim 4, Mengke teaches "receiving processed data in CSV format" (par. [0054]: "the data provided to the LLM needs to be prepared in a way that is compatible with the model. This includes being aware of the token limit and converting the data into a format like JSON or CSV to string.").

Claims 6, 8, 11, and 13 are rejected for the same reasons given in the analysis of claims 1 and 4 above.

Claims 2, 7, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen and Mengke, and further in view of Blum et al., US Pub. No. 2025/0068667 A1 (hereinafter "Blum").

Regarding claim 2, the claim is rejected for the same reasons set forth above for claim 1. However, Nguyen and Mengke do not explicitly teach "specifying one or more pandas commands to execute in order to formulate an appropriate response to the query."

In the same field of endeavor (i.e., data processing), Blum teaches specifying one or more pandas commands to execute in order to formulate an appropriate response to the query (Figs. 8-9; par. [0045]: "the expressions are Python® expressions. Consequently, each expression is effectively a code snippet that can be executed in a suitable Python environment to provide the output. Particularly, the expressions are pandas expressions. Pandas (https://pandas.pydata.org/) is a Python library for data analysis and manipulation of tabular data. The expressions may also comprise JSON query expressions", which implements the specified pandas commands; par. [0053]: "The pandas library discussed above may be employed for this purpose, with the input data being instantiated as a pandas DataFrame"; and pars. [0054], [0058], and [0080]: "the input data is tabular data, for example extracted from databases, in pandas format. The expressions can then also be pandas expressions. However, a wide variety of input data and corresponding query languages for the expressions may be implemented. For example, in respect of tabular data Excel® formulas could be used as the expressions or JSON query expressions could be used…").

Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of the cited references, because the teachings of Blum would have provided Nguyen and Mengke with the above-indicated limitation and given a skilled artisan motivation to use specific pandas commands to formulate the appropriate response to the query.

Claims 7 and 12 are rejected for the same reasons given in the analysis of claim 2 above.

Claims 3, 9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen and Mengke, and further in view of Lee et al., US Pub. No. 2025/0147999 A1 (hereinafter "Lee").

Regarding claim 3, the claim is rejected for the same reasons set forth above for claim 1. However, Nguyen and Mengke do not explicitly teach "wherein the relaying is done post-execution."

Lee teaches "wherein the relaying is done post-execution" (par. [0045]: "The resulting sequence of output tokens 216 can then be converted to a text sequence in post-processing. For example, each output token 216 can be an integer number that corresponds to a vocabulary index. By looking up the text segment using the vocabulary index, the text segment corresponding to each output token 216 can be retrieved, the text segments can be concatenated together, and the final output text sequence can be obtained", where the "post-processing" is interpreted as post-execution).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of the cited references, because the teachings of Lee would have provided Nguyen and Mengke with the above-indicated limitation and given a skilled artisan motivation to perform the post-processing/execution to deliver the final output/result to the user application(s) (Lee: Figs. 5-7 and par. [0045]).

Claims 9 and 14 are rejected for the same reasons given in the analysis of claim 3 above.

Claims 5, 10, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen, Mengke, and Lee, and further in view of Morris, US Pub. No. 2025/0111143 A1 (hereinafter "Morris").

Regarding claim 5, the claim is rejected for the same reasons set forth above for claims 1 and 3. Furthermore, Mengke teaches "the temperature" (par. [0055]: "In some LLMs, this parameter is referred to as the temperature. Consequently, the LLM prompt 131 may be accompanied by a temperature setting of about 0.2 and no more than 0.4. This will ensure that the generated insights are accurate and relevant."). However, Nguyen, Mengke, and Lee do not explicitly teach "setting a temperature of the one or more large language models to zero."

In the same field of endeavor (i.e., data processing), Morris teaches setting a temperature of the one or more large language models to zero (see par. [0042]: "the response generation module 102 may provide a request to a large language model 104 to ask whether the prompt relates to contemporaneous data. Using ChatGPT as an example of the LLM, this may be implemented using a prompt which directly requests whether the query relates to contemporaneous data. This request may utilise a zero-temperature setting within ChatGPT which asks for a purely deterministic yes/no answer. LLM 104 may also be a private LLM"; and further par. [0069], e.g., "uses temperature setting of 0, i.e. zero").

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of the cited references, because the teachings of Morris would have provided Nguyen, Mengke, and Lee with the above-indicated limitation and given a skilled artisan motivation to make question answering with large language models (LLMs), such as ChatGPT, more efficient.

Claims 10 and 15 are rejected for the same reasons given in the analysis of claim 5 above.

Response to Arguments

Referring to the claim objections: the objections have been withdrawn in view of the amendment to claims 1, 4, 6, 8, 11, and 13.

Referring to the claim rejections under 35 U.S.C. 103: Applicant's arguments regarding the amended limitations of claim 1 (see Remarks, pages 6-8) have been fully considered, but are moot in view of the new grounds of rejection necessitated by Applicant's amendment to the claims. Applicant's newly amended features are taught expressly or implicitly by the prior art of record. Please see the rejections set forth above for details.

Furthermore, per Applicant's argument regarding the limitation recited in dependent claim 3 (see Remarks, page 8), Examiner respectfully submits that Lee teaches the instructed large language models (LLMs) (see par. [0040]) in the trained/pretrained machine learning model (see par. [0043]) with the "post-processing" steps to generate the text output or response as the result (see pars. [0045-46]). Since the claim does not require any particular "post-execution," the "post-processing" of Lee matches under the broadest reasonable interpretation. See MPEP 2111 (Claim Interpretation).

Finally, per Applicant's argument regarding the limitation recited in dependent claim 5 (see Remarks, page 9), Examiner respectfully submits that Mengke teaches "a temperature setting of about 0.2 and no more than 0.4" (see par. [0055]). However, Nguyen, Mengke, and Lee do not explicitly teach setting a temperature of the one or more large language models (LLMs) to "zero." Thus, in the same field of endeavor (i.e., data processing), Morris clearly teaches "utilise a zero-temperature setting within ChatGPT" as an example of the large language models (LLMs) (see par. [0042]) with the "temperature setting of 0, i.e., zero" (see par. [0069]). For at least the above reasons, the rejections of dependent claims 3 and 5 are maintained.

Prior Art

The prior art made of record on form PTO-892 and not relied upon is considered pertinent to Applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)); Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jessica N. Le, whose telephone number is (571) 270-1009. The examiner can normally be reached M-F, 9:30 am - 5:30 pm (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sherief Badawi, can be reached at (571) 272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jessica N Le/
Examiner, Art Unit 2169

/SHERIEF BADAWI/
Supervisory Patent Examiner, Art Unit 2169
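The claimed pipeline, as recited in claim 1 and its dependents, embeds the user's query in a standardized prompt template, appends the first rows of the dataframe (CSV-formatted, per claim 4), asks an LLM at temperature zero (per claim 5) for a pandas expression, executes it, and relays the result. A minimal illustrative sketch of that flow; the template wording, function names, and the `call_llm` stub are hypothetical and not taken from the application:

```python
import pandas as pd

# Hypothetical prompt template; the application's actual template is not public.
PROMPT_TEMPLATE = (
    "You are given a pandas DataFrame named `df`.\n"
    "Question: {query}\n"
    "Reply with a single pandas expression that answers the question.\n"
    "First {n} rows of df (CSV):\n{head_csv}"
)

def build_prompt(query: str, df: pd.DataFrame, n: int = 5) -> str:
    # Embed the query and append a predetermined number of initial
    # dataframe entries, CSV-formatted, to the prompt's suffix.
    return PROMPT_TEMPLATE.format(
        query=query, n=n, head_csv=df.head(n).to_csv(index=False)
    )

def answer(query: str, df: pd.DataFrame, call_llm) -> object:
    # `call_llm` stands in for a GPT-style completion API invoked with
    # temperature=0 for deterministic output; it is a hypothetical stub.
    snippet = call_llm(build_prompt(query, df))
    # Execute the generated snippet against the dataframe and relay the
    # result. eval() of model output is for illustration only; a real
    # system must sandbox or validate the generated code.
    return eval(snippet, {"df": df, "pd": pd})

# Usage with a canned "LLM" that always returns one pandas expression:
df = pd.DataFrame({"iface": ["eth0", "eth1", "eth2"], "errors": [0, 7, 3]})
fake_llm = lambda prompt: "df.loc[df['errors'].idxmax(), 'iface']"
print(answer("Which interface has the most errors?", df, fake_llm))  # -> eth1
```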

Prosecution Timeline

Dec 13, 2023: Application Filed
Jun 22, 2025: Non-Final Rejection — §103, §112
Sep 25, 2025: Response Filed
Oct 15, 2025: Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585711: SYSTEMS AND METHODS FOR WEB SCRAPING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12554704: STALE DATA RECOGNITION (granted Feb 17, 2026; 2y 5m to grant)
Patent 12475100: USING AD-HOC STORED PROCEDURES FOR ONLINE TRANSACTION PROCESSING (granted Nov 18, 2025; 2y 5m to grant)
Patent 12450225: DYNAMICALLY LIMITING THE SCOPE OF SPREADSHEET RECALCULATIONS (granted Oct 21, 2025; 2y 5m to grant)
Patent 12393604: SYSTEMS AND METHODS FOR PREVENTING DATABASE DEADLOCKS DURING SYNCHRONIZATION (granted Aug 19, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 99% (+28.6%)
Median Time to Grant: 3y 11m
PTA Risk: Moderate
Based on 504 resolved cases by this examiner. Grant probability derived from career allow rate.
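The headline grant probability follows directly from the career counts quoted throughout this page (366 granted of 504 resolved); a one-line check using only those stated numbers:

```python
# Career allow rate from the examiner's stated counts: 366 granted / 504 resolved.
granted, resolved = 366, 504
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # -> 72.6%, reported as 73%
```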
