Prosecution Insights
Last updated: April 19, 2026
Application No. 18/535,122

THREAT INTELLIGENCE DIALOGUE SYSTEM FOR INTERFACING WITH A PROPRIETARY THREAT INTELLIGENCE DATABASE

Final Rejection §103
Filed: Dec 11, 2023
Examiner: SMITH, SEAN THOMAS
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Palo Alto Networks Inc.
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% — above average (5 granted / 6 resolved; +21.3% vs TC avg)
Interview Lift: +33.3% (strong) among resolved cases with interview
Avg Prosecution: 2y 8m typical timeline (37 currently pending)
Total Applications: 43 across all art units
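The examiner statistics above reduce to simple ratio arithmetic over resolved cases. A minimal sketch, assuming hypothetical case records: the report gives only "5 granted / 6 resolved" and a +33.3% lift, so the per-interview split below is illustrative, not from the report.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Fraction of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

# Illustrative split consistent with "5 granted / 6 resolved";
# which cases had interviews is an assumption, not reported data.
cases = [
    ResolvedCase(granted=True,  had_interview=True),
    ResolvedCase(granted=True,  had_interview=True),
    ResolvedCase(granted=True,  had_interview=True),
    ResolvedCase(granted=True,  had_interview=False),
    ResolvedCase(granted=True,  had_interview=False),
    ResolvedCase(granted=False, had_interview=False),
]

career = allow_rate(cases)                                       # 5/6 ≈ 0.833
with_iv = allow_rate([c for c in cases if c.had_interview])      # 3/3 = 1.0
without = allow_rate([c for c in cases if not c.had_interview])  # 2/3 ≈ 0.667
lift_points = (with_iv - without) * 100                          # ≈ 33.3 pts
print(f"career allow rate: {career:.1%}, interview lift: +{lift_points:.1f} pts")
```

Under this reading, the "+33.3% interview lift" is a percentage-point gap between the with-interview and without-interview allow rates; with only six resolved cases, these figures carry wide error bars.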

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 6 resolved cases
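Each "vs TC avg" delta above is just the examiner's per-statute allow rate minus the estimated Tech Center average, so the implied baseline can be recovered directly from the figures shown (a sketch; variable names are mine):

```python
# Per-statute allow rates and reported deltas (percentage points)
# taken from the Statute-Specific Performance panel above.
examiner = {"§101": 27.9, "§103": 50.7, "§102": 12.9, "§112": 8.6}
delta_vs_tc = {"§101": -12.1, "§103": +10.7, "§102": -27.1, "§112": -31.4}

# The implied Tech Center average is the examiner rate minus its delta.
tc_avg = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
print(tc_avg)  # {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}
```

Notably, all four deltas resolve to the same 40.0% baseline, consistent with the panel's note that the Tech Center average is a single estimate rather than a per-statute figure.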

Office Action

§103
DETAILED ACTION

This Office Action is responsive to amendments and arguments filed on December 5th, 2025. Claims 1-2, 4, 7-8, 10, 12-14 and 16-19 are amended; claims 1-20 are pending; hence, this action has been made FINAL. Any objections/rejections not mentioned in this Office Action have been withdrawn by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on December 17th, 2023, May 21st, 2025 and December 5th, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Arguments

With respect to rejections made under 35 U.S.C. 102(a)(1), Applicant argues, “Tangari fails to disclose all elements of claims 1, 12, and 16 as amended, particularly the aspects of the amended claims pertaining to validating database queries generated by a language model and evaluating performance of a language model based on results of validating the database queries” (page 10 of Remarks). Applicant’s argument is moot, as new grounds for rejection are raised in view of U.S. Patent Application Publication 2025/0110948 to Nguyen et al. Further details are provided below.

With respect to rejections made under 35 U.S.C. 103, Applicant argues, “Claims 9 and 11 are allowable at least for the reasons given for claim 1 and because Jackson fails to cure the deficiencies of Tangari” (page 11 of Remarks). Applicant’s argument is moot in view of new grounds for rejection raised under Nguyen. Further details are provided below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8, 10 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2024/0061834 to Tangari et al. (hereinafter, "Tangari") in view of U.S. Patent Application Publication 2025/0110948 to Nguyen et al. (hereinafter, "Nguyen").

Regarding claims 1, 12 and 16, Tangari teaches an apparatus, machine-readable instructions, and a method comprising: generating a plurality of database queries that represent a plurality of utterances with a first language model (paragraph [0124], "FIG. 5 shows a C2OMRL system 500 is powered by a machine learning model to be able to convert a natural language (NL) utterance (e.g., an utterance within the Digital Assistant platform as described with respect to FIGS.
1-3) into a logical form (LF) statement such as OMRL query or command, which in turn can be executed for querying an existing system such as a relational database."), wherein each of the plurality of database queries is to be executed against a database offered by a cybersecurity provider that maintains threat intelligence information obtained by the cybersecurity provider, wherein generating the plurality of database queries comprises, for each utterance of a plurality of utterances, running the first language model on a prompt comprising the utterance and obtaining a corresponding one of the plurality of database queries representing the utterance as a response to the prompt (paragraph [0131], "The training stage 605 builds and trains one or more machine learning models 625a-625n (‘n’ represents any natural number) to be used by the other stages (which may be referred to herein individually as a model 625 or collectively as the models 625). For example, the models 625 can include a first model for translating a natural language utterance to a logical form in an intermediate database query language and second model for translating the logical form to a particular database query language."); and deploying the first language model for interfacing with a dialogue system based on determining that performance of the first language model is satisfactory (paragraph [0088], "(7) Testing and deploying the skill bot— DABP 102 provides several features that enable the skill bot designer to test a skill bot being developed. The skill bot can then be deployed and included in a digital assistant.").

Tangari discloses, at paragraph [0141], "As should be understood, other training/validation mechanisms are contemplated and may be implemented within the model system 600.
For example, the model 625 may be trained and model parameters may be tuned on datasets from a subset of obtained or filtered datasets and the datasets from a subset of obtained or filtered datasets may only be used for testing and evaluating performance of the model 625," but does not explicitly teach “validating the plurality of database queries, wherein validating the plurality of database queries comprises, for each database query of the plurality of database queries, submitting the database query to the database,” or “determining if the database query is valid based on at least one of a result of submitting the database query and a schema of the database,” and thus, Nguyen is introduced.

Nguyen teaches validating the plurality of database queries, wherein validating the plurality of database queries comprises, for each database query of the plurality of database queries, submitting the database query to the database (paragraph [0062], "In some implementations, for a query instruction set 204 to be complete, the query instruction set may be tested or otherwise verified to ensure that the sequence of query instructions are invalid or do not otherwise generate any errors."); determining if the database query is valid based on at least one of a result of submitting the database query and a schema of the database (paragraph [0062], "For example, when the system 234 identifies that the query instructions to perform the requests 202 have been generated and stored in the query instruction file 220, the system 234 may test the query instruction set in the query instruction file 220 by applying the verify tool to the query instructions in a sandbox based on the table description 232 to impersonate a data warehouse to execute the query instructions."); and evaluating performance of the first language model based on results of validating the plurality of database queries (paragraph [0062], "If a query instruction set is found invalid by the verify tool 218, the system 234
is to continue using the generative AI model 226 and the database query tools 210 to update the query instruction set and thus correct the query instructions for errors.").

Tangari and Nguyen are considered analogous because they are each concerned with database query generation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have substituted the sandbox validation method of Nguyen for the a priori validation method of Tangari for the purpose of model evaluation, given that the substitution of one known element for another yields predictable results.

Regarding claims 2, 13 and 17, Tangari teaches the apparatus, machine-readable instructions, and a method wherein determining if the database query is valid comprises at least one of determining if the result of submitting the database query indicates if the database query was able to execute and determining if the database query is consistent with the schema of the database (paragraph [0062], "The verify tool 218 may be a debugger to attempt to identify location errors (such as nonexisting tables to be called or nonexisting cells to be retrieved (such as based on specific rows or columns not existing)) or operation errors (such as arithmetic errors, call errors, and so on). The verify tool 218 may also be configured to verify that the query instruction set does not include unwanted operations that modify the data warehouse (such as load content or save instructions to modify a data table or otherwise modify data stored in the data warehouse).").
Regarding claims 3, 14 and 18, Tangari teaches the apparatus, machine-readable instructions, and a method further comprising, based on a determination that the database query is valid, assigning a score to the database query indicating that the database query is valid (paragraph [0175], "At step 806, based on the input string, the first machine learning model generates a score indicating whether the natural language utterance should be routed to a second machine learning model. The score may, for example, be a score between 0.0 and 1.0 that indicates the predicted translatability of the utterance by C2OMRL."); and based on a determination that the database query is not valid, assigning a score to the database query indicating that the database query is invalid (paragraph [0176], "At step 808, the score is compared to a threshold value. The threshold value may be configured to a suitable value, such as 0.7 or 0.6 or 0.5 or 0.8. In some examples, the threshold value is set to a default value (e.g., 0.7) which can be configured by a user.").
Regarding claims 4, 15 and 19, Tangari teaches the apparatus, machine-readable instructions, and a method wherein evaluating performance of the first language model based on the results of validating the plurality of database queries comprises evaluating scores assigned to the plurality of database queries, wherein determining that performance of the first language model is satisfactory comprises that the scores assigned to the plurality of database queries satisfy one or more criteria (paragraph [0140], "Once the optimal set of model parameters are obtained, a reserved test set of data from the validating datasets are input into the trained model 625 to obtain output, and the output is evaluated versus ground truth values using correlation techniques such as Bland-Altman method and the Spearman's rank correlation coefficients and calculating performance metrics such as the error, accuracy, precision, recall, receiver operating characteristic curve (ROC), etc. In some instances, the obtaining, training, and validating data processes in the model system 600 can be repeatedly performed (adjusted) by the model trainer 640 until a predetermined condition is satisfied and a final set of model parameters can be provided by the model trainer 640.").

Regarding claim 5, Tangari teaches The method of claim 2 further comprising: based on determining that the database query cannot execute, attempting to fix the database query to generate a fixed database query; evaluating a response to submitting the fixed database query to the database (paragraph [0161], "In some embodiments, to improve the DOCS model 702 robustness, the input 710 is further augmented to include synonyms for column/table names. For example, “invoice num” is augmented to include the synonym “invoice number,” and “supplier” is augmented to include the synonym “vendor”.
This helps the DOCS model 702 to understand that the NL utterance 712 is relevant to the DB schema 714 if the user mentions a column/table name in the utterance using a synonym instead of the original name."); based on determining from the response that the fixed database query is valid, assigning a score to the fixed database query indicating that the fixed database query is valid (paragraph [0165], "The translatability head 706 may further output a label. The label indicates whether the utterance is translatable and should be handled by the C2OMRL semantic parser. The label is associated with the translatable score 722. For example, if the translatable score exceeds a threshold, then the utterance is labeled as translatable, and if the translatable score does not exceed the threshold, then the utterance is labeled as not translatable."); and based on determining from the response that the fixed database query is not valid, assigning a score to the fixed database query indicating that the fixed database query is invalid (paragraph [0165], "The translatability head 706 may further output a label. The label indicates whether the utterance is translatable and should be handled by the C2OMRL semantic parser. The label is associated with the translatable score 722. For example, if the translatable score exceeds a threshold, then the utterance is labeled as translatable, and if the translatable score does not exceed the threshold, then the utterance is labeled as not translatable.").
Regarding claims 6 and 20, Tangari teaches the apparatus and method further comprising, based on deployment of the first language model, based on obtaining a first utterance input to the dialogue system, running the first language model on a prompt comprising the first utterance to generate a database query representing the first utterance (paragraph [0143], "In some instances, the trained NL2LF model 665 performs one or more semantic parsing tasks to generate the prediction based on the features extracted from the input data 655. The NL2LF translation stage 610 outputs a logical form 660 which can be used in the query execution stage 620."); submitting the database query to the database to obtain a query result; and constructing a response to the first utterance based on the query result (paragraph [0144], "The query execution stage 620 comprises one or more executors configured for executing the logical form 660 on a system such as database 680 to obtain a result 685 (e.g., an answer to a query within natural language utterances(s)).").

Regarding claim 7, Tangari teaches The method of claim 6 further comprising evaluating the first utterance to determine whether to run the first language model with the first utterance based, at least in part, on determining intent of the first utterance, wherein evaluating the first utterance comprises at least one of determining if the intent of the first utterance relates to threat intelligence and determining if the first utterance comprises an injection attack (paragraph [0131], "For example, the models 625 can include a first model for translating a natural language utterance to a logical form in an intermediate database query language and second model for translating the logical form to a particular database query language. The models 625 further include an DOCS model that identifies out-of-domain, out-of-scope, and confusion span utterances, as described herein.
Still other types of prediction models (e.g., an intent classifier) may be implemented in other examples according to this disclosure.").

Regarding claim 8, Tangari teaches The method of claim 7, wherein determining the intent of the first utterance comprises running a second language model on a prompt comprising the first utterance, wherein running the first language model with the first utterance comprises running the first language model with the first utterance based on determining that the intent relates to threat intelligence (paragraph [0154], "As shown in FIG. 7, the system 700 includes an DOCS model 702 (also referred to as a first machine learning model). The DOCS model 702 is configured to determine whether input 710 is translatable to a database query for a particular database. The DOCS model 702 can be trained by the model trainer 640 described above with respect to FIG. 6. The system 700 further includes a C2OMRL semantic parser 735 (also referred to as a second machine learning model). If the input 710 is deemed translatable by the DOCS model 702, then the input 710 will be provided to the C2OMRL semantic parser 735.").

Regarding claim 10, Tangari teaches The method of claim 1, wherein the first language model comprises a large language model (LLM) (paragraph [0156], "The DOCS model 702 is a deep-learning model that includes three main components, according to various embodiments: initial encoder 704, translatable head 706, and confusion span head 708.
The initial encoder 704 is a transformer-based pre-trained language model (PLM) used to embed the input 710 by capturing a representation of the NL utterance 712, the DB schema 714, and the NER matches 716 contextually."), wherein generating the plurality of database queries comprises generating the plurality of database queries based on prompt engineering of prompts to the LLM or fine-tuning of the LLM (paragraph [0168], "The confusion span 724 can be provided to a user or component for identifying problematic portions of the utterance. For example, the confusion span 724 is provided in a debug log. As another example, confusion span 724 is provided in, or used to determine, part of a dialog for adjusting the NL utterance 712. For example, the NL utterance 712 asks for employee birthdates, and the database at issue does not have a column for employee birthdates but does have a column for employee hire dates. The confusion span 724 can be used to generate an output to prompt the user to adjust their NL utterance 712 for properly formulating a database query, such as “Employee birthdates are not available, are you interested in hire dates?”.").

Claims 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Tangari and Nguyen as applied to claims 1 and 7 above, further in view of U.S. Patent Application Publication 2025/0103715 to Jackson (hereinafter, "Jackson").

Regarding claim 9, the combination of Tangari and Nguyen does not explicitly teach “The method of claim 7, wherein determining if the first utterance comprises an injection attack comprises determining if the first utterance comprises a cross-site-scripting (XSS) attack or a prompt injection-based attack and responding to the first utterance without running the first language model based on determining that the first utterance comprises an XSS attack or a prompt injection-based attack,” and thus, Jackson is introduced.
Jackson teaches determining if the first utterance comprises an injection attack comprises determining if the first utterance comprises a cross-site-scripting (XSS) attack or a prompt injection-based attack and responding to the first utterance without running the first language model based on determining that the first utterance comprises an XSS attack or a prompt injection-based attack (paragraph [0034], "The various embodiments disclosed herein seek to prevent prompt injection attacks in LLM systems, such as chatbots or virtual assistants. The various embodiments disclosed herein provide systems and methods for preventing prompt injection attacks in LLM systems (e.g., chatbots/virtual assistant applications) by applying a pre-filtering evaluation to incoming prompts, and a post-filter evaluation to responses generated by the model. Specifically, upon receiving an incoming prompt from an external source, the prompt may be evaluated by a pre-filter module to determine whether it is likely to be malicious/unsafe. Such evaluation may use techniques such as direct string matching of known malicious content, natural language processing, machine learning, or other anomaly detection to identify characteristics common to known malicious prompts, including subjecting the prompt to the main LLM. In some embodiments, in instances in which the prompt is determined to be unsafe or potentially unsafe, the system may reject the prompt and prevent the LLM from executing instructions contained therein.").

Tangari, Nguyen and Jackson are considered analogous because they are each concerned with language model prompt screening. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Tangari and Nguyen with the teachings of Jackson for the purpose of hardening a deployed language model.
In view of design incentives or other market forces, improving a language model for cybersecurity would have been predictable.

Regarding claim 11, the combination of Tangari and Nguyen does not explicitly teach “The method of claim 1, wherein the database offered by the cybersecurity provider maintains at least one of Common Vulnerabilities and Exposures (CVE) data enriched by the cybersecurity provider and threat prevention information maintained by proprietary systems of the cybersecurity provider”; however, Jackson teaches the database offered by the cybersecurity provider maintains at least one of Common Vulnerabilities and Exposures (CVE) data enriched by the cybersecurity provider and threat prevention information maintained by proprietary systems of the cybersecurity provider (paragraph [0071], "In various embodiments, the datasets 316 may be from one or more data sources that deliver data to the system through data interfaces (e.g., secure socket layers (SSLs), via virtual private networks (VPNs), HTTPs, or through other connections). Such data may include, for example, various forms of customer/client interaction by an enterprise network, including event logs, system logs, application logs, database logs, threat software data, operational intelligence, machine data that includes records of the activity and behavior of network customers and users, as well as activity and behavior records of transactions, applications, servers, networks and mobile devices on the network.").

Tangari, Nguyen and Jackson are considered analogous because they are each concerned with language model prompt screening. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Tangari and Nguyen with the teachings of Jackson for the purpose of improving language model robustness.
Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
U.S. Patent 10,268,721 to Dutta et al.
U.S. Patent 11,868,344 to Song et al.
U.S. Patent 12,001,548 to Kimon et al.
U.S. Patent Application Publication 2019/0354831 to Romero.
U.S. Patent Application Publication 2020/0394190 to Chaudhuri et al.
U.S. Patent Application Publication 2021/0157812 to Rumiantsau et al.
U.S. Patent Application Publication 2021/0390101 to Shaik et al.
U.S. Patent Application Publication 2025/0077509 to Schindel et al.
U.S. Patent Application Publication 2025/0156535 to Keller et al.
International Publication WO 2022/224246 to Ohayon et al.
China Invention Application CN 111563141 to Sheinin et al.
China Invention Application CN 114897163 to Hui et al.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN T SMITH whose telephone number is (571) 272-6643. The examiner can normally be reached Monday - Friday 8:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PIERRE-LOUIS DESIR, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEAN THOMAS SMITH/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

Dec 11, 2023
Application Filed
Sep 05, 2025
Non-Final Rejection — §103
Nov 20, 2025
Interview Requested
Dec 02, 2025
Applicant Interview (Telephonic)
Dec 02, 2025
Examiner Interview Summary
Dec 05, 2025
Response Filed
Feb 03, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602540
LEVERAGING A LARGE LANGUAGE MODEL ENCODER TO EVALUATE PREDICTIVE MODELS
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12530534
SYSTEM AND METHOD FOR GENERATING STRUCTURED SEMANTIC ANNOTATIONS FROM UNSTRUCTURED DOCUMENT
Granted Jan 20, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+33.3%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
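The report does not state how the 99% with-interview figure is computed from the 83% base probability and the +33.3% lift. One plausible reading, sketched below with a hypothetical helper, is that the lift is added in percentage points and the result is capped (the 0.99 cap is an assumption, since probabilities estimated from six cases should not reach 100%):

```python
def with_interview_probability(base: float, lift_points: float,
                               cap: float = 0.99) -> float:
    """Add the interview lift (in percentage points) to the base grant
    probability and cap the result. The cap value is an assumption,
    not stated by the report."""
    return min(base + lift_points / 100, cap)

print(with_interview_probability(0.83, 33.3))  # 0.99
```

Under this reading, 0.83 + 0.333 exceeds 1, so the displayed 99% is the cap rather than a measured rate, which is worth keeping in mind when weighing the projection.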
