Prosecution Insights
Last updated: April 19, 2026
Application No. 18/924,682

Vulnerabilities and Protections in Large Language Models

Status: Non-Final OA — §102
Filed: Oct 23, 2024
Examiner: SARKER, SANCHIT K
Art Unit: 2495
Tech Center: 2400 — Computer Networks
Assignee: Zscaler Inc.
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (305 granted / 391 resolved) — +20.0% vs TC average, above average
Interview Lift: +49.5% on resolved cases with interview
Typical Timeline: 2y 8m average prosecution; 19 applications currently pending
Career History: 410 total applications across all art units

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 6.1% (-33.9% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 391 resolved cases
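As a sanity check on the figures above, the four per-statute deltas are mutually consistent: reading each delta as a plain difference (rate minus Tech Center average) back-solves to the same 40.0% baseline for every statute. This is an inference from the displayed numbers, not a formula documented by the tool:

```python
# Per-statute rates and their displayed deltas vs the Tech Center average
# (values copied from the panel above).
rates = {"101": 10.9, "103": 56.5, "102": 6.1, "112": 17.9}
deltas = {"101": -29.1, "103": 16.5, "102": -33.9, "112": -22.1}

# If delta = rate - TC average, each statute implies the same baseline,
# consistent with the single "Tech Center average estimate" in the chart note.
baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baselines)  # every value is 40.0
```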

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is in response to application 18/924,682, filed on 10/23/2024. Claims 1-20 have been examined and are pending in this application.

Information Disclosure Statement

The information disclosure statement (IDS), submitted on 10/23/2024, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mommileti (US 2025/0365302).

Regarding claim 1, Mommileti discloses a method for large language model security comprising steps of: inline monitoring a Large Language Model (LLM) (Mommileti par. 0117; the security platform performs LLM tracking and determines interactions of any malware. See also par. 0110 and 0119); detecting an attack on the LLM and defining an attack type of a plurality of attack types based on the monitoring (Mommileti par. 0119 and 0122; the security platform detects and prevents LLMs, models, prompts and pipelines. In another part of the invention, the security platform tracks lineage, interactions, access and activities. If and when the malware is contained, the security platform attempts to identify where the malware came from; wherein one form of finding an origin of the malware is engaging in incident correlation, wherein an incident is compared to past incidents to see if there is a similarity; wherein another form of finding an origin of the malware is analyzing lineage of the malware; wherein the detector detects different types of anomalous events, including: anomalous log messages; data pipeline lineage; AI resource tracking; artifacts change; AI risk forecasting; copyright & legal exposure; sensitive information disclosure; data privacy violation; social engineering attack; tagging attack; and labelling attack; wherein the artificial intelligence system accepts prompts from a user; wherein the security platform performs prompt analytics on the prompts entered by the user; wherein the prompt analytics include prompt interaction analysis, prompt risk analysis, prompt injections detector and a prompt web application filter. See also par. 0109); providing a notification of the attack (Mommileti par. 0118; the security platform then runs an AI/ML application security filter with configuration and tags. The security platform then runs a web application filter for AI. Step 6 involves SIEM for AI. The security platform performs a correlation of anomalous logs, metrics, events to AI models, pipelines, alert enrichment and threat prioritization and incident correlation); and causing a defense to the attack based on the attack type (Mommileti par. 0119 and 0121; the security platform detects and prevents LLMs, models, prompts and pipelines. The present invention includes adversarial machine learning, attack surface management, including discovery, tracking and lineage. Another aspect of the invention is security posture management, including experiments, models, jobs, runs, artifacts and prompts. Another aspect of the invention is risk analysis, including adversarial attacks, spills, leaks, contaminations, exfiltration and infiltration. These actions can be taken in any order. See also par. 0019).

Regarding claim 2, Mommileti discloses the method of claim 1, and further discloses wherein the monitoring includes monitoring a user input to the LLM (Mommileti par. 0122; wherein the artificial intelligence system accepts prompts from a user; wherein the security platform performs prompt analytics on the prompts entered by the user; wherein the prompt analytics include prompt interaction analysis, prompt risk analysis, prompt injections detector and a prompt web application filter).

Regarding claim 3, Mommileti discloses the method of claim 1, and further discloses wherein the defense includes blocking the user input to the LLM (Mommileti par. 0114; the detector connects and reads all threat intelligence and configuration from a database. The Connector connects and reads topic data and meta data from detections. The Connector creates a session table, and then breaks into sessions and aggregates Minutes, Hourly and Daily Observations. The Connector correlates detections to discovery, tracking via tracking services, lineage, risks and rejected inputs).

Regarding claim 4, Mommileti discloses the method of claim 1, and further discloses wherein the plurality of attack types includes prompt hacking and adversarial attack (Mommileti par. 0122; the security platform analyzes the malware by creating a risk analysis based on each detected malware; wherein based on the risk analysis, the security platform engages in adversarial threat mapping; wherein adversarial threat mapping includes input filtering, output filtering and masking; wherein the security platform also tracks the malware utilizing a variety of tracking services; wherein, if malware is detected, then the security platform informs a user of the system that the user has been hacked. See also par. 0018).

Regarding claim 5, Mommileti discloses the method of claim 4, and further discloses wherein prompt hacking is one of a prompt injection and a jailbreaking attack (Mommileti par. 0122; wherein the prompt analytics include prompt interaction analysis, prompt risk analysis, prompt injections detector and a prompt web application filter).

Regarding claim 6, Mommileti discloses the method of claim 4, and further discloses wherein the adversarial attack is one of a backdoor attack and a data poisoning attack (Mommileti par. 0016; there are numerous different types of attacks, including: adversarial attacks, poison and evasion attacks, training time attacks, inference time attacks, distortion attacks, traversal attacks, polarization attacks, contamination attacks, polarized data pollution attacks, prompt slicing attacks, feature corruption attacks, external agency attacks and internal agency attacks. Transboundary pollution is the result of contaminated Features, Data, Prompts from one Environment spilling into the classify, training, inference pipelines of another).

Regarding claim 7, Mommileti discloses the method of claim 1, and further discloses wherein causing the defense includes causing any of a prevention-based defense and a detection-based defense (Mommileti par. 0119 and 0121; the security platform detects and prevents LLMs, models, prompts and pipelines. The present invention includes adversarial machine learning, attack surface management, including discovery, tracking and lineage. Another aspect of the invention is security posture management, including experiments, models, jobs, runs, artifacts and prompts. Another aspect of the invention is risk analysis, including adversarial attacks, spills, leaks, contaminations, exfiltration and infiltration. These actions can be taken in any order. See also par. 0019).

Regarding claim 8, Mommileti discloses the method of claim 1, and further discloses wherein any of the monitoring, detecting, providing a notification, and causing the defense is performed by an intermediate system before a query reaches the large language model (Mommileti par. 0118; the security platform then begins model training, inference and pipeline analytics. The security platform then detects spills, leaks and other contamination detection. The security platform then runs a prompt interaction analyzer. Step 5 involves prompt masking, redaction, prompt input filter and an AI web app filter. The security platform performs masking and redacting sensitive data from prompts. The security platform then runs an AI/ML application security filter with configuration and tags. The security platform then runs a web application filter for AI. Step 6 involves SIEM for AI. The security platform performs a correlation of anomalous logs, metrics, events to AI models, pipelines, alert enrichment and threat prioritization and incident correlation. Then the security platform creates events for security operations center ("SOC") analysts. Lastly, the security platform performs log forwarding and tags AI risk events to SIEM. Various actions can be done at different points, and so this is not the only series of events that the security platform can perform).

Regarding claim 9, Mommileti discloses the method of claim 1, and further discloses wherein the defense includes one of removing, altering, and redesigning an output (Mommileti par. 0119 and 0121; the security platform detects and prevents LLMs, models, prompts and pipelines. The present invention includes adversarial machine learning, attack surface management, including discovery, tracking and lineage. Another aspect of the invention is security posture management, including experiments, models, jobs, runs, artifacts and prompts. Another aspect of the invention is risk analysis, including adversarial attacks, spills, leaks, contaminations, exfiltration and infiltration. These actions can be taken in any order. See also par. 0019).

Regarding claim 10, Mommileti discloses the method of claim 1, and further discloses wherein the detection includes one of response-based detection and prompt-based detection (Mommileti par. 0119 and 0122; AI resource tracking; artifacts change; AI risk forecasting; copyright & legal exposure; sensitive information disclosure; data privacy violation; social engineering attack; tagging attack; and labelling attack; wherein the artificial intelligence system accepts prompts from a user; wherein the security platform performs prompt analytics on the prompts entered by the user; wherein the prompt analytics include prompt interaction analysis, prompt risk analysis, prompt injections detector and a prompt web application filter. See also par. 0109).

Regarding claim 11, Mommileti discloses the method of claim 1, and further discloses wherein the defense includes one of system-mode self-reminder prompts, smooth LLM, black-box defense, and pretrained language model defense (Mommileti par. 0023-0030; the different aspects of the invention include: Discovery: Discovery, Lineage & Analysis; AI visibility: Inventory, Monitoring & Tracking via tracking services Models; Detections: Anomalies, Threat Forecasting; Adversarial analytics: Adversarial Attacks; Large Language Model ("LLM"), a computational model notable for its ability to achieve general-purpose language generation and other natural language processing tasks such as classification; Prompt analytics: Prompt interaction analytics, Prompt risk analytics, Prompt Injections detector and Prompt web application Filter).

Regarding claims 12-19: claims 12-19 are directed to a non-transitory computer readable medium associated with the method claimed in claims 1, 3-4, 7, 9-11 respectively. Claims 12-19 are similar in scope to claims 1, 3-4, 7, 9-11 respectively, and are therefore rejected under similar rationale.

Regarding claim 20, Mommileti discloses the non-transitory computer-readable medium of claim 12, and further discloses wherein the defense includes a three-step clustering based defense comprising representation learning, clustering, and filtering (Mommileti par. 0117; after that, the security platform handles cluster resources and compute networks. Stage 2 involves the security platform performing tracking experiments, running jobs, performing runs, and analyzing datasets. Then the security platform creates tracking models, different versions and artifacts. After that, the security platform creates tracking parameters, creates and analyzes metrics, and makes predictions and artifacts. After that, the security platform performs LLM tracking and determines interactions of any malware. Stage 3 involves the security platform performing pipeline analysis, and analyzing data sources and data sinks. The security platform then creates a map based on topology of streams related to potential malware. Various actions can be done at different points, and so this is not the only series of events that the security platform can perform).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANCHIT K SARKER, whose telephone number is (571) 270-7907. The examiner can normally be reached M-F 8:30 AM-5:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, FARID HOMAYOUNMEHR, can be reached at 571-272-3739. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SANCHIT K SARKER/
Primary Examiner, Art Unit 2495

Prosecution Timeline

Oct 23, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579285 — CENTRAL DATA GOVERNANCE AND ACCESS CONTROL FOR ENTERPRISE DATA
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579291 — SYSTEMS AND METHODS FOR ADAPTIVE DIGITAL REINFORCEMENT LEARNING
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579305 — DATA SECURITY FOR MACHINE LEARNING SYSTEMS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12566870 — COMMUNICATION METHOD, DEVICE, AND SYSTEM FOR OBTAINING AUTHORIZATION INFORMATION OF USER-RELATED DATA
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561471 — METHOD AND SYSTEM FOR DATA COMMUNICATION WITH DIFFERENTIALLY PRIVATE SET INTERSECTION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 99% (+49.5%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 391 resolved cases by this examiner. Grant probability derived from career allow rate.
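The note above says grant probability is derived from the career allow rate, and the displayed figures check out as simple ratios. The second calculation is a hypothetical reading of the "+49.5%" interview lift as a relative increase over the without-interview rate; the underlying with/without split is not shown in the panel:

```python
granted, resolved = 305, 391

# Career allow rate = granted / resolved, matching the displayed 78%.
allow_rate = granted / resolved * 100
print(round(allow_rate, 1))  # 78.0

# Hypothetical back-solve: if with_interview = without * (1 + 0.495)
# and the with-interview figure is the displayed 99%, the implied
# without-interview rate would be about 66%.
without_interview = 99.0 / (1 + 0.495)
print(round(without_interview, 1))  # 66.2
```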
