Prosecution Insights
Last updated: April 19, 2026
Application No. 18/649,259

DEEP LEARNING IN A DATA PLANE

Final Rejection (§103)

Filed: Apr 29, 2024
Examiner: ABDULLAH, SAAD AHMAD
Art Unit: 2431
Tech Center: 2400 — Computer Networks
Assignee: Palo Alto Networks Inc.
OA Round: 2 (Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77%, above average (54 granted / 70 resolved; +19.1% vs TC avg)
Interview Lift: +35.1% (strong) across resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 42 applications currently pending
Career History: 112 total applications across all art units

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 61.6% (+21.6% vs TC avg)
§102: 19.6% (-20.4% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Tech Center averages are estimates; based on career data from 70 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The instant application, having Application No. 18/649,259, is presented for examination by the examiner. Claims 1, 13, and 20 are amended. Claim 8 has been cancelled. Claims 1-7 and 9-20 have been examined.

Response to Arguments

Applicant's arguments with respect to claims 1, 13, and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Examiner's Interpretations

Regarding claims 1, 13, and 20, the examiner interprets the claim limitation "bypass the prefiltering operation" as meaning the processor ends or skips the remaining pre-filtering function upon determining the cached verdict exists, consistent with the disclosure at paragraph 124 of the specification.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Estep (US 11,843,624 B1) in view of Mutolo (US 2023/0083949 A1).
Regarding Claim 1

Estep discloses: A system, comprising: a processor configured to:

monitor a session at a security platform, wherein the session includes network traffic (Estep Column 10, Lines 19-28 and Column 16, Lines 16-29: Monitoring a communication session at a network security system (NSS 110) by intercepting client-to-cloud network traffic, capturing the communication session, and analyzing it for policy enforcement/C2 detection.);

execute a local deep learning model on the network traffic, wherein the local deep learning model is executed on the security platform (Estep Column 22, Lines 49-55 and Column 23, Lines 9-50: Teaches executing a deep learning classifier (e.g., MLP, CNN, RNN, Transformer, GAN, etc.) to process network traffic (captured communication sessions and extracted features of requests). Estep further specifies the classifier is trained on benign/malicious traffic datasets and is part of the Cloud C2 Traffic Analyzer 112, which resides in the NSS 110 (security platform).);

and perform an action in response to determining that the monitored session is associated with malware based at least in part on a verdict from the deep learning model (Estep Column 15, Lines 53-64: Teaches performing concrete actions (blocking, quarantining, blacklisting) in response to a malicious verdict generated by the network security system's classifier.);

and a memory coupled to the processor and configured to provide the processor with instructions (Estep Column 22, Lines 37-48: Teaches NSS 110 monitoring network traffic, executing a local deep learning model (classifier 816), performing actions like blocking or quarantining on malicious verdicts, and using Storage 114 as memory coupled to the processor.).
Estep teaches monitoring client-to-cloud communication sessions at a network security system, executing a local deep learning classifier on the captured network traffic and extracted features, and performing actions such as blocking based on the model's malicious verdict. Estep, however, does not teach the use of a set flag/status and stored result lookup/bypass arrangement as claimed.

Mutolo, on the other hand, teaches evaluating a potential malicious domain using a fitness function that generates a value indicative of a risk (¶173) and initiating and maintaining a DNS flag for the candidate (¶174). Mutolo further teaches combining the candidate domain, fitness value, and DNS status into a candidate record and storing such candidate information for further processing (¶175-176). Mutolo also teaches retrieving DNS information and checking DNS databases instead of re-performing more expensive processing, and updating the DNS status based on lookup results; if the DNS information has been previously cached, the system can bypass the intervening computational steps rather than repeating the process (¶149, 178-179, 183-184).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Estep's security platform to incorporate Mutolo's flag/status-based lookup and cached result bypass technique, such that when risk indicates further analysis is implicated, the system checks whether a prior stored verdict/result already exists and, if it does, bypasses the repeated downstream analysis. One would have been motivated to do so because Mutolo expressly teaches that such use of status indicators, lookup tables, and cache reuse improves computational efficiency, reduces latency, and avoids repeating expensive analysis, which is a well-known design goal in inline network security systems.
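The flag/status lookup with cached-result bypass described in this rejection can be sketched in a few lines. Every name below (`verdict_cache`, `expensive_prefilter`, `classify_session`) is hypothetical and merely illustrates the claimed arrangement; nothing here quotes either reference:

```python
# Illustrative sketch of a cached-verdict bypass in a prefiltering pipeline.
# All identifiers are hypothetical stand-ins, not drawn from Estep or Mutolo.

verdict_cache: dict[str, str] = {}  # session key -> previously stored verdict


def expensive_prefilter(traffic: bytes) -> str:
    # Stand-in for the costly downstream analysis (e.g., a deep learning model).
    return "malicious" if b"c2-beacon" in traffic else "benign"


def classify_session(session_key: str, traffic: bytes) -> str:
    # If a prior verdict exists, return it and bypass the prefiltering operation.
    if session_key in verdict_cache:
        return verdict_cache[session_key]
    verdict = expensive_prefilter(traffic)
    verdict_cache[session_key] = verdict  # store the result for future lookups
    return verdict
```

On a repeat lookup the stored verdict is returned directly, which is the "bypass the prefiltering operation" behavior the examiner's interpretation attributes to the claims.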
Regarding Claim 2

Estep discloses: The system of claim 1, wherein the local deep learning model is a machine learning model for automatically detecting malware related network traffic (Estep Column 22, Lines 53-55: Teaches that the local deep learning model is a machine learning model for automatically detecting malware-related network traffic.).

Regarding Claim 3

Estep discloses: The system of claim 1, wherein the local deep learning model is a machine learning model for automatically detecting command and control (C2) traffic (Estep Column 22, Lines 53-55: Teaches that the local deep learning model is a machine learning model specifically for automatically detecting command and control (C2) traffic.).

Regarding Claim 4

Estep discloses: The system of claim 1, wherein the local deep learning model is a machine learning model for automatically detecting malware related DNS network traffic (Estep Column 21, Lines 8-11: The system uses machine learning classifiers to analyze DNS query patterns to determine if the traffic is malware-related C2 traffic.).

Regarding Claim 5

Estep discloses: The system of claim 1, wherein the local deep learning model is a machine learning model for advanced URL filtering (Estep Column 22, Lines 33-55: Teaches applying a machine learning classifier (e.g., MLP, CNN, RNN) to extracted URL features in DNS and HTTP requests, comparing against known malicious endpoints and sequences to detect malicious C2 traffic, thereby performing advanced URL filtering beyond static blacklists.).
Regarding Claim 6

Estep discloses: The system of claim 1, wherein the local deep learning model is a machine learning model for automatically detecting malware related streaming traffic (Estep Column 22, Line 53 - Column 23, Line 55: Teaches applying a machine learning classifier (cloud classifier 816) to features extracted from DNS/HTTP requests, including payload data such as video, audio, and multimedia streams, to classify cloud traffic as malicious C2 or benign, thereby detecting malware-related streaming traffic.).

Regarding Claim 7

Estep discloses: The system of claim 1, wherein the action includes dropping the network traffic, blocking the network traffic, generating an alert, logging the network traffic, quarantining an endpoint associated with the network traffic, and/or sending the network traffic to a security cloud entity for further analysis (Estep Column 15, Lines 55-64: Teaches blocking incoming requests, quarantining malicious resources, and blacklisting endpoints when traffic is determined malicious, thereby performing actions such as dropping, blocking, and quarantining network traffic.).

Regarding Claims 13-16

Claims 13-16 are directed to methods corresponding to the processor-implemented systems in claims 1-4, respectively. Each is similar in scope to its counterpart system claim and is therefore rejected under similar rationale.
Regarding Claims 17-19

Claims 17-19 are directed to methods corresponding to the processor-implemented systems in claims 5-7, respectively. Each is similar in scope to its counterpart system claim and is therefore rejected under similar rationale.

Claims 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Estep (US 11,843,624 B1), in view of Mutolo (US 2023/0083949 A1), and in further view of Shubham (US 2025/0181832 A1).

Regarding Claim 9

Estep and Mutolo combined teach monitoring client-to-cloud communication sessions at a network security system, executing a local deep learning classifier on the captured network traffic and extracted features, and performing actions such as blocking based on the model's malicious verdict. However, they do not disclose the following limitation: "wherein the processor is further configured to: input a byte stream associated with the network traffic into the local deep learning model."

However, in an analogous art, Shubham discloses a deep learning model system/method that includes: The system of claim 1, wherein the processor is further configured to: input a byte stream associated with the network traffic into the local deep learning model (Shubham ¶16, 29, 40: Teaches inputting raw executable/instruction data (equivalent to a byte stream) from unknown samples (e.g., API call sequences, executables) into a machine learning classification model for analysis.).
Given the teachings of Shubham, a person having ordinary skill in the art before the effective filing date would have recognized the desirability of modifying the teachings of Estep and Mutolo by inputting a raw byte stream associated with network traffic into a local deep learning model. Shubham shows that executables and API call sequences, equivalent to byte streams, can be directly fed into a machine learning model for malware classification, allowing the model to detect malicious patterns at the byte level and support mitigation actions (Shubham ¶16, 29, 40).

Regarding Claim 10

Estep and Mutolo combined teach monitoring client-to-cloud communication sessions at a network security system, executing a local deep learning classifier on the captured network traffic and extracted features, and performing actions such as blocking based on the model's malicious verdict. However, they do not disclose the following limitation: "wherein the processor is further configured to: input a byte stream associated with the network traffic into the local deep learning model; and perform tokenization processing of the byte stream provided as input into the local deep learning model."

However, in an analogous art, Shubham discloses a deep learning model system/method that includes: The system of claim 1, wherein the processor is further configured to: input a byte stream associated with the network traffic into the local deep learning model (Shubham ¶16, 29, 40: Teaches inputting raw executable/instruction data (equivalent to a byte stream) from unknown samples (e.g., API call sequences, executables) into a machine learning classification model for analysis.); and perform tokenization processing of the byte stream provided as input into the local deep learning model (Shubham ¶46, 74, 84: Teaches tokenization processing of byte-level/API call sequence data, where the ML model parses the sequence into sections and converts them into tokens (e.g., using word embedding techniques like word2vec) for further neural network processing.).

Given the teachings of Shubham, a person having ordinary skill in the art before the effective filing date would have recognized the desirability of modifying the teachings of Estep and Mutolo by inputting a byte stream associated with network traffic into a local deep learning model and performing tokenization processing of that stream. Shubham demonstrates that raw executable or instruction data, equivalent to a byte stream, can be provided as input to a classification model, and further teaches parsing such sequences into sections and converting them into tokens (e.g., via word embedding techniques such as word2vec) for neural network analysis. It would have been obvious to apply these combined teachings so that the processor both ingests byte-level network data and performs tokenization, thereby enabling the deep learning model to capture contextual relationships within the traffic for more accurate malware detection (Shubham ¶46, 74, 84).

Regarding Claim 11

Estep and Mutolo combined teach monitoring client-to-cloud communication sessions at a network security system, executing a local deep learning classifier on the captured network traffic and extracted features, and performing actions such as blocking based on the model's malicious verdict.
However, they do not disclose the following limitation: "input a byte stream associated with the network traffic into the local deep learning model; and perform tokenization processing of the byte stream provided as input into the local deep learning model, wherein one or more bytes are extracted from the byte stream and translated into one or more tokens."

However, in an analogous art, Shubham discloses a deep learning model system/method that includes: The system of claim 1, wherein the processor is further configured to: input a byte stream associated with the network traffic into the local deep learning model (Shubham ¶16, 29, 40: Teaches inputting raw executable/instruction data (equivalent to a byte stream) from unknown samples (e.g., API call sequences, executables) into a machine learning classification model for analysis.); and perform tokenization processing of the byte stream provided as input into the local deep learning model, wherein one or more bytes are extracted from the byte stream and translated into one or more tokens (Shubham ¶46, 74, 84: Teaches performing tokenization of byte-level/API call sequence data by parsing the stream into sections, extracting bytes, and converting them into tokens (e.g., via word embeddings like word2vec) for input into the ML model.).

Given the teachings of Shubham, a person having ordinary skill in the art before the effective filing date would have recognized the desirability of modifying the teachings of Estep and Mutolo by inputting a byte stream associated with network traffic into a local deep learning model and performing tokenization of that stream. Shubham shows that raw executable or instruction data, equivalent to a byte stream, can be directly ingested by a classification model, and further teaches parsing such sequences by extracting bytes and translating them into tokens (e.g., via word embedding techniques like word2vec) for subsequent neural network processing.
It would have been obvious to implement this combined approach so that the processor converts byte-level traffic into tokens, enabling the deep learning model to capture both low-level and semantic features of the network data for improved malware detection (Shubham ¶46, 74, 84).

Regarding Claim 12

Estep and Mutolo combined teach monitoring client-to-cloud communication sessions at a network security system, executing a local deep learning classifier on the captured network traffic and extracted features, and performing actions such as blocking based on the model's malicious verdict. However, they do not disclose the following limitation: "input a byte stream associated with the network traffic into the local deep learning model; perform tokenization processing of the byte stream provided as input into the local deep learning model, wherein one or more bytes are extracted from the byte stream and translated into one or more tokens; and generate a score using the local deep learning model that processes the one or more tokens."

However, in an analogous art, Shubham discloses a deep learning model system/method that includes: The system of claim 1, wherein the processor is further configured to: input a byte stream associated with the network traffic into the local deep learning model (Shubham ¶16, 29, 40: Teaches inputting raw executable/instruction data (equivalent to a byte stream) from unknown samples (e.g., API call sequences, executables) into a machine learning classification model for analysis.); perform tokenization processing of the byte stream provided as input into the local deep learning model, wherein one or more bytes are extracted from the byte stream and translated into one or more tokens (Shubham ¶46, 74, 84: Teaches performing tokenization of byte-level/API call sequence data by parsing the stream into sections, extracting bytes, and converting them into tokens (e.g., via word embeddings like word2vec).); and generate a score using the local deep learning model that processes the one or more tokens (Shubham ¶49 and 86: Teaches generating a probability score using the ML model that processes the extracted tokens to classify whether the input is benign or malicious.).

Given the teachings of Shubham, a person having ordinary skill in the art before the effective filing date would have recognized the desirability of modifying the teachings of Estep and Mutolo by inputting a byte stream associated with network traffic into a local deep learning model, performing tokenization by extracting bytes and translating them into tokens, and generating a score based on the processed tokens. Shubham teaches providing raw executable/instruction data, equivalent to a byte stream, as input into a classification model, and further describes parsing such sequences into tokens using techniques like word2vec for neural network analysis. Shubham also discloses producing a probability score from the tokenized input to classify whether the traffic is benign or malicious. It would have been obvious to implement these combined teachings so that the processor ingests byte-level traffic, tokenizes it into learned features, and generates a score, thereby enabling the local deep learning model to more effectively detect malicious network activity (Shubham ¶16, 29, 40, 46, 74, 84).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAAD A ABDULLAH, whose telephone number is (571) 272-1531. The examiner can normally be reached Monday - Friday, 8:30am - 5:00pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lynn Feild, can be reached at (571) 272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAAD AHMAD ABDULLAH/
Examiner, Art Unit 2431

/SHIN-HON (ERIC) CHEN/
Primary Examiner, Art Unit 2431
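Outside the legal analysis itself, the byte-stream pipeline recited in claims 9-12 (bytes extracted from the stream, translated into tokens, then scored by a model) can be illustrated with a toy sketch. The fixed-width n-gram tokenizer and lookup-based score below are hypothetical stand-ins; a real system would use learned embeddings (word2vec-style, per Shubham) and a trained deep learning model:

```python
# Toy illustration of the bytes -> tokens -> score pipeline from claims 9-12.
# Both functions are hypothetical stand-ins for a trained model, not an
# implementation from any cited reference.


def tokenize(byte_stream: bytes, ngram: int = 2) -> list[int]:
    """Extract fixed-width byte groups from the stream and translate each
    group into an integer token."""
    return [
        int.from_bytes(byte_stream[i:i + ngram], "big")
        for i in range(0, len(byte_stream) - ngram + 1, ngram)
    ]


def score(tokens: list[int], suspicious: set[int]) -> float:
    """Stand-in for a deep learning model that processes the tokens and
    emits a maliciousness score (fraction of suspicious tokens)."""
    if not tokens:
        return 0.0
    return sum(t in suspicious for t in tokens) / len(tokens)
```

For example, `tokenize(b"abcd")` yields the two big-endian bigram tokens `[24930, 25444]`, and `score` over those tokens returns a value in [0, 1] that a real classifier would instead derive from learned token features.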

Prosecution Timeline

Apr 29, 2024
Application Filed
Aug 23, 2025
Non-Final Rejection — §103
Dec 18, 2025
Response Filed
Mar 10, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603895: PACKET METADATA CAPTURE IN A SOFTWARE-DEFINED NETWORK
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12592961: QUANTUM-BASED ADAPTIVE DEEP LEARNING FRAMEWORK FOR SECURING NETWORK FILES
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12580886: Network security gateway onboard an aircraft to connect low and high trust domains of an avionics computing infrastructure
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12554871: SYSTEMS, METHODS, AND COMPUTER-READABLE MEDIA FOR SECURE AND PRIVATE DATA VALUATION AND TRANSFER
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12554832: AUTOMATED LEAST PRIVILEGE ASSIGNMENT
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 99% (+35.1%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 70 resolved cases by this examiner. Grant probability derived from career allow rate.
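The headline grant probability follows directly from the examiner's career counts shown above (54 granted of 70 resolved); a quick sanity check:

```python
# Derive the dashboard's 77% grant probability from the raw career counts.
granted, resolved = 54, 70
allow_rate = granted / resolved  # 54 / 70 = 0.7714...
print(f"Career allow rate: {allow_rate:.0%}")  # prints "Career allow rate: 77%"
```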
