Prosecution Insights
Last updated: April 19, 2026
Application No. 18/521,979

CONTEXT-BASED CYBERATTACK SIGNATURE GENERATION WITH LARGE LANGUAGE MODELS

Status: Final Rejection (§103)
Filed: Nov 28, 2023
Examiner: HERZOG, MADHURI R
Art Unit: 2438
Tech Center: 2400 — Computer Networks
Assignee: Palo Alto Networks Inc.
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 1m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 78% — above average (516 granted / 662 resolved; +19.9% vs TC avg)
Interview Lift: +11.9% — moderate (~+12%) lift on resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 35 applications currently pending
Career History: 697 total applications across all art units

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 662 resolved cases.
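The headline figures above follow directly from the raw counts. A minimal sketch of the arithmetic (the Tech Center baseline is not given directly, so it is recovered here from the displayed +19.9% lift and should be treated as an estimate):

```python
# Reproduce the headline examiner statistics from the raw counts.
granted, resolved = 516, 662

allow_rate = granted / resolved * 100           # career allow rate
print(f"Career allow rate: {allow_rate:.1f}%")  # ~78%

# The TC 2400 baseline implied by the "+19.9% vs TC avg" figure.
tc_avg = allow_rate - 19.9
print(f"Implied TC average: {tc_avg:.1f}%")
```

The same subtraction recovers the implied baselines for the statute-specific rates (e.g. §103: 45.7% - 5.7% = 40.0% TC average).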

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following is a Final Office action in response to communications received 10/31/2025.

Response to Amendment

Claims 2, 3, 10-12, and 17-19 have been cancelled. Claims 21-29 have been newly added. Claims 1, 5, 6, 9, 13, and 16 have been amended. Claims 1, 4-9, 13-16, and 20-29 have been examined.

Applicant's arguments with respect to claims 1, 9, and 15 regarding the new limitations, "wherein testing the first syntax description comprises, prompting the language model with one or more prompts to obtain one or more cyberattack signatures in response, wherein the one or more prompts comprise the first syntax description and data of respective ones of one or more cyberattack types; and determining whether a threshold percentage of the one or more cyberattack signatures satisfy minimum signature conditions for corresponding types of cyberattacks in the one or more cyberattack types; and based on determining that a threshold percentage of the one or more cyberattack signatures satisfy minimum signature conditions", have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1, 5-9, 13-16, 20, 21, 23, 24, 26, 27, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over "Time for aCTIon: Automated Analysis of Cyber Threat Intelligence in the Wild" by Siracusano et al. (hereinafter D1) and prior art of record US 20240403428 to Lal et al. (hereinafter Lal).

As per claim 1, D1 teaches:

A method comprising: generating a first syntax description, wherein the first syntax description describes syntax for cyberattack signatures to a language model (D1: page 8, left column, paragraphs 2-4: Therefore, for the preprocessing our goal is to generate multiple descriptions of the same attack pattern, to enhance our ability to discover similarities between such descriptions and the taxonomy's examples. The first strategy prompts the LLM to extract blocks of raw text or sentences that explicitly contain formal descriptions of attack patterns. The output of this strategy is generally a paragraph, or in some cases a single sentence. The second strategy leverages the LLM's reasoning abilities and prompts it to describe step-by-step the attack's events, seeking to identify implicit descriptions);

testing the first syntax description, wherein testing the first syntax description comprises (D1: page 8, right column, last paragraph: We trained all the selected models on the same dataset, using the same train/test/validation set split, to ensure a fair comparison. Page 9, left column, paragraph 1: We then test the trained methods and tools performance using our dataset, since it focuses on CTI-metrics),

prompting the language model with one or more prompts to obtain one or more cyberattack signatures in response, wherein the one or more prompts comprise the first syntax description and data of respective ones of one or more cyberattack types (D1: page 11, left column, last 3 paragraphs: aCTIon uses two different strategies for selecting the text block. The first strategy prompts the LLM to retrieve portions of the text that contains the attack pattern description (i.e., more than one sentence). However, the LLM may not recognize the specific attack pattern, indeed there is no guarantee that the LLM knows this specific attack pattern. To avoid such cases, a second strategy is used together with the previous one. The LLM is also prompted to reason about the key steps performed in the attack. The sentence below is the output of the second strategy. This sentence not only clearly expresses the attack pattern (cyberattack signature) but it easier to process in the classification step); and

determining whether a threshold percentage of the one or more cyberattack signatures satisfy minimum signature conditions for corresponding types of cyberattacks in the one or more cyberattack types (D1: page 9, left column, B. Performance metrics: We compared each method against the Ground Truth (GT) from our dataset using the following metrics as defined in [19]: Recall: fraction of unique entities in the GT that have been correctly extracted; Precision: fraction of unique extracted entities that are correct (i.e. part of the GT); F1-score: harmonic mean of Precision and Recall. Page 10, right column, D. Attack Pattern Extraction: The first plot of Figure 8a reports the number of attack pattern extracted from each report by different methods. aCTIon outperforms all the baselines in terms of overall performance (F1-score) by about 10% point. More importantly, the recall is higher than any other solution, and the average precision is about 50%. These results make a manual verification by CTI analysts manageable: the average number of attack patterns extracted per-report is 25 (cf. Figure 8a). Page 13, left column, paragraph 3, Other issues: The current precision and recall of aCTIon is within the 60%-90% range for most entities. In our experience, this is already in line with the performance of a CTI analyst, and unlike human analysts, aCTIon keeps consistent performance over time not being affected by tiredness); and

based on determining that a threshold percentage of the one or more cyberattack signatures satisfy minimum signature conditions (D1: page 13, left column, paragraph 3, Other issues: The current precision and recall of aCTIon is within the 60%-90% range for most entities. In our experience, this is already in line with the performance of a CTI analyst, and unlike human analysts, aCTIon keeps consistent performance over time not being affected by tiredness. Right column, VIII. Conclusion: Our evaluation on the proposed benchmark dataset shows that aCTIon largely outperforms previous solutions. Currently, aCTIon is in testing within our organization for daily production deployment).

D1 teaches "for daily production deployment" but does not teach actual deployment.
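The "threshold percentage" limitation argued above is, at bottom, an acceptance test over a batch of generated signatures. A minimal sketch of that logic, with hypothetical names (`satisfies_min_conditions`, the dictionary keys) standing in for whatever the application actually defines; this is not the applicant's or D1's implementation:

```python
def passes_testing(signatures, satisfies_min_conditions, threshold=0.8):
    """True when at least `threshold` (a fraction) of the generated
    signatures satisfy the minimum signature conditions for their
    corresponding cyberattack type."""
    if not signatures:
        return False
    ok = sum(1 for sig in signatures if satisfies_min_conditions(sig))
    return ok / len(signatures) >= threshold

# Trivial stand-in condition: a signature must name both a context
# and a pattern (hypothetical schema, for illustration only).
sigs = [
    {"context": "http.uri", "pattern": "/wp-admin"},
    {"context": "dns.query", "pattern": "dga-.*"},
    {"context": None, "pattern": "x"},  # fails the condition
]
cond = lambda s: bool(s["context"]) and bool(s["pattern"])
print(passes_testing(sigs, cond, threshold=0.6))  # 2/3 >= 0.6 -> True
```

Raising the threshold to 0.8 on the same batch would fail the test, which is exactly the branch point the amended claim hangs deployment on.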
Specifically, D1 does not explicitly teach: generating a prompt to the language model with the first syntax description, first data for a type of cyberattack, and second data describing context of the first data for the type of cyberattack; and prompting the language model with the prompt to obtain a cyberattack signature in response.

However, Lal teaches: generating a prompt to the language model with the first syntax description, first data for a type of cyberattack, and second data describing context of the first data for the type of cyberattack; and prompting the language model with the prompt to obtain a cyberattack signature in response (Lal: [0030]: the one or more LLMs may be deployed as components within the cyber threat detection engine, the cyber threat autonomous response engine, the cyberattack simulation engine, and/or the cyberattack restoration engine. [0056]: For instance, the LLM can generate filters (cyberattack signatures) based on specific criteria, such as specific times or periods of time, a specific IP address or IP address range, specific attack signatures (e.g., specific patterns of network traffic, email content or code segments, specific packet header content, specific commands or keywords, specific points of access attempts, etc.). [0084]: the LLM(s) 114 can generate JSON elements performing as filters that operate in accordance with specific criteria, such as timeframes, source IP addresses, or specific attack signatures as described above. [0085]: Pre-prompted with a defined specification, example parsing templates, and example pattern and log pairs, the LLM(s) 114 can transform the unstructured data set 111 into a format recognized and utilized by logic within the cyber threat detection engine 130).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Lal in the invention of D1 to include the above limitations. The motivation to do so would be to provide the AI-based cybersecurity system 100 with operational advantages by (i) simplifying the data processing pipeline through faster integration (AI detection model training) with third-party source data and (ii) enhancing the efficiency of cybersecurity operations through intelligent and targeted gathering of salient data to improve and/or expand cyber threat detection functionality supported by the cyber threat detection engine 130 (Lal: [0084]).

As per claim 9, D1 teaches:

A non-transitory machine-readable medium having program code stored thereon, the program code comprising instructions to: generate a syntax description, wherein the syntax description describes syntax for cyberattack signatures to a language model (D1: page 8, left column, paragraphs 2-4: Therefore, for the preprocessing our goal is to generate multiple descriptions of the same attack pattern, to enhance our ability to discover similarities between such descriptions and the taxonomy's examples. The first strategy prompts the LLM to extract blocks of raw text or sentences that explicitly contain formal descriptions of attack patterns. The output of this strategy is generally a paragraph, or in some cases a single sentence. The second strategy leverages the LLM's reasoning abilities and prompts it to describe step-by-step the attack's events, seeking to identify implicit descriptions);

test the syntax description, wherein the instructions to test the syntax description comprise instructions (D1: page 8, right column, last paragraph: We trained all the selected models on the same dataset, using the same train/test/validation set split, to ensure a fair comparison. Page 9, left column, paragraph 1: We then test the trained methods and tools performance using our dataset, since it focuses on CTI-metrics) to,

generate a cyberattack signature based, at least in part, on the syntax description and data for a corresponding type of cyberattack, wherein the instructions to generate the cyberattack signature comprise instructions to prompt the language model with a prompt to obtain the cyberattack signature in response, wherein the prompt comprises the syntax description and data of the corresponding type of cyberattack (D1: page 11, left column, last 3 paragraphs: aCTIon uses two different strategies for selecting the text block. The first strategy prompts the LLM to retrieve portions of the text that contains the attack pattern description (i.e., more than one sentence). However, the LLM may not recognize the specific attack pattern, indeed there is no guarantee that the LLM knows this specific attack pattern. To avoid such cases, a second strategy is used together with the previous one. The LLM is also prompted to reason about the key steps performed in the attack. The sentence below is the output of the second strategy. This sentence not only clearly expresses the attack pattern (cyberattack signature) but it easier to process in the classification step); and

determine whether the cyberattack signature satisfies minimum signature conditions for the corresponding type of cyberattack (D1: page 9, left column, B. Performance metrics: We compared each method against the Ground Truth (GT) from our dataset using the following metrics as defined in [19]: Recall: fraction of unique entities in the GT that have been correctly extracted; Precision: fraction of unique extracted entities that are correct (i.e. part of the GT); F1-score: harmonic mean of Precision and Recall. Page 10, right column, D. Attack Pattern Extraction: The first plot of Figure 8a reports the number of attack pattern extracted from each report by different methods. aCTIon outperforms all the baselines in terms of overall performance (F1-score) by about 10% point. More importantly, the recall is higher than any other solution, and the average precision is about 50%. These results make a manual verification by CTI analysts manageable: the average number of attack patterns extracted per-report is 25 (cf. Figure 8a). Page 13, left column, paragraph 3, Other issues: The current precision and recall of aCTIon is within the 60%-90% range for most entities. In our experience, this is already in line with the performance of a CTI analyst, and unlike human analysts, aCTIon keeps consistent performance over time not being affected by tiredness); and

based on determining that the cyberattack signature satisfies the minimum signature conditions passed the testing, deploy the syntax description in combination with the language model to generate cyberattack signatures of additional types of cyberattacks (D1: page 13, left column, paragraph 3, Other issues: The current precision and recall of aCTIon is within the 60%-90% range for most entities. In our experience, this is already in line with the performance of a CTI analyst, and unlike human analysts, aCTIon keeps consistent performance over time not being affected by tiredness. Right column, VIII. Conclusion: Our evaluation on the proposed benchmark dataset shows that aCTIon largely outperforms previous solutions. Currently, aCTIon is in testing within our organization for daily production deployment).

D1 teaches "for daily production deployment" but does not teach deploy the syntax description in combination with the language model to generate cyberattack signatures of additional types of cyberattacks.

However, Lal teaches: deploy the syntax description in combination with the language model to generate cyberattack signatures of additional types of cyberattacks (Lal: [0030]: the one or more LLMs may be deployed as components within the cyber threat detection engine, the cyber threat autonomous response engine, the cyberattack simulation engine, and/or the cyberattack restoration engine. [0056]: For instance, the LLM can generate filters (cyberattack signatures) based on specific criteria, such as specific times or periods of time, a specific IP address or IP address range, specific attack signatures (e.g., specific patterns of network traffic, email content or code segments, specific packet header content, specific commands or keywords, specific points of access attempts, etc.). [0084]: the LLM(s) 114 can generate JSON elements performing as filters that operate in accordance with specific criteria, such as timeframes, source IP addresses, or specific attack signatures as described above. [0085]: Pre-prompted with a defined specification, example parsing templates, and example pattern and log pairs, the LLM(s) 114 can transform the unstructured data set 111 into a format recognized and utilized by logic within the cyber threat detection engine 130).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Lal in the invention of D1 to include the above limitations. The motivation to do so would be to provide the AI-based cybersecurity system 100 with operational advantages by (i) simplifying the data processing pipeline through faster integration (AI detection model training) with third-party source data and (ii) enhancing the efficiency of cybersecurity operations through intelligent and targeted gathering of salient data to improve and/or expand cyber threat detection functionality supported by the cyber threat detection engine 130 (Lal: [0084]).
As per claim 16, D1 teaches:

An apparatus comprising: a processor; and a machine-readable medium having instructions stored thereon that are executable by the processor to cause the apparatus to: generate a syntax description, wherein the syntax description describes syntax for cyberattack signatures to a language model (D1: page 8, left column, paragraphs 2-4: Therefore, for the preprocessing our goal is to generate multiple descriptions of the same attack pattern, to enhance our ability to discover similarities between such descriptions and the taxonomy's examples. The first strategy prompts the LLM to extract blocks of raw text or sentences that explicitly contain formal descriptions of attack patterns. The output of this strategy is generally a paragraph, or in some cases a single sentence. The second strategy leverages the LLM's reasoning abilities and prompts it to describe step-by-step the attack's events, seeking to identify implicit descriptions);

test the syntax description, wherein the instructions to test the syntax description (D1: page 8, right column, last paragraph: We trained all the selected models on the same dataset, using the same train/test/validation set split, to ensure a fair comparison. Page 9, left column, paragraph 1: We then test the trained methods and tools performance using our dataset, since it focuses on CTI-metrics) comprise instructions executable by the processor to cause the apparatus to,

generate a cyberattack signature based, at least in part, on the syntax description and indication of a type of cyberattack and a description of cyberattack context, wherein the instructions to generate the cyberattack signature comprise instructions executable by the processor to cause the apparatus to prompt the language model with a prompt to obtain the cyberattack signature in response, wherein the prompt comprises the syntax description, the indication of the type of cyberattack and the description of the cyberattack context (D1: page 9, left column, Attack Pattern Extraction: All the methods employ datasets based on the same taxonomy (i.e., MITRE ATT&CK) and that were directly extracted from the same source, either the description of the MITRE attack patterns or samples of MITRE attack pattern description (both provided by MITRE). Page 11, left column, last 3 paragraphs: aCTIon uses two different strategies for selecting the text block. The first strategy prompts the LLM to retrieve portions of the text that contains the attack pattern description (i.e., more than one sentence). However, the LLM may not recognize the specific attack pattern, indeed there is no guarantee that the LLM knows this specific attack pattern. To avoid such cases, a second strategy is used together with the previous one. The LLM is also prompted to reason about the key steps performed in the attack. The sentence below is the output of the second strategy. "The January 2022 version of PlugX malware uses RC4 encryption with a dynamically built key for communications with the command and control (C2) server." This sentence not only clearly expresses the attack pattern (cyberattack signature) but it easier to process in the classification step); and

determine whether the cyberattack signature satisfies minimum signature conditions for the type of cyberattack (D1: page 9, left column, B. Performance metrics: We compared each method against the Ground Truth (GT) from our dataset using the following metrics as defined in [19]: Recall: fraction of unique entities in the GT that have been correctly extracted; Precision: fraction of unique extracted entities that are correct (i.e. part of the GT); F1-score: harmonic mean of Precision and Recall. Page 10, right column, D. Attack Pattern Extraction: The first plot of Figure 8a reports the number of attack pattern extracted from each report by different methods. aCTIon outperforms all the baselines in terms of overall performance (F1-score) by about 10% point. More importantly, the recall is higher than any other solution, and the average precision is about 50%. These results make a manual verification by CTI analysts manageable: the average number of attack patterns extracted per-report is 25 (cf. Figure 8a). Page 13, left column, paragraph 3, Other issues: The current precision and recall of aCTIon is within the 60%-90% range for most entities. In our experience, this is already in line with the performance of a CTI analyst, and unlike human analysts, aCTIon keeps consistent performance over time not being affected by tiredness); and

based on determining that the cyberattack signature satisfies the minimum signature conditions (D1: page 13, left column, paragraph 3, Other issues: The current precision and recall of aCTIon is within the 60%-90% range for most entities. In our experience, this is already in line with the performance of a CTI analyst, and unlike human analysts, aCTIon keeps consistent performance over time not being affected by tiredness. Right column, VIII. Conclusion: Our evaluation on the proposed benchmark dataset shows that aCTIon largely outperforms previous solutions. Currently, aCTIon is in testing within our organization for daily production deployment).

D1 teaches "for daily production deployment" but does not teach actual deployment. Specifically, D1 does not explicitly teach: prompt the language model with one or more prompts to generate one or more additional cyberattack signatures for one or more additional types of cyberattacks, wherein each of the one or more prompts comprises indications of the syntax description and data for corresponding ones of the one or more additional types of cyberattacks.

However, Lal teaches: prompt the language model with one or more prompts to generate one or more additional cyberattack signatures for one or more additional types of cyberattacks, wherein each of the one or more prompts comprises indications of the syntax description and data for corresponding ones of the one or more additional types of cyberattacks (Lal: [0030]: the one or more LLMs may be deployed as components within the cyber threat detection engine, the cyber threat autonomous response engine, the cyberattack simulation engine, and/or the cyberattack restoration engine. [0056]: For instance, the LLM can generate filters (cyberattack signatures) based on specific criteria, such as specific times or periods of time, a specific IP address or IP address range, specific attack signatures (e.g., specific patterns of network traffic, email content or code segments, specific packet header content, specific commands or keywords, specific points of access attempts, etc.). [0084]: the LLM(s) 114 can generate JSON elements performing as filters that operate in accordance with specific criteria, such as timeframes, source IP addresses, or specific attack signatures as described above. [0085]: Pre-prompted with a defined specification, example parsing templates, and example pattern and log pairs, the LLM(s) 114 can transform the unstructured data set 111 into a format recognized and utilized by logic within the cyber threat detection engine 130).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Lal in the invention of D1 to include the above limitations. The motivation to do so would be to provide the AI-based cybersecurity system 100 with operational advantages by (i) simplifying the data processing pipeline through faster integration (AI detection model training) with third-party source data and (ii) enhancing the efficiency of cybersecurity operations through intelligent and targeted gathering of salient data to improve and/or expand cyber threat detection functionality supported by the cyber threat detection engine 130 (Lal: [0084]).

As per claim 5, D1 in view of Lal teaches:

The method of claim 1, wherein testing the first syntax description further comprises testing the one or more cyberattack signatures against traffic logs for the corresponding ones of the one or more cyberattack types (Lal: [0183]: The data store 350 stores comprehensive logs for network traffic observed. [0059]: The cybersecurity telemetry data uses regular expression pattern analytics to parse fields. Pre-prompted with a defined specification, such as example parsing templates, example regular expression patterns (e.g., GROK patterns) and log pairs, the LLM can produce new templates when given example log entries), wherein testing the one or more cyberattack signatures against traffic logs for the corresponding ones of the one or more cyberattack types comprises determining at least one of a false positive rate and a false negative rate of malicious detections for each of the one or more cyberattack signatures on traffic logs of corresponding types of cyberattacks (D1: page 9, left column, B. Performance metrics: We compared each method against the Ground Truth (GT) from our dataset using the following metrics as defined in [19]: Recall: fraction of unique entities in the GT that have been correctly extracted; Precision: fraction of unique extracted entities that are correct (i.e. part of the GT); F1-score: harmonic mean of Precision and Recall. The Precision is instead impacted by the extracted entities which are wrong (i.e. False Positive), e.g. the ones extracted with a wrong type).

The examiner provides the same rationale to combine prior arts D1 and Lal as in claim 1 above.
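The claim 5 testing step mapped above amounts to standard confusion-matrix bookkeeping over labeled traffic logs. A sketch under the assumption that each log entry carries a ground-truth maliciousness label and that a signature can be modeled as a match predicate (both hypothetical simplifications, not the claimed implementation):

```python
def error_rates(logs, signature_matches):
    """False-positive and false-negative rates for one signature
    over labeled traffic logs. `logs` is an iterable of
    (entry, is_malicious) pairs."""
    fp = fn = pos = neg = 0
    for entry, is_malicious in logs:
        hit = signature_matches(entry)
        if is_malicious:
            pos += 1
            if not hit:
                fn += 1  # missed attack
        else:
            neg += 1
            if hit:
                fp += 1  # benign traffic flagged
    return (fp / neg if neg else 0.0, fn / pos if pos else 0.0)

# Toy check: a signature that flags any entry containing "evil".
logs = [("GET /evil.php", True), ("GET /index.html", False),
        ("POST /evil.cgi", True), ("GET /evil.gif", False)]
fpr, fnr = error_rates(logs, lambda e: "evil" in e)
print(fpr, fnr)  # 0.5 (1 of 2 benign entries flagged), 0.0 (no misses)
```

D1's Precision/Recall framing is the complement of these rates: a high false-positive rate depresses precision, a high false-negative rate depresses recall.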
As per claim 6, D1 in view of Lal teaches:

The method of claim 5, further comprising, based on a determination that the first syntax description fails the testing, updating the first syntax description to a second syntax description; testing the second syntax description based, at least in part, on at least one of the previously generated cyberattack signatures minimum signature conditions and the traffic logs; and based on determining that the second syntax description passed the testing, deploying the second syntax description for generating prompts to the language model (D1: page 7, left column, last paragraph: For instance, we always provide the definition for an entity we want to extract, even if the LLM has in principle acquired knowledge about such entity definition during its training. Nonetheless, this approach relies exclusively on prompt engineering [30] (process of crafting and refining prompts), and by no means it provides strong guarantees about the produced output. Therefore, we always introduce additional steps with the aim of verifying LLM's answers. These steps might be of various types, including a second interaction with the LLM to perform a self-check activity: the LLM is prompted with a different request about the same task, with the objective of verifying consistency. Finally, we keep CTI analysts in the output verification loop, always including in our procedures the STIX bundle review step as described in Section III-A. Page 8, left column, paragraphs 2-4: Therefore, for the preprocessing our goal is to generate multiple descriptions of the same attack pattern, to enhance our ability to discover similarities between such descriptions and the taxonomy's examples. Lal: [0030]: the one or more LLMs may be deployed as components within the cyber threat detection engine, the cyber threat autonomous response engine, the cyberattack simulation engine, and/or the cyberattack restoration engine).

The examiner provides the same rationale to combine prior arts D1 and Lal as in claim 1 above.

As per claims 7, 14, and 20, D1 in view of Lal teaches:

The method of claim 1, wherein the cyberattack signature indicates at least one context and at least one pattern, wherein the context comprises one or more fields in a protocol (D1: page 11, left column, last 3 paragraphs: aCTIon uses two different strategies for selecting the text block. The first strategy prompts the LLM to retrieve portions of the text that contains the attack pattern description. The LLM is also prompted to reason about the key steps performed in the attack. The sentence below is the output of the second strategy. This sentence not only clearly expresses the attack pattern (cyberattack signature) but it easier to process in the classification step. Lal: [0084]: For instance, as an illustrative example, the LLM(s) 114 can generate JSON elements performing as filters that operate in accordance with specific criteria, such as timeframes, source IP addresses, or specific attack signatures as described above).

As per claims 8 and 15, D1 in view of Lal teaches:

The method of claim 1, wherein the language model comprises a large language model (D1: page 11, left column, last 3 paragraphs: The first strategy prompts the LLM to retrieve portions of the text that contains the attack pattern description).

As per claim 13, D1 in view of Lal teaches:

The non-transitory machine-readable medium of claim 9, wherein the instructions to deploy the syntax description in combination with the language model to generate cyberattack signatures of additional types of cyberattacks comprises instructions to, generate one or more prompts for corresponding one or more of the additional types of cyberattacks based, at least in part, on the syntax description and data for corresponding ones of the one or more of the additional types of security attacks; and prompt the language model with the one or more prompts to obtain the cyberattack signatures in response (Lal: [0030]: the one or more LLMs may be deployed as components within the cyber threat detection engine, the cyber threat autonomous response engine, the cyberattack simulation engine, and/or the cyberattack restoration engine. [0056]: For instance, the LLM can generate filters (cyberattack signatures) based on specific criteria, such as specific times or periods of time, a specific IP address or IP address range, specific attack signatures (e.g., specific patterns of network traffic, email content or code segments, specific packet header content, specific commands or keywords, specific points of access attempts, etc.). [0084]: the LLM(s) 114 can generate JSON elements performing as filters that operate in accordance with specific criteria, such as timeframes, source IP addresses, or specific attack signatures as described above. [0085]: Pre-prompted with a defined specification, example parsing templates, and example pattern and log pairs, the LLM(s) 114 can transform the unstructured data set 111 into a format recognized and utilized by logic within the cyber threat detection engine 130).
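The prompt-generation step recited in claim 13 can be pictured as plain string assembly: each prompt pairs the already validated syntax description with data for a new attack type. A hypothetical sketch (the syntax text, type names, and data strings are all illustrative, not the applicant's, D1's, or Lal's actual prompt format):

```python
# Illustrative syntax description, loosely echoing the context/pattern
# pairing recited in claims 7, 14, and 20.
SYNTAX_DESCRIPTION = (
    "A signature is one or more (context, pattern) pairs, where context "
    "names a protocol field and pattern is a regex matched against it."
)

def build_prompts(syntax_description, attack_types):
    """One prompt per additional cyberattack type, combining the
    deployed syntax description with that type's data."""
    return [
        f"{syntax_description}\n\n"
        f"Attack type: {name}\nObserved data: {data}\n"
        f"Generate a cyberattack signature in the syntax above."
        for name, data in attack_types.items()
    ]

prompts = build_prompts(SYNTAX_DESCRIPTION, {
    "sql-injection": "requests containing ' OR 1=1 --",
    "dns-tunneling": "long high-entropy subdomain labels",
})
print(len(prompts))  # 2
```

Each prompt would then be sent to the language model, and the responses fed back through the same minimum-signature-condition check before deployment.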
As per claims 21, 24, and 27, D1 in view of Lal teaches: The method of claim 1, wherein the minimum signature conditions comprise conditions that, for each cyberattack signature of the one or more cyberattack signatures, the cyberattack signature comprises one or more pairs of contexts and patterns of previously generated cyberattack signatures for a corresponding cyberattack type of the one or more cyberattack types (D1: page 9: left column: B. Performance metrics: We compared each method against the Ground Truth (GT) from our dataset using the following metrics as defined in [19]: • Recall: fraction of unique entities in the GT that have been correctly extracted. aCTIon outperforms all the baselines in terms of overall performance (F1-score) by about 10% point. More importantly, the recall is higher than any other solution, and the average precision is about 50%). As per claims 23, 26, and 29, D1 in view of Lal teaches: The method of claim 1, wherein testing the first syntax description further comprises generating the one or more prompts with the first syntax description, data of respective ones of one or more cyberattack types, and data of context of respective ones of the one or more cyberattack types (D1: page 8: left column: paragraphs 2-4: Therefore, for the preprocessing our goal is to generate multiple descriptions of the same attack pattern, to enhance our ability to discover similarities between such descriptions and the taxonomy’s examples. In particular, we introduce three different description generation strategies. The first strategy prompts the LLM to extract blocks of raw text or sentences that explicitly contain formal descriptions of attack patterns. The output of this strategy is generally a paragraph, or in some cases a single sentence. The second strategy leverages the LLM’s reasoning abilities and prompts it to describe step-by-step the attack’s events, seeking to identify implicit descriptions [34]. 
Page 9, left column: Attack Pattern Extraction: All the methods employ datasets based on the same taxonomy (i.e., MITRE ATT&CK) and that were directly extracted from the same source, either the description of the MITRE attack patterns or samples of MITRE attack pattern description (both provided by MITRE)).

Claims 22, 25, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over D1 in view of Lal as applied to claims 21, 24, and 27 above, and further in view of US 20190230098 to Navarro (hereinafter Navarro).

As per claims 22, 25, and 28, D1 in view of Lal does not teach the limitations of claims 22, 25, and 28. However, Navarro teaches: wherein the one or more pairs of contexts and patterns comprise patterns for fields of one or more Internet protocols included in the previously generated cyberattack signatures (Navarro: [0009]: More specifically, an Indicator of Compromise Calculation (IoC-C) system is described that may monitor a client interaction performed on a computing device, and further identify IoC metadata that may relate to a malicious threat. In various examples, the IoC metadata may include virus signatures, Internet Protocol (IP) addresses, email address, an indication of a service configuration change, an indication of a data file being deleted, registry keys, file hashes (i.e., MD5 hashes), or Hyper Text Transfer Protocol (HTTP) user agents (internet protocols)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Navarro in the invention of D1 in view of Lal to include the above limitations. The motivation to do so would be to use the IoC metadata to identify a malicious threat from data records (Navarro: [0010]).
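The testing step at issue in claims 21, 24, and 27 — accept the syntax description only if a threshold percentage of the generated signatures satisfy minimum conditions, where a minimum condition is that a signature contains context/pattern pairs from previously generated signatures of the same attack type — can be sketched as follows. This is a simplified illustration under assumed data shapes; all names (`meets_minimum`, `passes_threshold`, the `"pairs"` field) are hypothetical, not the claimed implementation.

```python
def meets_minimum(signature: dict, attack_type: str, prior_pairs: dict) -> bool:
    """Simplified minimum condition: the signature contains at least one
    (context, pattern) pair seen in previously generated signatures
    for the same attack type."""
    prior = prior_pairs.get(attack_type, set())
    return any(pair in prior for pair in signature.get("pairs", []))

def passes_threshold(signatures: dict, prior_pairs: dict, threshold: float = 0.8) -> bool:
    """Accept the syntax description only if at least a threshold fraction of
    the test signatures satisfy the minimum condition for their attack type."""
    if not signatures:
        return False
    ok = sum(meets_minimum(sig, atype, prior_pairs) for atype, sig in signatures.items())
    return ok / len(signatures) >= threshold
```

The 0.8 threshold is an arbitrary placeholder; the claims recite only "a threshold percentage" without fixing its value.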
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: DeepSign: Deep Learning for Automatic Malware Signature Generation and Classification by David et al.: This paper presents a novel deep learning based method for automatic malware signature generation and classification. The method uses a deep belief network (DBN), implemented with a deep stack of denoising autoencoders, generating an invariant compact representation of the malware behavior. While conventional signature and token based methods for malware detection do not detect a majority of new variants for existing malware, the results presented in this paper show that signatures generated by the DBN allow for an accurate classification of new malware variants. Using a dataset containing hundreds of variants for several major malware families, our method achieves 98.6% classification accuracy using the signatures generated by the DBN. The presented method is completely agnostic to the type of malware behavior that is logged (e.g., API calls and their parameters, registry entries, web sites and ports accessed, etc.), and can use any raw input from a sandbox to successfully train the deep neural network which is used to generate malware signatures.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MADHURI R HERZOG whose telephone number is (571)270-3359. The examiner can normally be reached 8:30AM-4:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Taghi Arani, can be reached at (571)272-3787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MADHURI R HERZOG/
Primary Examiner, Art Unit 2438

Prosecution Timeline

Nov 28, 2023
Application Filed
Jul 09, 2025
Non-Final Rejection — §103
Oct 01, 2025
Interview Requested
Oct 02, 2025
Interview Requested
Oct 08, 2025
Applicant Interview (Telephonic)
Oct 08, 2025
Examiner Interview Summary
Oct 31, 2025
Response Filed
Feb 17, 2026
Final Rejection — §103
Apr 01, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603766
QKD SWITCHING SYSTEM AND PROTOCOLS
2y 5m to grant Granted Apr 14, 2026
Patent 12592925
METHOD AND SYSTEM FOR AUTHENTICATING A USER ON AN IDENTITY-AS-A-SERVICE SERVER WITH A TRUSTED THIRD PARTY
2y 5m to grant Granted Mar 31, 2026
Patent 12592820
SYSTEMS AND METHODS FOR DIGITAL RETIREMENT OF INFORMATION HANDLING SYSTEMS
2y 5m to grant Granted Mar 31, 2026
Patent 12587383
METHOD AND SYSTEM FOR OUT-OF-BAND USER IDENTIFICATION IN THE METAVERSE VIA BIOGRAPHICAL (BIO) ID
2y 5m to grant Granted Mar 24, 2026
Patent 12556550
THREAT DETECTION PLATFORMS FOR DETECTING, CHARACTERIZING, AND REMEDIATING EMAIL-BASED THREATS IN REAL TIME
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
78%
Grant Probability
90%
With Interview (+11.9%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 662 resolved cases by this examiner. Grant probability derived from career allow rate.
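The headline projection figures follow directly from the career data stated above: a 78% allow rate from 516 grants out of 662 resolved cases, and a roughly 90% figure with an interview after adding the +11.9-point lift. A quick check of that arithmetic (variable names are illustrative only):

```python
# Reproduce the headline figures from the examiner's stated career data:
# 516 grants / 662 resolved cases, with a +11.9-percentage-point interview lift.
granted, resolved = 516, 662
allow_rate = round(100 * granted / resolved)          # career allow rate, percent
interview_lift = 11.9                                 # percentage-point lift
with_interview = round(allow_rate + interview_lift)   # projected rate with interview
```

This yields the 78% and 90% figures shown, confirming the "With Interview" number is the base rate plus the lift rather than an independently measured rate.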
