Prosecution Insights
Last updated: April 19, 2026
Application No. 18/619,834

THREAT-INFORMED ADVERSARY ATTACK SIMULATION

Final Rejection: §103, §112

Filed: Mar 28, 2024
Examiner: ALI, AFAQ
Art Unit: 2434
Tech Center: 2400 — Computer Networks
Assignee: Fortinet Inc.
OA Round: 2 (Final)

Grant Probability: 90% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (119 granted / 132 resolved) — above average, +32.2% vs TC avg
Interview Lift: +12.2% (moderate), over resolved cases with an interview
Avg Prosecution: 2y 7m (typical timeline)
Currently Pending: 24
Total Applications: 156 (career history, across all art units)
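The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how such metrics might be derived (the with/without-interview split below is hypothetical and chosen only to be consistent with the 119/132 career totals; it is not dashboard data):

```python
from dataclasses import dataclass

@dataclass
class ExaminerStats:
    granted: int   # applications allowed
    resolved: int  # disposed cases (allowed + abandoned)

def allow_rate(s: ExaminerStats) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * s.granted / s.resolved

def interview_lift(with_iv: ExaminerStats, without_iv: ExaminerStats) -> float:
    """Percentage-point gain in allow rate for cases that had an interview."""
    return allow_rate(with_iv) - allow_rate(without_iv)

# Career figures from the panel above: 119 granted / 132 resolved.
career = ExaminerStats(granted=119, resolved=132)
print(round(allow_rate(career), 1))  # 90.2

# Hypothetical split summing to the career totals (48 + 84 = 132 resolved).
with_iv, without_iv = ExaminerStats(46, 48), ExaminerStats(73, 84)
print(round(interview_lift(with_iv, without_iv), 1))
```

The lift is just the difference of two conditional allow rates; note that cases with interviews are not a random sample, so a raw difference like this overstates or understates the causal effect of interviewing.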

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§102: 5.2% (-34.8% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 132 resolved cases.
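The per-statute deltas above are plain differences between the examiner's rate and the Tech Center average, so the baseline can be recovered from each pair. A quick sketch using the figures from the panel (statute keys are just labels):

```python
# Examiner per-statute rates (%) and deltas vs. the Tech Center average,
# as shown above. The implied TC average for each statute is rate - delta.
examiner = {"101": 7.5, "102": 5.2, "103": 49.0, "112": 20.3}
delta    = {"101": -32.5, "102": -34.8, "103": 9.0, "112": -19.7}

tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every stated delta implies the same ~40.0% baseline
```

All four pairs back out the same 40.0% figure, which is consistent with the chart drawing a single "Tech Center average estimate" line rather than a per-statute baseline.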

Office Action

Rejections under §103 and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

The objection to the abstract has been overcome by applicant’s amendments. The claim objection to claims 2 and 3 has been overcome by applicant’s amendments. Some of the 35 U.S.C. 112(b) rejections of claims 1-5 have been overcome by applicant’s amendments. Claims 1, 2, 4, and 5 have been amended. Claim 3 has been cancelled. Claims 1, 2, 4, and 5 are pending.

Response to Arguments

Applicant’s arguments filed on 12/11/2025 have been fully considered.

With respect to the objection to the abstract: the objection has been overcome by applicant’s amendments.

With respect to the objection to claims 2 and 3: the objection has been overcome by applicant’s amendments.

With respect to the 35 U.S.C. 112(b) rejection of claim 1 reciting the limitation “A computer-implemented method in a threat simulation system on an enterprise network, and at least partially implemented in hardware, to perform a method for assessing threat defenses”: the rejection has been overcome by applicant’s filed amendments.

With respect to the 35 U.S.C. 112(b) rejection of claims 1-5 reciting the limitation “the simulated attack pattern”: the rejection has been overcome by applicant’s amendments.

With respect to the 35 U.S.C. 112(b) rejection of claim 5 reciting the limitation “the deceptive proxy”: the rejection has been overcome by applicant’s amendments.

With respect to the 35 U.S.C. 112(f) interpretation for claim 5, applicant has argued that the newly amended limitation of “communicatively coupled to the processor and storing source code that, when executed by the processor, comprises: …” overcomes the 112(f) interpretation. Examiner respectfully disagrees. The claim limitation currently recites only that, once the source code is executed by the processor, it comprises a threat profile generator, a profile selector, an attack simulator, and an attack assessor; it does not recite that, when the source code is executed by the processor, it implements these features. Examiner suggests amending the claim to recite “communicatively coupled to the processor and storing source code that, when executed by the processor, implements: …”. Appropriate correction is required.

With respect to the 35 U.S.C. 112(b) rejection of claim 5 for the written description failing to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function: the rejection is maintained because the amendments do not overcome the 112(f) claim interpretation.

With respect to the newly added amendment of simulating an advanced persistent threat (ATP) attack for claim 1: BORT teaches this limitation, as seen in the previous office action’s rejection of now-cancelled claim 3. Therefore, claim 1 is rejected under ALEXANDER-BUTCHKO-BORT. Furthermore, the newly amended limitation of “simulating … attack on the components by injecting data packets, from within the enterprise network …” for claims 1, 4, and 5 is taught by ALEXANDER, as seen in figure 2 and para. 0005 of ALEXANDER.

Additional arguments are moot in view of the new grounds of rejection necessitated by the claim amendments.

Claim Objections

Claim 1 is objected to because of the following informalities: claim 1 recites “advanced persistent threat (ATP)”. The acronym “ATP” is incorrect; examiner believes it should be amended to “APT”. For the purpose of examination, examiner is interpreting this limitation as “advanced persistent threat (APT)” and all other recitations of ATP as APT. Appropriate correction is required.
Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

“a threat profile generator to generate” in claim 5
“a profile selector to select” in claim 5
“… an attack simulator to simulate” in claim 5
“… an attack assessor to collect logs” in claim 5

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

See specification paragraphs [0046, 0047, 0037] for structural support for “a threat profile generator”, described as a Threat Actor/APT Profile Generating Engine in the specification.
See specification paragraphs [0020, 0022] for functional support for “a threat profile generator”.

See specification paragraphs [0046, 0047, 0037] for structural support for “… an attack assessor”, described as a validation server in the specification.

See specification paragraphs [0034, 0026] for functional support for “… an attack assessor”.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may:

(1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or
(2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1, 2, and 5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites “simulating advanced persistent threat (ATP) attack” and “generating a dynamic adversary profile for a simulated attack”. The claim further recites “based on results of the simulated ATP attack, measure defenses to the simulated attack on components …”. It is unclear whether the simulated attack is different from the simulated ATP attack. Examiner suggests amending the claim to recite “the simulated ATP attack” instead of “the simulated attack”. Appropriate correction is required.

Claim 2 depends on claim 1 and further recites “the simulated attack”; therefore, claim 2 inherits the rejection.

The following claim limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

“a profile selector to select” in claim 5
“… an attack simulator to simulate” in claim 5

However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification fails to describe structure for a profile selector: it describes selecting group profiles but does not link the selecting to a profile selector with corresponding hardware. As for the attack simulator, the specification likewise fails to describe structure: it describes a Threat Simulation Agent (TS Agent), but there is no structural support for the TS Agent in the specification. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function, so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates them to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over ALEXANDER (US-20170331847-A1) in view of BUTCHKO (US-20210264038-A1), and further in view of BORT (US-20180219902-A1), hereinafter ALEXANDER-BUTCHKO-BORT.

Regarding claim 1, ALEXANDER teaches “A computer-implemented method in a threat simulation system on an enterprise network, and at least partially implemented in hardware, for assessing threat defenses by simulating … attack using real-time threat intelligence, the computer-implemented method comprising:”

([ALEXANDER, abstract] “Systems and methods are disclosed herein to provide improved online security testing of security devices and networks”)
([ALEXANDER, para. 0005] “With reference to FIG. 2, a representation of a possible online test setup is depicted. Corporate main office 10 connected to Internet 11 with security device 12 interposed between Internet 11 router 13 may utilize online tester 20 to conduct periodic security assessments and determine that an adequate security posture is being maintained. Such attacks may simulate the effect of attack traffic arriving from Internet 11 and directed at protected device 15”)
([ALEXANDER, para. 0006] “online tester 20 may be set up to perform a simulated attack after a new software version has been loaded into security device 12 or router 13, or after the network or devices have been reconfigured”)

“… wherein the simulated attack is based on historical attack data, threat intelligence feeds, and real-time monitoring of adversary profiles;”

([ALEXANDER, para. 0053] “master attack DB 163, that may hold the complete set of attacks available to be conducted, and from which subsets may be downloaded to remote probes”)
([ALEXANDER, para. 0057] “attack profiles in master attack DB 163 within online test manager 110 may need to be periodically updated, as new security issues are found and new attack vectors are developed. This may also be done through user interface connection 168, for example by enabling the download of attack profiles to master attack DB 163 over the Internet via a secured connection to a centralized repository. … the remote probes may always be kept up to date with the latest set of attack profiles”)
([ALEXANDER, para. 0047] “Security device 63 may intercept and examine the simulated attack traffic, and, if it matches signatures or attributes of known attack vectors”)
([ALEXANDER, para. 0007] “verify that security device 12 is properly configured and functioning by simulating the signature of the data being exfiltrated (using attack generator 21), injecting traffic into router 13, and detecting (using attack checker 22) whether the exfiltrated data is observed at the Internet-facing side of security device 12.”)

“selecting relevant adversary group profile;”

([ALEXANDER, para. 0054] “online test manager 110 may accept user commands at user interface 162, and may act on them to select a subset of the available attack profiles in master attack DB 163 and download this subset to the local attack DB in one or more remote probes (such as local attack DB 155 in remote probe 112 in FIG. 6).”)

“simulating … attack on the components by injecting data packets, from within the enterprise network, based on the specific threat group without malicious components of the specific threat group to test security defenses of the components;”

([ALEXANDER, para. 0054] “After online test manager 110 has downloaded the desired subset of attack profiles, it may command the remote probe(s) to begin processing these profiles and injecting and processing attack traffic. While the remote probe(s) are in operation, online test manager 110 may monitor the progress of the simulated attack”)
([ALEXANDER, para. 0079] “Simulated attack traffic that is generated by remote probe 112 and then encapsulated and sent via a dedicated tunnel to modified remote attack reflector 221 in this manner may thus take the appearance of originating from an actual attacker located on the Internet.”)
([ALEXANDER, para. 0076] “Simulated wireless attack traffic 363 generated by attack generator and checker 113 may be injected into AP 355 via wireless antenna 361; alternatively, simulated wired attack traffic also generated by attack generator and checker 113 may be injected into security device 356 via wired link 367”)
([ALEXANDER, para. 0005] “With reference to FIG. 2, a representation of a possible online test setup is depicted. Corporate main office 10 connected to Internet 11 with security device 12 interposed between Internet 11 router 13 may utilize online tester 20 to conduct periodic security assessments and determine that an adequate security posture is being maintained. … Attack simulation may be conducted by generating simulated attack traffic 23 from attack generator 21 within online tester 20. Attack traffic 23 may be injected into security device 12 on its Internet-facing side. If security device 12 is improperly configured or has unexpected vulnerabilities, some fraction of attack traffic 23 may be inadvertently allowed to pass through as “leaking” attack traffic 24. Attack checker 22 may simulate a protected entity, such as protected device 15 or protected data 16, and may receive the leaking attack traffic 24, and this may effectively indicate that an attacker could gain access to an actual protected device as a result. Online tester 20 may then determine the vulnerabilities of security device 12 by comparing the generated attack traffic 23 with leaking attack traffic 24, and may create a report detailing the problems.”)

“collecting logs from the simulated … attack;”

([ALEXANDER, para. 0074] “At step 310, after all the desired attack sequences have been performed a report is generated and issued to the user of the test system. For example, online test manager 110 may generate a report that indicates the results of each attack sequence. The report may include raw statistics for each attack sequence, such as the number of malicious packets that make it through a security device, versus the total of number of packets transmitted to the security device”)

“and based on results of the simulated … attack, measure defenses to the simulated attack on components and take a security action concerning at least one of the components.”

([ALEXANDER, para. 0054] “When the desired set of attacks has been completed, online test manager 110 may receive attack results and status indications from the remote probe(s) and may utilize them to assess the security posture and generate reports that may be passed to the user via user interface 162. As the simulated attack traffic may be injected into a wireless AP by wireless interface 156 and antenna 159 of FIG. 6, it is therefore possible to test the ability the WLAN (e.g., the combination of wireless AP 64 and security device 63 in FIG. 5) to detect and prevent security breaches by wireless attackers.”)
([ALEXANDER, para. 0006] “the objective of using online tester 20 may generally be to detect and close off security “holes” before they become an actual problem.”)

However, ALEXANDER does not teach “generating a dynamic adversary profile for a simulated attack on components of the enterprise network by selecting a profile of at least one specific threat group”. In analogous teaching, BUTCHKO teaches this limitation:

([BUTCHKO, para. 0050] “a risk analysis and assessment (RAA) system receives a request for a risk analysis report for an asset, such as risk analysis report 270 for assets 210.”)
([BUTCHKO, para. 0051] “The flow chart continues at 320, where the RAA system identifies at least one attacker and/or attack type based on an asset type for the asset”)
([BUTCHKO, para. 0034] “the attack method library 152 may include attack types relevant to the organization's particular configuration of assets”)
([BUTCHKO, para. 0053] “At 350, the RAA system identifies at least one vulnerability of the protection measures for the asset to the attack type based on the simulated attack scenario. … Once all of the attacker and/or attack types identified at 320 have been simulated, the RAA system generates the risk analysis report for the asset at 370. In some embodiments, the risk analysis report includes a detailed description of the simulated attack scenario, including significant events, actions, decisions and outcomes affecting successful completion of the attack.”)

Thus, given the teaching of BUTCHKO, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of generating a dynamic adversary profile for a simulated attack by BUTCHKO into the teaching of a threat simulation system by ALEXANDER. One of ordinary skill in the art would have been motivated to do so because BUTCHKO recognizes the need for improved risk analysis ([BUTCHKO, para. 0002] “what is needed is an improved risk analysis and management system capable of standardizing qualitative assessments, decreasing the amount of initial data input needed to perform assessments, and providing contextualized recommendations for mitigation strategies”).

However, ALEXANDER-BUTCHKO does not teach “simulating advanced persistent threat (ATP) attack”. In analogous teaching, BORT teaches this limitation ([BORT, para. 0018] “An operator may select a preset configuration from this Threat Catalog for the campaign. The Threat Catalog contains a list of previously published exploits. … if an operator wants to simulate the Miniduke Advanced Persistent Threat (APT) on a target network, the operator selects this element from the Threat Catalog for the campaign. The platform will emulate the attack vectors used by Miniduke APT against the target network.”). Thus, given the teaching of BORT, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of simulating an APT attack by BORT into the teaching of a threat simulation system by ALEXANDER-BUTCHKO. One of ordinary skill in the art would have been motivated to do so because BORT recognizes the need to improve network security ([BORT, para. 0005] “existing systems for discovery of vulnerabilities are limited because they do not actually attempt exploitation on an endpoint in a production system, and do not scale. Thus, what is needed is a system for allowing network security personnel to quickly discern malicious messages from a large volume of reported threats”).

Regarding claim 2, ALEXANDER-BUTCHKO-BORT teaches all limitations of claim 1. ALEXANDER further teaches “wherein the simulated attack comprises one or more of simulating network traffic patterns, simulating download samples, playbook simulation, simulating exfiltrating demonstration files and simulating pinging known C2 infrastructure.” ([ALEXANDER, para. 0005] “Attack simulation may be conducted by generating simulated attack traffic 23 from attack generator 21 within online tester 20. Attack traffic 23 may be injected into security device 12 on its Internet-facing side.”)

Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over ALEXANDER (US-20170331847-A1) in view of BUTCHKO (US-20210264038-A1), hereinafter ALEXANDER-BUTCHKO.
Regarding claim 4, ALEXANDER teaches “A non-transitory computer-readable medium in a threat simulation system on an enterprise network, and at least partially implemented in hardware, to perform a method for assessing threat defenses by simulating an attack using real-time threat intelligence, the method comprising: ([ALEXANDER, abstract] “Systems and methods are disclosed herein to provide improved online security testing of security devices and networks”) ([ALEXANDER, para. 0005] “With reference to FIG. 2, a representation of a possible online test setup is depicted. Corporate main office 10 connected to Internet 11 with security device 12 interposed between Internet 11 router 13 may utilize online tester 20 to conduct periodic security assessments and determine that an adequate security posture is being maintained. Such attacks may simulate the effect of attack traffic arriving from Internet 11 and directed at protected device 15”) ([ALEXANDER, para. 0006] “online tester 20 may be set up to perform a simulated attack after a new software version has been loaded into security device 12 or router 13, or after the network or devices have been reconfigured”) … wherein the simulated attack is based on historical attack data, threat intelligence feeds, and real-time monitoring of adversary profiles; ([ALEXANDER, para. 0053] “master attack DB 163, that may hold the complete set of attacks available to be conducted, and from which subsets may be downloaded to remote probes”) ([ALEXANDER, para. 0057] “attack profiles in master attack DB 163 within online test manager 110 may need to be periodically updated, as new security issues are found and new attack vectors are developed. This may also be done through user interface connection 168, for example by enabling the download of attack profiles to master attack DB 163 over the Internet via a secured connection to a centralized repository. 
… the remote probes may always be kept up to date with the latest set of attack profiles”) ([ALEXANDER, para. 0047] “Security device 63 may intercept and examine the simulated attack traffic, and, if it matches signatures or attributes of known attack vectors”) ([ALEXANDER, para. 0007] “verify that security device 12 is properly configured and functioning by simulating the signature of the data being exfiltrated (using attack generator 21), injecting traffic into router 13, and detecting (using attack checker 22) whether the exfiltrated data is observed at the Internet-facing side of security device 12.”) selecting relevant adversary group profile; ([ALEXANDER, para. 0054] “online test manager 110 may accept user commands at user interface 162, and may act on them to select a subset of the available attack profiles in master attack DB 163 and download this subset to the local attack DB in one or more remote probes (such as local attack DB 155 in remote probe 112 in FIG. 6).”) simulating the attack on the components from within the enterprise network, by injecting data packets based on the specific threat group without malicious components of the specific threat group to test security defenses of the components; ([ALEXANDER, para. 0054] “After online test manager 110 has downloaded the desired subset of attack profiles, it may command the remote probe(s) to begin processing these profiles and injecting and processing attack traffic. While the remote probe(s) are in operation, online test manager 110 may monitor the progress of the simulated attack”) ([ALEXANDER, para. 0079] “Simulated attack traffic that is generated by remote probe 112 and then encapsulated and sent via a dedicated tunnel to modified remote attack reflector 221 in this manner may thus take the appearance of originating from an actual attacker located on the Internet.”) ([ALEXANDER, para. 
0076] “Simulated wireless attack traffic 363 generated by attack generator and checker 113 may be injected into AP 355 via wireless antenna 361; alternatively, simulated wired attack traffic also generated by attack generator and checker 113 may be injected into security device 356 via wired link 367”) ([ALEXANDER, para. 0005] “With reference to FIG. 2, a representation of a possible online test setup is depicted. Corporate main office 10 connected to Internet 11 with security device 12 interposed between Internet 11 router 13 may utilize online tester 20 to conduct periodic security assessments and determine that an adequate security posture is being maintained. … Attack simulation may be conducted by generating simulated attack traffic 23 from attack generator 21 within online tester 20. Attack traffic 23 may be injected into security device 12 on its Internet-facing side. If security device 12 is improperly configured or has unexpected vulnerabilities, some fraction of attack traffic 23 may be inadvertently allowed to pass through as “leaking” attack traffic 24. Attack checker 22 may simulate a protected entity, such as protected device 15 or protected data 16, and may receive the leaking attack traffic 24, and this may effectively indicate that an attacker could gain access to an actual protected device as a result. Online tester 20 may then determine the vulnerabilities of security device 12 by comparing the generated attack traffic 23 with leaking attack traffic 24, and may create a report detailing the problems.”) collecting logs from the simulated attack; ([ALEXANDER, para. 0074] “At step 310, after all the desired attack sequences have been performed a report is generated and issued to the user of the test system. For example, online test manager 110 may generate a report that indicates the results of each attack sequence. 
The report may include raw statistics for each attack sequence, such as the number of malicious packets that make it through a security device, versus the total of number of packets transmitted to the security device”) and based on results of the simulated attack, measure defenses to the simulated attack on components and take a security action concerning at least one of the components. ([ALEXANDER, para. 0054] “When the desired set of attacks has been completed, online test manager 110 may receive attack results and status indications from the remote probe(s) and may utilize them to assess the security posture and generate reports that may be passed to the user via user interface 162. As the simulated attack traffic may be injected into a wireless AP by wireless interface 156 and antenna 159 of FIG. 6, it is therefore possible to test the ability the WLAN (e.g., the combination of wireless AP 64 and security device 63 in FIG. 5) to detect and prevent security breaches by wireless attackers.”) ([ALEXANDER, para. 0006] “the objective of using online tester 20 may generally be to detect and close off security “holes” before they become an actual problem.”). However, ALEXANDER does not teach “generating a dynamic adversary profile for a simulated attack on components of the enterprise network by selecting a profile of at least one specific threat group”. In analogous teaching BUTCHKO teaches “generating a dynamic adversary profile for a simulated attack on components of the enterprise network by selecting a profile of at least one specific threat group” ([BUTCHKO, para. 0050] “a risk analysis and assessment (RAA) system receives a request for a risk analysis report for an asset, such as risk analysis report 270 for assets 210.”) ([BUTCHKO, para. 0051] “The flow chart continues at 320, where the RAA system identifies at least one attacker and/or attack type based on an asset type for the asset”) ([BUTCHKO, para. 
0034] “the attack method library 152 may include attack types relevant to the organization's particular configuration of assets”) ([BUTCHKO, para. 0053] “At 350, the RAA system identifies at least one vulnerability of the protection measures for the asset to the attack type based on the simulated attack scenario. … Once all of the attacker and/or attack types identified at 320 have been simulated, the RAA system generates the risk analysis report for the asset at 370. In some embodiments, the risk analysis report includes a detailed description of the simulated attack scenario, including significant events, actions, decisions and outcomes affecting successful completion of the attack.”).

Thus, given the teaching of BUTCHKO, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of generating a dynamic adversary profile for a simulated attack by BUTCHKO into the teaching of a threat simulation system by ALEXANDER. One of ordinary skill in the art would have been motivated to do so because BUTCHKO recognizes the need for improved risk analysis ([BUTCHKO, para. 0002] “what is needed is an improved risk analysis and management system capable of standardizing qualitative assessments, decreasing the amount of initial data input needed to perform assessments, and providing contextualized recommendations for mitigation strategies”).

Regarding claim 5, this claim recites a threat simulation system that performs the features of claim 4. Therefore, claim 5 is rejected in a similar manner as in the rejection of claim 4.

Pertinent Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

CHEN (US-20210112092-A1): This prior art teaches systems for preventing an APT attack, along with non-transitory machine-readable storage mediums.
In one aspect, communication data is obtained in a network, association analysis is performed on the communication data, threat data is obtained from the communication data based on an association analysis result, and each piece of the obtained threat data is mapped to a corresponding APT attack phase based on a kill chain model; for each piece of threat data, prevention is performed for a network entity associated with that piece of threat data based on prevention strategies corresponding to the plurality of APT attack phases.

CONNELL (US-20200358806-A1): This prior art teaches a system and method for developing rich data for holistic metrics for gauging an enterprise cyber security posture, enabling proactive and preventative measures to minimize the enterprise's exposure to a cyberattack. By taking an enterprise-wide holistic approach to cyber security, the enterprise will have the information needed to identify areas of its network systems for remediation, making the enterprise a less attractive target for cyber threat actors.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFAQ ALI, whose telephone number is (571) 272-1571. The examiner can normally be reached Mon - Fri 7:30am - 5:30pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ALI SHAYANFAR, can be reached at (571) 270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.A./ 02/05/2026
/AFAQ ALI/ Examiner, Art Unit 2434
/NOURA ZOUBAIR/ Primary Examiner, Art Unit 2434
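The combination the rejection relies on (select a threat-group profile as in BUTCHKO, generate simulated attack traffic, count what leaks past the security device, and report per-attack statistics as in ALEXANDER) can be sketched roughly as follows. This is a minimal illustration only: every name, profile, and data structure here is hypothetical and does not come from either reference.

```python
# Hypothetical sketch of a leak-comparison security test driven by a
# selected threat-group profile. All profiles and attack types are invented.

from dataclasses import dataclass

# Stand-in for BUTCHKO's "attack method library": threat-group profile -> attack types.
ATTACK_LIBRARY = {
    "apt-generic": ["port_scan", "sql_injection", "dns_tunnel"],
    "ransomware": ["phishing_payload", "smb_exploit"],
}

@dataclass
class Packet:
    attack_type: str
    payload: str

def generate_attack_traffic(profile: str, per_type: int = 10) -> list[Packet]:
    """Build simulated attack packets for each attack type in the selected profile."""
    return [
        Packet(attack_type=t, payload=f"{t}-{i}")
        for t in ATTACK_LIBRARY[profile]
        for i in range(per_type)
    ]

def security_device(packet: Packet, blocked_types: set[str]) -> bool:
    """Mock security device: returns True if the packet passes through (leaks)."""
    return packet.attack_type not in blocked_types

def run_simulation(profile: str, blocked_types: set[str]) -> dict[str, dict]:
    """Inject traffic, collect the leaked packets, and report per-type statistics,
    in the spirit of comparing generated attack traffic with leaking traffic."""
    sent = generate_attack_traffic(profile)
    leaked = [p for p in sent if security_device(p, blocked_types)]
    report: dict[str, dict] = {}
    for t in ATTACK_LIBRARY[profile]:
        total = sum(p.attack_type == t for p in sent)
        through = sum(p.attack_type == t for p in leaked)
        report[t] = {"sent": total, "leaked": through, "vulnerable": through > 0}
    return report

# The mock device blocks two attack types; dns_tunnel traffic leaks through,
# so the report flags it as a vulnerability.
report = run_simulation("apt-generic", blocked_types={"port_scan", "sql_injection"})
```

The report mirrors the "malicious packets that make it through versus total transmitted" statistic quoted from ALEXANDER; a real system would of course inject real traffic rather than call a mock filter.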

Prosecution Timeline

Mar 28, 2024
Application Filed
Aug 07, 2025
Non-Final Rejection — §103, §112
Nov 17, 2025
Response Filed
Nov 17, 2025
Response after Non-Final Action
Dec 11, 2025
Response Filed
Feb 05, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585791
ENCRYPTED COMMUNICATION METHOD AND ELECTRONIC DEVICE
2y 5m to grant · Granted Mar 24, 2026
Patent 12572656
CONTROL FLOW INTEGRITY MONITORING BASED INSIGHTS
2y 5m to grant · Granted Mar 10, 2026
Patent 12563050
TECHNIQUES FOR DETECTING CYBER-ATTACK SCANNERS
2y 5m to grant · Granted Feb 24, 2026
Patent 12554828
MULTI-FACTOR AUTHENTICATION USING BLOCKCHAIN
2y 5m to grant · Granted Feb 17, 2026
Patent 12549585
VULNERABILITY SCANNING OF HIDDEN NETWORK SYSTEMS
2y 5m to grant · Granted Feb 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
90%
Grant Probability
99%
With Interview (+12.2%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 132 resolved cases by this examiner. Grant probability derived from career allow rate.
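The headline grant probability follows directly from the examiner's career counts shown above (119 granted out of 132 resolved). A quick check, assuming the displayed figure is simply the rounded career allow rate:

```python
# Reproduce the displayed grant probability from the examiner's career counts.
granted, resolved = 119, 132
allow_rate = granted / resolved              # ≈ 0.9015
grant_probability = round(allow_rate * 100)  # displayed as 90%
```

The "with interview" figure is presumably adjusted by the +12.2% interview lift under the tool's own model, which is not disclosed here.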
