Prosecution Insights
Last updated: April 19, 2026
Application No. 18/845,199

SECURITY METHOD FOR IDENTIFYING KILL CHAINS

Non-Final OA: §101, §103, §112
Filed: Sep 09, 2024
Examiner: POUDEL, SAMIKSHYA NMN
Art Unit: 2436
Tech Center: 2400 — Computer Networks
Assignee: British Telecommunications Public Limited Company
OA Round: 1 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 44% of resolved cases (8 granted / 18 resolved; -13.6% vs TC avg)
Interview Lift: +80.0% across resolved cases with interview (strong)
Typical Timeline: 2y 10m avg prosecution; 29 currently pending
Career History: 47 total applications across all art units

Statute-Specific Performance

§101: 16.2% (-23.8% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 11.5% (-28.5% vs TC avg)
Deltas are measured against a Tech Center average estimate; based on career data from 18 resolved cases.

Office Action

Rejections: §101 · §103 · §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/09/2024 and 12/02/2024 were filed and are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 8, 11, and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 8 recites the limitation “identifying, from the linked detected attack events, one or more attack techniques having a high likelihood of progression to a subsequent attack tactic in the sequence”. The term “high likelihood” is a term of degree that lacks objective boundaries in the claim: the claim does not establish when a likelihood is “high”, how likelihood is determined, or what threshold or metric is used. The examiner suggests clarifying the scope of the claim. Dependent claims are also rejected for inheriting the deficiencies set forth above for the independent claims. Appropriate correction is required.
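One way to picture an objective boundary for the rejected “high likelihood” term is a transition-frequency threshold computed over linked events. The sketch below is purely illustrative: the field names, the 0.5 threshold, and the technique labels are assumptions of this report, not language from the claims or the references.

```python
from collections import Counter

def high_likelihood_techniques(linked_events, threshold=0.5):
    """Flag techniques whose observed rate of progression to the next
    attack tactic meets a fixed threshold -- one hypothetical way to give
    'high likelihood' an objective boundary. All names are illustrative."""
    seen = Counter()        # technique -> total linked occurrences
    progressed = Counter()  # technique -> occurrences that reached the next tactic
    for event in linked_events:
        seen[event["technique"]] += 1
        if event["next_tactic_reached"]:
            progressed[event["technique"]] += 1
    return {t: progressed[t] / seen[t] for t in seen
            if progressed[t] / seen[t] >= threshold}

events = [
    {"technique": "file_discovery", "next_tactic_reached": True},
    {"technique": "file_discovery", "next_tactic_reached": True},
    {"technique": "file_discovery", "next_tactic_reached": False},
    {"technique": "registry_change", "next_tactic_reached": False},
]
high = high_likelihood_techniques(events)
print(high)  # file_discovery (2/3) meets the threshold; registry_change (0/1) does not
```

A defined metric and threshold of this kind is the sort of recitation that could answer the examiner's "what threshold or metric is used" concern, though any actual amendment would need support in the specification.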
Claim 11 recites “high risk attack paths” without specifying objective criteria for determining what constitutes high risk; the claim does not define whether “risk” is based on frequency, impact, or other metrics. The examiner suggests clarifying the scope of the claim. Dependent claims are also rejected for inheriting the deficiencies set forth above for the independent claims. Appropriate correction is required.

Claim 12 recites “link with high frequency” without specifying objective criteria for determining what frequency qualifies as “high”, including the measurement window or baseline. The examiner suggests clarifying the scope of the claim. Dependent claims are also rejected for inheriting the deficiencies set forth above for the independent claims. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.

Independent claims 1, 13, and 14:

Step 1: Claim 1 is drawn to “a method”, claim 13 is drawn to “a system”, and claim 14 is drawn to “a computer program product”; therefore each of these claim groups falls under one of the four categories of statutory subject matter (processes/methods, machines/apparatus, manufactures, and compositions of matter).

Step 2A, Prong 1: Claims 1, 13, and 14 are directed to a judicially recognized exception, an abstract idea, without significantly more.
Each of claims 1, 13, and 14 recites the limitations “defining a sequence of attack tactics, each attack tactic representing a generalization of a set of attack techniques”, “associating one or more attack detection rules with each of the attack techniques; detecting attack events based on the attack detection rules”, “correlating the detected attack events with the attack tactics based on the attack technique associated with the attack detection rule used to detect the attack events”, and “linking the detected attack events based on one or more criteria; and identifying one or more paths of attack techniques through the sequence in dependence on the linked attack events”, which, under their broadest reasonable interpretation, enumerate abstract ideas. Other than reciting a generic “processor and memory” (claim 13), nothing in the claims precludes the steps from practically being performed in the human mind. For example, other than the “processor” language, the claims encompass a user visually and manually defining sequences of attack tactics and techniques, associating detection rules with techniques, detecting attack events based on rules, correlating and linking detected events based on criteria, and identifying attack paths through the sequence. The mere nominal recitation of a generic computer component (a processor) to automate the mental steps amounts to nothing more than an abstract idea (see MPEP 2106.04(a)(2)(I)(III)).

Step 2A, Prong 2: Claim 1 does not recite any additional elements or steps that would integrate the abstract idea into a practical application. Claims 13 and 14, however, recite the additional elements of a “memory” to store computer program instructions and “one or more computer processors” to execute the computer program instructions. The computer memory and the computer processor are recited at a high level of generality (i.e., as generic computer components performing the generic computer functions of storing and processing data, respectively).
These generic computer functions are no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements does not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(f)).

Step 2B: The additional elements of a “memory” to store computer program instructions and “a processor” to execute the computer program instructions are no more than generic, off-the-shelf computer components, and the Symantec, TLI, OIP Techs, and Versata court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection/receipt of data over a network and/or storing and retrieving information in memory are well-understood, routine, and conventional functions when claimed in a merely generic manner (see MPEP 2106.05(d)(II)(IV)). As such, claims 1, 13, and 14 are not patent eligible.

Dependent claims 2-12:

Step 1: Claims 2-12 are drawn to “a method”; therefore each of these claims falls under one of the four categories of statutory subject matter (processes/methods, machines/apparatus, manufactures, and compositions of matter).

Steps 2A-2B: Dependent claims 2-12 are also ineligible for the same reasons given with respect to claim 1. Claims 2-12 recite further abstract ideas adding additional analysis such as frequency analysis, timestamps, trend analysis, likelihood prediction, and risk identification (MPEP 2106.04(a)(2)(I)). Claims 2-12 fail to recite any additional elements or steps that might integrate the abstract idea into a practical application. As such, claims 2-12 are not patent eligible.

Claim 14 is further rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter: the claim does not fall within at least one of the four categories of patent-eligible subject matter because it does not include at least one hardware element in its body as required by MPEP 2106(I).
Claim 14 is directed to a computer-readable-medium-style claim and recites “a computer program element comprising computer program code”. The specification explicitly discloses that the computer program may be embodied in a communication medium such as an electronic signal or carrier wave (see specification of the instant application, par. [0079]); it does not limit the medium and instead uses open-ended language, so the broadly interpreted medium also encompasses a signal per se. Thus, claim 14 should be amended to recite a non-transitory computer readable medium in place of a “computer program element”.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hertz (US 20250133110 A1) in view of Hassanzadeh (US 20190141058 A1).
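As a reader's aid for the claim 1 mapping that follows, the tactic → technique → event-type hierarchy Hertz is cited for can be sketched roughly as below. The scenario, technique names, and event types are invented for illustration and appear in neither reference.

```python
# Ordered attack-vector scenario: each tactic carries one or more techniques,
# and each technique is keyed to the event types whose occurrence indicates it.
# All names here are hypothetical stand-ins.
SCENARIO = [
    ("discovery",   {"file_and_directory_discovery": {"dir_command", "file_enum"}}),
    ("persistence", {"dll_replacement": {"file_replaced"}}),
]

def implemented_techniques(actual_events):
    """Match actual event types against each technique's event types,
    yielding the (tactic, technique) pairs that occurred on the network."""
    hits = []
    for tactic, techniques in SCENARIO:
        for technique, event_types in techniques.items():
            if any(e["type"] in event_types for e in actual_events):
                hits.append((tactic, technique))
    return hits

observed = [{"type": "dir_command"}, {"type": "file_replaced"}]
hits = implemented_techniques(observed)
print(hits)
# [('discovery', 'file_and_directory_discovery'), ('persistence', 'dll_replacement')]
```

In this toy model, a scenario "has occurred" when every tactic in the sequence has at least one implemented technique, which is the structure the rejection's event-type-matching citations rely on.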
Regarding claim 1, Hertz teaches a computer implemented security method for detecting attacks on a system or network, the method comprising: defining a sequence of attack tactics, each attack tactic representing a generalization of a set of attack techniques (Hertz, the attack-vector scenario comprising a sequence of cyber tactics, each of the cyber tactics (i.e., a sequence of attack tactics) being associated with one or more respective cyber techniques (i.e., a set of attack techniques) which are possible manifestations of the corresponding cyber tactic in the context of the attack-vector scenario, each cyber technique is associated with a corresponding event type of a plurality of event types that can occur on one or more entities of an organizational network, wherein occurrence of an actual event of the respective event type indicates implementation of the respective cyber technique, [0016] An attack-vector scenario 110 (i.e. attack path) is a sequence of steps taken by an adversary to attack an organizational network. The attack-vector scenario 110 is associated with a sequence of cyber tactics (e.g., cyber tactic A 120-a, cyber tactic B 120-b, . . . , cyber tactic N 120-n). Each cyber tactic embodies a logical step within the respective attack-vector scenario. Each cyber tactic represents the “why” behind the adversary's cyber-attack action. Each cyber tactic describes what the adversary is trying to accomplish in that step, [0051] Each cyber technique represents “how” an adversary achieves a tactical goal of the corresponding cyber tactic by performing a cyber-attack action. ..a persistence attack-vector scenario 110 and its two associated cyber tactics: discovery cyber tactic and persistence cyber tactic. The discovery cyber tactic can be associated with one or more cyber techniques. 
For example, with a file and directory discovery cyber technique, adversaries enumerate files and directories within systems of the attacked organizational network or may search within specific locations of the attacked organizational network share for certain information within a file system. Adversaries may use the information gathered by the file and directory discovery cyber technique during follow-on cyber tactics, [0052]); associating one or more attack detection rules with each of the attack techniques (Hertz, Each cyber technique is associated with one or more event types that can occur on one or more entities of the attacked organizational network. Each entity can be an asset of the attacked organization, [0054] Identifying cyber techniques that actually occurred on the attacked organizational network can be achieved by matching the event types of the actual events with the event types associated with the cyber techniques…common property/validation analysis can be based on machine learning, rule set analysis, knowledge base, [0056] …a rule-set is used to analyze the relevant actual events and their properties to find that there is a causal connection, for example the date and time of both actual events are within a predefined time window, so there is an indication of occurrence of the persistence attack-vector scenario 110 within the attacked organizational network.
[0057]); detecting attack events based on the attack detection rules (Hertz, information about actual events that occurred on the one or more entities of the organizational network, wherein each of the actual events is associated with a respective actual event type; identifying based on the information, the cyber techniques that occurred on the organizational network by matching the actual event types with the event types associated with the cyber techniques, giving rise to implemented cyber techniques, [0025] each cyber technique is associated with a corresponding event type of a plurality of event types that can occur on one or more entities of an organizational network, wherein occurrence of an actual event of the respective event type indicates implementation of the respective cyber technique, and (b) information about actual events that occurred on the one or more entities of the organizational network, wherein each of the actual events is associated with a respective actual event type, [0069] Example events can include: creation of a file on an endpoint of the organizational network, creation of a process—for example creation of a log process with full command line permissions for the current and parent process, execution of a command—for example execution of a “dir” command utilizing a command line of an endpoint, replacement of a file—for example replacement of a “msfte.dll” DLL file within an endpoint, deletion of a file, termination of a process, change of a name of a file on a device, change of a name of a process, loading of a driver, loading of a DLL, accessing a disk, opening of a network connection, changes to values of a registry, or any other event occurring on an asset of the organizational network, [0070] the cyber security system 200 can be an Endpoint Detection and Response (EDR) system monitoring endpoints connected through the organizational network, a Security Information and Events Management (SIEM) system, or any other cyber security 
system, [0074]) [Examiner interprets the system obtaining information about actual events from assets (EDR/SIEM), such as command executions, file replacements, process creations, etc., as the limitation above]; correlating the detected attack events with the attack tactics based on the attack technique associated with the attack detection rule used to detect the attack events (Hertz, identifying based on the information, the cyber techniques that occurred on the organizational network by matching the actual event types with the event types associated with the cyber techniques, giving rise to implemented cyber techniques, [0025] A given cyber tactic has occurred within the attacked organizational network when one or more of the cyber techniques associated with the given cyber tactic become implemented cyber techniques. When each of the cyber tactics forming the attack-vector scenario 110, is associated with at least one of the implemented cyber techniques the attack-vector scenario 110 has occurred within the attacked organizational network, [0056]) [Examiner interprets correlating events to techniques by matching event types, which functions as a rule-to-technique association, and then treating tactics as occurring when one or more techniques for that tactic are implemented, as the limitation above]; linking the detected attack events based on one or more criteria (Hertz, The common properties between each of the pairs create a relation chain between the implemented cyber techniques, this chain is in the context of the adversary who is trying to perpetrate the attack-vector scenario 110 within the attacked organizational network…The common property can be found in one or more dimensions of the implemented cyber techniques.
For example: time dimension—the implemented cyber techniques occurred within a predefined time window, location dimension—the implemented cyber techniques occurred within one or more entities of the attacked organizational network having a connection between them, cyber tool dimension—the cyber tools used for executing the implemented cyber techniques are the same, and any other dimension of the implemented cyber techniques and their properties, [0056] a rule-set is used to analyze the relevant actual events and their properties to find that there is a causal connection, for example the date and time of both actual events are within a predefined time window, so there is an indication of occurrence of the persistence attack-vector scenario 110 within the attacked organizational network, [0057]) [Examiner interprets the system linking implemented techniques/events across successive tactics using “common properties” such as time window, location, tool similarity, etc., to create a relation chain as the limitation above]; and identifying one or more paths of attack techniques through the sequence in dependence on the linked attack events (Hertz, When each of the cyber tactics forming the attack-vector scenario 110, is associated with at least one of the implemented cyber techniques the attack-vector scenario 110 has occurred within the attacked organizational network, … The common properties between each of the pairs create a relation chain between the implemented cyber techniques, this chain is in the context of the adversary who is trying to perpetrate the attack-vector scenario 110 within the attacked organizational network, [0056] After obtaining the attack-vector scenarios 110 and the information, the cyber security system 200 can be further configured to identify, based on the information, the cyber techniques that occurred on the organizational network by matching the actual event types with the event types associated with the cyber techniques, giving rise to
implemented cyber techniques… each of the cyber tactics forming the attack vector scenario, is associated with at least one of the implemented cyber techniques, and (b) each pair of implemented cyber techniques associated with a pair of subsequent cyber tactics is associated with a respective common property, [0076]) [Examiner interprets the system identifying that multiple tactics in a scenario have occurred and using common property chaining across successive tactics (i.e., a relation chain) as the limitation above]. Although Hertz teaches mapping event types to cyber techniques whose occurrence indicates implementation of the technique, and it describes using rule sets/ML/expert knowledge to analyze and validate connections [0056, 0057], Hertz does not explicitly teach: associating one or more attack detection rules with each of the attack techniques; detecting attack events based on the attack detection rules; identifying one or more paths of attack techniques. However, Hassanzadeh teaches: associating one or more attack detection rules with each of the attack techniques (Hassanzadeh, rule-based filtering performed by each of the rule-based filters 310 and 312 can remove irrelevant events/alerts (e.g., events/alerts that are not determined to be associated with a potential attack) based on a target's profile and/or characteristics of the events/alerts. Rule-based filtering, for example, may apply to defined rules that discard particular events/alerts (e.g., false positives) based on how frequently events/alerts with certain characteristics occur, and their relative rate of change with regard to occurrence, [0036]) [Examiner interprets the system using rule-based filters that function as attack detection rules to detect attack events corresponding to techniques/kill chain steps].
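The rule-based filtering characterized from Hassanzadeh [0036] above can be pictured as predicates associated with techniques: matching events become detections, and non-matching events are discarded as irrelevant. The rules and event fields below are hypothetical stand-ins, not Hassanzadeh's actual filters.

```python
# Hypothetical detection rules: each rule names the technique it detects
# and a predicate deciding whether an event is a relevant detection.
RULES = [
    ("file_and_directory_discovery",
     lambda e: e["type"] == "dir_command"),
    ("dll_replacement",
     lambda e: e["type"] == "file_replaced" and e["path"].endswith(".dll")),
]

def detect(events):
    """Return (technique, event) pairs for events matched by some rule;
    everything else is filtered out as irrelevant (e.g., a false positive)."""
    return [(tech, e) for e in events for tech, rule in RULES if rule(e)]

detections = detect([
    {"type": "dir_command",   "path": ""},
    {"type": "file_replaced", "path": "C:/windows/msfte.dll"},
    {"type": "file_replaced", "path": "C:/tmp/notes.txt"},  # filtered out
])
print([tech for tech, _ in detections])
# ['file_and_directory_discovery', 'dll_replacement']
```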
detecting attack events based on the attack detection rules (Hassanzadeh, alerts may be triggered in response to an event or a sequence of events, [0025] an appropriate rule-based filter may reference profile data from an appropriate network data source for a target that corresponds to the event/alert (e.g., based on device address), and can determine whether the received event/alert indicates a potential attack, [0036]) [Examiner interprets the system using rule-based filters that function as attack detection rules to detect attack events corresponding to techniques/kill chain steps]. identifying one or more paths of attack techniques (Hassanzadeh, one or more attack paths can be determined, each representing a potential path an adversary can take to get into different targets in the network, [0028] The attack path database 520 can include multiple known attack paths or attack trees, where each attack path represents the potential paths an adversary can take to get into different targets (e.g., assets in the IT network domain 102 and/or assets in the OT network domain 104) in the IIOT network, [0050] an attack path 610 can be identified where the attack path 610 includes one alert for each step in the defined IT CKC/ICS CKC, such that the attack path begins with a first step in the CKC (e.g., reconnaissance 608a) and ends with a final step in the ICS CKC (e.g., execute 6081). For example, alert 602a is a first “reconnaissance” step in the CKC depicted in the correlation graph 600, and alert 602b is a final “execute” step in the ICS CKC following attack path 610, [0066]) [Examiner interprets the system identifying, storing, and traversing attack paths that span multiple stages as the limitation above].
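The attack-path identification attributed to Hassanzadeh (one alert per kill-chain step, traversed in order) reduces to a small search over classified alerts. The sketch below uses a depth-first search; the step names and the shared-host linking criterion are illustrative assumptions of this report, not Hassanzadeh's actual implementation.

```python
# Minimal sketch of identifying an attack path through an ordered kill
# chain: pick one linked alert per step, first step through last step.
CKC = ["reconnaissance", "delivery", "execute"]

def find_path(alerts):
    """Depth-first search over alerts grouped by kill-chain step; consecutive
    alerts are linked when they share a host (a stand-in linking criterion)."""
    by_step = {step: [a for a in alerts if a["step"] == step] for step in CKC}

    def extend(path):
        if len(path) == len(CKC):          # one alert chosen for every step
            return path
        for alert in by_step[CKC[len(path)]]:
            if not path or alert["host"] == path[-1]["host"]:
                result = extend(path + [alert])
                if result:
                    return result
        return None                        # dead end; backtrack

    return extend([])

alerts = [
    {"step": "reconnaissance", "host": "plc-1"},
    {"step": "delivery",       "host": "plc-2"},  # wrong host, skipped
    {"step": "delivery",       "host": "plc-1"},
    {"step": "execute",        "host": "plc-1"},
]
path = find_path(alerts)
print([a["step"] for a in path])  # ['reconnaissance', 'delivery', 'execute']
```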
Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teaching of Hertz to include the concept of associating one or more attack detection rules with each of the attack techniques; detecting attack events based on the attack detection rules; and identifying one or more paths of attack techniques, as taught by Hassanzadeh, for the purpose of filtering, aggregating, and correlating data from event/alert logs from each domain (e.g., IT and OT domains), classifying each alert with a respective step in an IT cyber kill chain (IT CKC) and/or an industrial control system cyber kill chain (ICS CKC), detecting complex attack patterns, reporting the attack patterns (e.g., visualization data) to a security analyst, and implementing appropriate courses of action [Hassanzadeh: 0011]. Regarding claim 2, Hertz and Hassanzadeh teach a method according to claim 1, wherein the set of attack techniques represented by an attack tactic have a common or similar purpose (Hertz, Each cyber tactic represents the “why” behind the adversary's cyber-attack action. Each cyber tactic describes what the adversary is trying to accomplish in that step. In general, the cyber tactic represents the tactical objective and the reason behind the cyber-attack action of the adversary. The cyber tactic is the adversary's tactical goal: the reason for performing that certain cyber-attack action as part of the attack vector scenario 110, [0051] Each cyber technique represents “how” an adversary achieves a tactical goal of the corresponding cyber tactic by performing a cyber-attack action.
Each cyber technique is a possible manifestation of the corresponding cyber tactic in the context of the attack-vector scenario 110, [0052]); Regarding claim 3, Hertz and Hassanzadeh teach a method according to claim 1, comprising automatically generating a multi-stage attack detection and/or mitigation strategy for inclusion in an attack detection tool based on the identified path(s) of attack techniques (Hertz, The prediction made by cyber security system 200 can be based on the parts of the sequence of cyber tactics of the attack-vector scenario 110 that have already been identified…The prediction can be based on: machine learning, one or more rule sets, experts' knowledgebase or any other method to identify common properties between two or more cyber techniques…using a rule set for the prediction, the identification of the occurrence of a given cyber tactic within the organizational network can be used by cyber security system 200 to predict that the next step cyber tactic is a subsequent cyber tactic, subsequent to the given cyber tactic within the sequence of cyber tactics of the attack-vector scenario 110, [0077] cyber security system 200 can predict one or more cyber techniques associated with the next step cyber tactic that will be used by the adversary and in some cases, even the properties of the actual events associated with the predicted next step of the adversary. The prevention action can optionally rely on these predictions.
Some non-limiting examples of a prevention action can include one or more of: (a) report the next step cyber tactic to the user of the cyber security system, (b) simulate the next step cyber tactic, (c) implement one or more honeypots within one or more entities of the organizational network wherein events associated with the next step cyber tactic can occur, or (d) any other action or actions that can be done to prevent the next step cyber tactic, [0079]) [Examiner interprets predicting the next tactic and performing prevention actions such as report/simulate/honeypots as an automated detection/mitigation response for a multi-stage scenario]. Although Hertz teaches a proactive workflow (i.e., predict + prevent) to detect/mitigate attacks, Hertz does not explicitly teach: automatically generating a multi-stage attack detection and/or mitigation strategy for inclusion in an attack detection tool based on the identified path(s) of attack techniques. However, Hassanzadeh teaches: automatically generating a multi-stage attack detection and/or mitigation strategy for inclusion in an attack detection tool based on the identified path(s) of attack techniques (Hassanzadeh, output may be provided by the event correlation system 150 to another system (e.g., a security information and event management (SIEM) system) and/or to a system operator as reporting/visualization data (e.g., in the form of an attack graph or attack tree). Based on the system output, for example, appropriate courses of action may be employed to counter ongoing and/or future attacks, [0027] A multi-step, multi-domain attack detection system 700 (e.g., similar to the multi-step, multi-domain attack detector 218 described with reference to FIG. 2) includes components, modules, and/or engines, for example, a correlation graph modeling engine 704, a pattern recognition and extraction engine 706, an adversary prediction engine 708, and a risk management engine 710…
the multi-step, multi-domain attack detection system 700 (e.g., multi-step, multi-domain attack detector 218) receives classified alerts from alert correlator 502 (e.g., correlator 216), and generates risk management 710 and/or courses of action 722 (e.g., response generator 220) to provide back to the IIOT network, [0068] Adversary prediction engine 708 may determine one or more likely outcomes for the alert 702 and recommend one or more courses of action 722 for the IT/OT network under attack. The multi-step, multi-domain attack detection system 700 may then implement one or more courses of action 722 to block the subsequent step in anticipation of the attack. Examples of counter-attack strategies include blocking, patching and updating, access control updates, white listing, physical security…, [0069]) [Examiner interprets the system generating multi-stage responses based on identified attack paths and implementing courses of action (i.e., mitigation strategies) based on path analysis and prediction as the limitation above]; the same motivation applies as for claim 1. Regarding claim 4, Hertz and Hassanzadeh teach a method according to claim 3, wherein the attack detection strategy is generated in dependence on a frequency of occurrence of the identified path(s) of attack techniques (Hassanzadeh, Rule-based filtering, for example, may apply to defined rules that discard particular events/alerts (e.g., false positives) based on how frequently events/alerts with certain characteristics occur, and their relative rate of change with regard to occurrence, [0036] the system 400 and its various components (e.g., components 410 and 412) can perform functions for processing and aggregating event/alert data received from various different sources.
By aggregating event/alert data, for example, data redundancy can be decreased, and the aggregated event/alert data may be further processed to identify trends and correlations in the data, [0039] The correlation graph analytics database 712 can include patterns of previously seen/detected/identified complex ICS threats, for example, Stuxnet, Night Dragon, CrashOverride, and the like. A particular alert 702 received by the multi-step, multi-domain attack detection system 700 would then trigger a prediction of one or more subsequent (e.g., consequence) steps that an adversary will take in the IT CKC/ICS CKC process, [0069]) [Examiner interprets the system tracking repeated alerts, aggregating alerts into patterns, identifying trends and recurrences, and using historical patterns to influence response as the limitation above]; the same motivation applies as for claim 1. Regarding claim 5, Hertz and Hassanzadeh teach a method according to claim 1, comprising identifying a frequency for each identified path of attack techniques through the sequence (Hassanzadeh, Rule-based filtering, for example, may apply to defined rules that discard particular events/alerts (e.g., false positives) based on how frequently events/alerts with certain characteristics occur, and their relative rate of change with regard to occurrence, [0036] the system 400 and its various components (e.g., components 410 and 412) can perform functions for processing and aggregating event/alert data received from various different sources. By aggregating event/alert data, for example, data redundancy can be decreased, and the aggregated event/alert data may be further processed to identify trends and correlations in the data, [0039] The correlation graph analytics database 712 can include patterns of previously seen/detected/identified complex ICS threats, for example, Stuxnet, Night Dragon, CrashOverride, and the like.
A particular alert 702 received by the multi-step, multi-domain attack detection system 700 would then trigger a prediction of one or more subsequent (e.g., consequence) steps that an adversary will take in the IT CKC/ICS CKC process, [0069]) [Examiner interprets the system tracking repeated alerts, aggregating alerts into patterns, identifying trends and recurrences, and using historical patterns to influence response as the limitation above]; the same motivation applies as for claim 1. Regarding claim 6, Hertz and Hassanzadeh teach a method according to claim 1, comprising identifying a frequency with which attack events associated with a particular one of the techniques are detected (Hassanzadeh, Rule-based filtering, for example, may apply to defined rules that discard particular events/alerts (e.g., false positives) based on how frequently events/alerts with certain characteristics occur, and their relative rate of change with regard to occurrence, [0036] the system 400 and its various components (e.g., components 410 and 412) can perform functions for processing and aggregating event/alert data received from various different sources. By aggregating event/alert data, for example, data redundancy can be decreased, and the aggregated event/alert data may be further processed to identify trends and correlations in the data, [0039] The correlation graph analytics database 712 can include patterns of previously seen/detected/identified complex ICS threats, for example, Stuxnet, Night Dragon, CrashOverride, and the like.
A particular alert 702 received by the multi-step, multi-domain attack detection system 700 would then trigger a prediction of one or more subsequent (e.g., consequence) steps that an adversary will take in the IT CKC/ICS CKC process, [0069]) [Examiner interprets the system tracking repeated alerts, aggregating alerts into patterns, identifying trends and recurrences, and using historical patterns to influence response as the limitation above]. The same motivation applies as for claim 1.

Regarding claim 7, Hertz and Hassanzadeh teach a method according to claim 1, wherein the detected attack events are time stamped, and wherein the linking of the detected attack events comprises forming a time ordered chain of linked attack events (Hertz, a rule-set is used to analyze the relevant actual events and their properties to find that there is a causal connection, for example the date and time of both actual events are within a predefined time window, so there is an indication of occurrence of the persistence attack-vector scenario 110 within the attacked organizational network, [0057]; Event properties can also include source process, IP addresses, port numbers, hostnames, date and time of the actual event and port names for the network connection event, [0072]). Although Hertz teaches time-window correlation, Hertz does not explicitly teach that the detected attack events are time stamped, or that the linking of the detected attack events comprises forming a time ordered chain of linked attack events. However, Hassanzadeh teaches these features (Hassanzadeh, the aggregation system 402 can receive alert data 420 corresponding to potential attacks on an information technology network (e.g., the information technology network domain 102, shown in FIG. 
1) and alert data 422 corresponding to potential attacks on an operational technology network (e.g., the operational technology network domain 104, shown in FIG. 1) ... each of the similar alerts may include similar data, yet may have slightly different timestamps (e.g., due to network traffic speeds). If the fuser 410 determines that multiple alerts are related (e.g., the alerts were generated in response to the same packet or event based on having similar data and having timestamps within a threshold similarity value), [0040]; A prerequisite alert for a particular classified alert is an alert that is (i) classified as a preceding step in the CKC process and (ii) precedes the particular classified alert in a known attack path. For example, a prerequisite alert for a particular alert classified as a “command and control” alert would be an alert classified as an “act” alert, according to one CKC process (e.g., as depicted in FIG. 6). A consequence alert for the particular classified alert is an alert that is (i) classified as a subsequent step in the CKC and (ii) follows the particular classified alert in a known attack path, [0060]; one or more dependencies for the classified alert may be determined by a time-based threshold. For example, the alert dependency engine 512 may determine a dependency between two or more classified alerts 554 if the alerts have timestamps within a suitable time threshold value (e.g., one minute, five minutes, ten minutes, or another suitable value). The time threshold value, for example, may be a configurable tuning parameter, [0062]) [Examiner interprets the system's time-based (i.e., ordered chain) and directional (i.e., prerequisite -> consequence) dependencies as the limitation above]. The same motivation applies as for claim 1. 
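The time-threshold dependency linking Hassanzadeh describes at [0062] can be sketched in a few lines. This is an illustrative Python sketch, not code from either reference; the `Alert` class, its field names, and the 300-second default threshold are all assumptions made here for clarity:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    tactic: str       # CKC step the alert was classified as
    timestamp: float  # seconds since some epoch

def link_alerts(alerts, time_threshold=300.0):
    """Sort timestamped alerts and link each consecutive pair whose
    gap falls within the threshold, forming a time-ordered chain of
    (prerequisite, consequence) pairs."""
    ordered = sorted(alerts, key=lambda a: a.timestamp)
    chain = []
    for earlier, later in zip(ordered, ordered[1:]):
        if later.timestamp - earlier.timestamp <= time_threshold:
            chain.append((earlier, later))
    return chain
```

With a 300-second threshold, two alerts 120 seconds apart would be linked while a third arriving 880 seconds later would start a new chain; per [0062], the threshold would be a configurable tuning parameter rather than a fixed constant.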
Regarding claim 8, Hertz and Hassanzadeh teach a method according to claim 1, comprising identifying, from the linked detected attack events, one or more attack techniques having a high likelihood of progression to a subsequent attack tactic in the sequence (Hertz, predicting, by the processing circuitry, based on: (a) the attack-vector scenario, (b) the implemented cyber techniques, and (c) the previously identified implemented cyber techniques, a next step cyber tactic of the cyber tactics; and performing, by the processing circuitry, a prevention action to prevent the next step cyber tactic, [0029]; The prediction made by cyber security system 200 can be based on the parts of the sequence of cyber tactics of the attack-vector scenario 110 that have already been identified ... The prediction can be based on: machine learning, one or more rule sets, experts' knowledgebase or any other method to identify common properties between two or more cyber techniques ... using a rule set for the prediction, the identification of the occurrence of a given cyber tactic within the organizational network can be used by cyber security system 200 to predict that the next step cyber tactic is a subsequent cyber tactic, subsequent to the given cyber tactic within the sequence of cyber tactics of the attack-vector scenario 110, [0077]; cyber security system 200 can predict one or more cyber techniques associated with the next step cyber tactic that will be used by the adversary and in some cases, even the properties of the actual events associated with the predicted next step of the adversary, [0079]) [Examiner interprets predicting the next tactic as identifying techniques that are likely to occur next within that subsequent tactic, where the prediction can include predicted properties and even predicted techniques]. 
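Hertz's rule-set prediction, as characterized above, amounts to looking up the successor of the furthest tactic already identified in the ordered attack-vector sequence. A minimal sketch under that reading; the tactic names, their ordering, and the function name are illustrative assumptions, not taken from either reference:

```python
# Illustrative ordered kill-chain tactic sequence (assumed, simplified).
KILL_CHAIN = ["reconnaissance", "weaponization", "delivery",
              "exploitation", "installation", "command and control", "act"]

def predict_next_tactic(identified_tactics):
    """Rule-set prediction: the next-step tactic is the one that follows
    the furthest tactic already identified in the sequence; None if the
    chain is complete or nothing has been identified."""
    indices = [KILL_CHAIN.index(t) for t in identified_tactics if t in KILL_CHAIN]
    if not indices:
        return None
    next_index = max(indices) + 1
    return KILL_CHAIN[next_index] if next_index < len(KILL_CHAIN) else None
```

For example, having identified "reconnaissance" and "delivery", the rule set would predict "exploitation" as the next-step tactic.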
Regarding claim 9, Hertz and Hassanzadeh teach a method according to claim 8, comprising employing one or more mitigating measures for the attack techniques identified as having a high likelihood of progression to a subsequent attack tactic in the sequence (Hertz, the prevention action is one or more of: (a) report the next step cyber tactic to the user of the cyber security system, (b) simulate the next step cyber tactic, or (c) implement one or more honeypots within one or more entities of the organizational network wherein events associated with the next step cyber tactic can occur, [0030]; cyber security system 200 can predict one or more cyber techniques associated with the next step cyber tactic that will be used by the adversary and in some cases, even the properties of the actual events associated with the predicted next step of the adversary. The prevention action can optionally rely on these predictions. Some non-limiting examples of a prevention action can include one or more of: (a) report the next step cyber tactic to the user of the cyber security system, (b) simulate the next step cyber tactic, (c) implement one or more honeypots within one or more entities of the organizational network wherein events associated with the next step cyber tactic can occur, or (d) any other action or actions that can be done to prevent the next step cyber tactic, [0079]) [Examiner interprets the prevention actions such as report, simulate, and implement honeypots as mitigation/preventative measures aligned with likely next-stage activity]. 
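The prevention actions Hertz enumerates (report, simulate, deploy honeypots) could be dispatched from the predicted next-step tactic roughly as below. The function name, the tactic strings, and the choice of which tactics warrant honeypots are entirely hypothetical; the sketch only illustrates conditioning a mitigation list on a predicted tactic:

```python
def prevention_actions(next_tactic):
    """Hypothetical dispatch of mitigating measures keyed on the
    predicted next-step tactic: always report, and either deploy
    honeypots (late-stage tactics) or simulate (earlier tactics)."""
    actions = [f"report: flag predicted '{next_tactic}' activity to the analyst"]
    if next_tactic in ("command and control", "act"):
        actions.append("honeypot: deploy decoys where the tactic's events can occur")
    else:
        actions.append("simulate: rehearse the predicted tactic against current controls")
    return actions
```

Per Hertz's [0079], any combination of such actions (or others) could serve as the prevention action; the branch condition here is just one plausible policy.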
Regarding claim 10, Hertz and Hassanzadeh teach a method according to claim 1, comprising adjusting the deployment of mitigation measures in dependence on trends or changes in the frequency of attack paths (Hassanzadeh, Rule-based filtering, for example, may apply defined rules that discard particular events/alerts (e.g., false positives) based on how frequently events/alerts with certain characteristics occur, and their relative rate of change with regard to occurrence, [0036]; the system 400 and its various components (e.g., components 410 and 412) can perform functions for processing and aggregating event/alert data received from various different sources. By aggregating event/alert data, for example, data redundancy can be decreased, and the aggregated event/alert data may be further processed to identify trends and correlations in the data, [0039]; the correlation graph analytics database 712 for a generated correlation graph 720 (e.g., similar to correlation graph 600 in FIG. 6) can be provided by the correlation graph modeling engine 704 to the pattern recognition and extraction module 706. The correlation graph analytics database 712 can include patterns of previously seen/detected/identified complex ICS threats, for example, Stuxnet, Night Dragon, CrashOverride, and the like. A particular alert 702 received by the multi-step, multi-domain attack detection system 700 would then trigger a prediction of one or more subsequent (e.g., consequence) steps that an adversary will take in the IT CKC/ICS CKC process. Adversary prediction engine 708 may determine one or more likely outcomes for the alert 702 and recommend one or more courses of action 722 for the IT/OT network under attack. The multi-step, multi-domain attack detection system 700 may then implement one or more courses of action 722 to block the subsequent step in anticipation of the attack. 
Examples of counter-attack strategies include blocking, patching and updating, access control updates, white listing, physical security, or a combination thereof, [0069]; the risk management engine 710 may provide information to a user (e.g., a network administrator) to assist in installing new software or software patches within a system, based on identified risks provided by the pattern recognition and extraction engine 706, [0070]) [Examiner interprets the system using frequency and rate of change to drive rules/policy (i.e., filtering), identifying trends from aggregated alert data, and deploying countermeasures based on predicted outcomes as the limitation above]. The same motivation applies as for claim 1.

Regarding claim 11, Hertz and Hassanzadeh teach a method according to claim 1, comprising identifying high risk attack paths and employing mitigation measures in relation to the identified high risk attack paths (Hassanzadeh, one or more attack paths can be determined, each representing a potential path an adversary can take to get into different targets in the network, and stored in an attack path database, [0028]; an attack path 610 can be identified where the attack path 610 includes one alert for each step in the defined IT CKC/ICS CKC, such that the attack path begins with a first step in the CKC (e.g., reconnaissance 608a) and ends with a final step in the ICS CKC (e.g., execute 6081). For example, alert 602a is a first “reconnaissance” step in the CKC depicted in the correlation graph 600, and alert 602b is a final “execute” step in the ICS CKC following attack path 610, [0066]; the system 400 and its various components (e.g., components 410 and 412) can perform functions for processing and aggregating event/alert data received from various different sources. 
By aggregating event/alert data, for example, data redundancy can be decreased, and the aggregated event/alert data may be further processed to identify trends and correlations in the data, [0039]; A multi-step, multi-domain attack detection system 700 (e.g., similar to the multi-step, multi-domain attack detector 218 described with reference to FIG. 2) includes components, modules, and/or engines, for example, a correlation graph modeling engine 704, a pattern recognition and extraction engine 706, an adversary prediction engine 708, and a risk management engine 710. The multi-step, multi-domain attack detection system 700 (e.g., multi-step, multi-domain attack detector 218) receives classified alerts from alert correlator 502 (e.g., correlator 216), and generates risk management 710 and/or courses of action 722 (e.g., response generator 220) to provide back to the IIOT network, [0068]; the correlation graph analytics database 712 for a generated correlation graph 720 (e.g., similar to correlation graph 600 in FIG. 6) can be provided by the correlation graph modeling engine 704 to the pattern recognition and extraction module 706. The correlation graph analytics database 712 can include patterns of previously seen/detected/identified complex ICS threats, for example, Stuxnet, Night Dragon, CrashOverride, and the like. 
A particular alert 702 received by the multi-step, multi-domain attack detection system 700 would then trigger a prediction of one or more subsequent (e.g., consequence) steps that an adversary will take in the IT CKC/ICS CKC process, [0069]; the risk management engine 710 may provide information to a user (e.g., a network administrator) to assist in installing new software or software patches within a system, based on identified risks provided by the pattern recognition and extraction engine 706, [0070]) [Examiner interprets identifying attack paths associated with identified risks (risk management engine), attack paths matching known high-impact APT patterns stored in the analytics database, and applying courses of action in response to predicted progression along the path/graph (i.e., mitigation in relation to the path and its next step) as the limitation above]. The same motivation applies as for claim 1.

Regarding claim 12, Hertz and Hassanzadeh teach a method according to claim 1, comprising identifying techniques which link with a high frequency to one or more techniques within a subsequent tactic in the sequence, and/or identifying techniques which link with a high frequency to one or more techniques within a preceding tactic in the sequence (Hassanzadeh, Rule-based filtering, for example, may apply defined rules that discard particular events/alerts (e.g., false positives) based on how frequently events/alerts with certain characteristics occur, and their relative rate of change with regard to occurrence, [0036]; a dependency is established between a pair of classified alerts 554, where the dependency is a relationship (e.g., Alert A is a prerequisite alert to Alert B, or Alert A is a consequence of Alert B) between the pair of classified alerts 554, [0059]; an alert aggregator 510 can group together multiple classified alerts 554 based on the classification of each alert with respect to a step in the IT CKC and/or ICS CKC. 
The grouping of the multiple classified alerts 554 can be based in part on a same classification assigned to each classified alert of the multiple alerts. In one example, a reconnaissance attack may generate thousands of alerts, which can then be classified individually as indicative of a “reconnaissance” step in the CKC. The alert aggregator 510 can then group the set of reconnaissance alerts into a meta-alert labeled “reconnaissance.”, [0056]; A prerequisite alert for a particular classified alert is an alert that is (i) classified as a preceding step in the CKC process and (ii) precedes the particular classified alert in a known attack path. For example, a prerequisite alert for a particular alert classified as a “command and control” alert would be an alert classified as an “act” alert, according to one CKC process (e.g., as depicted in FIG. 6). A consequence alert for the particular classified alert is an alert that is (i) classified as a subsequent step in the CKC and (ii) follows the particular classified alert in a known attack path. For example, a consequence alert for a particular alert classified as an “exploit” alert would require a classification of “install/modify” according to one CKC process, [0060]) [Examiner interprets the system linking a preceding stage to a subsequent stage through the dependencies, aggregating large volumes of alerts per step, identifying the most frequent cross-step dependencies (high-frequency links), and linking them using prerequisite/consequence dependencies as the limitation above; classified alerts/attack actions = techniques, CKC steps = tactics, frequency = repeated cross-stage dependencies]. The same motivation applies as for claim 1. 
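Under the examiner's reading of claim 12, identifying techniques that link with a high frequency to a subsequent or preceding tactic reduces to counting repeated prerequisite/consequence dependency pairs across CKC steps. A minimal sketch, assuming dependencies arrive as (prerequisite, consequence) tactic pairs; the function name and the count threshold are assumptions:

```python
from collections import Counter

def frequent_links(dependencies, min_count=2):
    """Count prerequisite -> consequence links between CKC steps and
    return only those occurring at least min_count times, i.e. the
    high-frequency cross-step links."""
    counts = Counter(dependencies)
    return {link: n for link, n in counts.items() if n >= min_count}
```

The same counter run over (consequence, prerequisite) pairs would surface high-frequency links back to a preceding tactic, covering the "and/or" branch of the claim.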
Regarding claim 13, Hertz and Hassanzadeh teach a computer system including a processor and memory storing computer program code for performing the steps of the method of claim 1 (Hertz, at least one processor, computer readable storage medium having computer readable program code, [0036]).

Regarding claim 14, Hertz and Hassanzadeh teach a computer program element comprising computer program code to, when loaded into a computer system and executed thereon, cause the computer to perform the steps of a method as claimed in claim 1 (Hertz, the computer readable program code, executable by at least one processor of a computer to perform a method, [0036]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20230247031 A1: “directed to network security, and more particularly, to detecting attacks that involve lateral movement within a computer network”. US 20160301709 A1: “relates to security and network operations”. US 20180048667 A1: “relates to computer and network security and, more particularly, to discovery of attack chains from system monitoring logs”. US 20180219888 A1: “relates to intelligence generation and activity discovery from events in a distributed data processing system”.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMIKSHYA POUDEL, whose telephone number is (703) 756-1540. The examiner can normally be reached 7:30 AM - 5:00 PM, Monday through Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SHEWAYE GELAGAY, can be reached at (571) 272-4219. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.N.P./
Examiner, Art Unit 2436

/TRONG H NGUYEN/
Primary Examiner, Art Unit 2436

Prosecution Timeline

Sep 09, 2024
Application Filed
Jan 27, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591663
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 31, 2026
Patent 12470379
LINK ENCRYPTION AND KEY DIVERSIFICATION ON A HARDWARE SECURITY MODULE
2y 5m to grant Granted Nov 11, 2025
Patent 12452254
SECURE SIGNED FILE UPLOAD
2y 5m to grant Granted Oct 21, 2025
Patent 12341788
NETWORK SECURITY SYSTEMS FOR IDENTIFYING ATTEMPTS TO SUBVERT SECURITY WALLS
2y 5m to grant Granted Jun 24, 2025
Patent 12292969
Provenance Inference for Advanced CMS-Targeting Attacks
2y 5m to grant Granted May 06, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
44%
Grant Probability
99%
With Interview (+80.0%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
