Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The instant application 18/827,962 claims priority to provisional application 63/542,319, which has a filing date of 10/04/2023. Therefore, the effective filing date of the instant application 18/827,962 is 10/04/2023.
Drawings
The drawings submitted on 09/09/2024 with the instant application are acceptable for examination purposes.
Specification
The specification submitted on 09/09/2024 with the instant application is acceptable for examination purposes.
Claim Objections
Claims 1, 2, 5, 16, and 17 are objected to because of the following informalities:
In line 9 of Claim 1, the limitation “analyzing determines an indication abnormality” is unclear because the “indication abnormality” is not clearly associated with “the one or more cybersecurity threat protection indications” by the claim language. The limitation could be re-written as “analyzing determines that at least one of the one or more cybersecurity threat protection indications is abnormal”, or the like, for clarity.
In lines 10-11 of Claim 1, the limitation “and inferring a cybersecurity threat protection application misconfiguration, based on the analyzing” should read: “and inferring a cybersecurity threat protection application misconfiguration, based on the determining”, because the misconfiguration would be inferred in the case in which an abnormal indication was determined, rather than due to the analyzing.
Claim 16 includes limitations similar to those of Claim 1 and is objected to for the same reasons as Claim 1.
In lines 1-2 of Claim 2, the limitation “the misconfiguration indicates a cybersecurity threat protection application false positive” is unclear because the claim language does not establish how the misconfiguration is to indicate an application false positive. The limitation could be re-written similarly to Claim 5, such as: “the cybersecurity threat protection application misconfiguration is determined by a cybersecurity threat protection indication having a false positive status”, or “the cybersecurity threat protection application misconfiguration is based on a false positive indication”, according to paragraph [0035] of the Specification.
Claim 17 includes limitations similar to those of Claim 2 and is objected to for the same reasons as Claim 2.
In line 2 of Claim 5, the limitation “… causes the cybersecurity threat protection indication” should read: “… is based on the at least one of the one or more cybersecurity threat protection indications” in accordance with the correction suggested for Claim 1 above, and for consistency with the antecedent basis established in Claim 1.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5, 9, 10, 12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Narula et al. (US 20210297427 A1), hereinafter Narula, in view of Swafford et al. (US 20190036969 A1), hereinafter Swafford.
Regarding Claim 1:
Narula teaches A computer-implemented method for cybersecurity management comprising: accessing a plurality of cybersecurity threat protection applications (Narula – Figure 1: illustration of a network architecture in which a SOAR platform is deployed along with various cybersecurity applications/tools; and Paragraph [0039]: In the context of the present example, a SOAR platform 102 may be operatively coupled with a SOC of a monitored network to facilitate receipt of alert data pertaining to an incident. SOAR platform 102 may represent a cloud-based SOAR service or a platform provided by a managed security service provider (MSSP). Alternatively or additionally, SOAR platform 102 may include an on-premise SOAR platform that receives data from a wide range of different sources; and Paragraph [0041]: The SOAR platform 102 may facilitate identification and responding to cyberattacks by processing a large volume of alert and log data that may contain information indicative of the type and nature of an attack that may be underway. In an embodiment, the SOAR platform 102 is operable to receive and process actionable alert data pertaining to an alert … related to contextual data via Application Programming Interface (API) 104-1. API 104-1 can enable determining security alerts from multiple sources along with contextual data … the received actionable alert data at 104-3 can be a Security Information and Event Management (SIEM) alert. SIEM tools may generate SIEM alerts, for example, by aggregating data from different internal sources of the monitored network to identify anomalous behavior that may be indicative of a cyberattack. 
Furthermore, other actionable alerts can be received at 104-4 by SOAR platform 102, for example, from solutions and software applications related to Endpoint Detection and Response (EDR) tools and/or services, an Intrusion Detection System (IDS) and so forth; and Paragraph [0035]: The SOAR platform integrates with SIEM, EDR, threat intelligence, and others tools to facilitate threat hunting across multiple tools and sources from a single location), wherein the plurality of cybersecurity threat protection applications is deployed across a managed cybersecurity network (Narula – Figure 1: illustration of a network architecture in which a SOAR platform is deployed along with various cybersecurity applications/tools; and Paragraph [0039]: FIG. 1 is a network architecture 100 in which aspects of the present invention may be implemented in accordance with an embodiment of the present invention. In the context of the present example, a SOAR platform 102 may be operatively coupled with a SOC of a monitored network to facilitate receipt of alert data pertaining to an incident), and wherein the plurality of cybersecurity threat protection applications is managed using a security orchestration, automation, and response (SOAR) platform (Narula – Paragraph [0035]: The SOAR platform integrates with SIEM, EDR, threat intelligence, and others tools to facilitate threat hunting across multiple tools and sources from a single location. The mind map view proposed herein enables visualization, querying, enrichment, and initiation of manual or automated actions from a centralized location in the SOAR platform. The mind map approach is a preferable solution to developing a playbook in certain scenarios; and Paragraph [0039]: In the context of the present example, a SOAR platform 102 may be operatively coupled with a SOC of a monitored network to facilitate receipt of alert data pertaining to an incident.
SOAR platform 102 may represent a cloud-based SOAR service or a platform provided by a managed security service provider (MSSP). Alternatively or additionally, SOAR platform 102 may include an on-premise SOAR platform that receives data from a wide range of different sources. According to one embodiment, SOAR platform 102 is operable to apply decision making logic, combined with context, to provide formalized workflows and enable informed prioritization (triage) of remediation tasks relating to threats observed at the SOC); accumulating one or more cybersecurity threat protection indications from the plurality of cybersecurity threat protection applications (Narula – Figure 1: illustration of a network architecture in which a SOAR platform is deployed along with various cybersecurity applications/tools; and Paragraph [0039]: In the context of the present example, a SOAR platform 102 may be operatively coupled with a SOC of a monitored network to facilitate receipt of alert data pertaining to an incident. SOAR platform 102 may represent a cloud-based SOAR service or a platform provided by a managed security service provider (MSSP). Alternatively or additionally, SOAR platform 102 may include an on-premise SOAR platform that receives data from a wide range of different sources; and Paragraph [0041]: The SOAR platform 102 may facilitate identification and responding to cyberattacks by processing a large volume of alert and log data that may contain information indicative of the type and nature of an attack that may be underway. In an embodiment, the SOAR platform 102 is operable to receive and process actionable alert data pertaining to an alert … related to contextual data via Application Programming Interface (API) 104-1. API 104-1 can enable determining security alerts from multiple sources along with contextual data … the received actionable alert data at 104-3 can be a Security Information and Event Management (SIEM) alert. 
SIEM tools may generate SIEM alerts, for example, by aggregating data from different internal sources of the monitored network to identify anomalous behavior that may be indicative of a cyberattack. Furthermore, other actionable alerts can be received at 104-4 by SOAR platform 102, for example, from solutions and software applications related to Endpoint Detection and Response (EDR) tools and/or services, an Intrusion Detection System (IDS) and so forth; and Paragraph [0035]: The SOAR platform integrates with SIEM, EDR, threat intelligence, and others tools to facilitate threat hunting across multiple tools and sources from a single location); analyzing the one or more cybersecurity threat protection indications that were accumulated (Narula – Paragraph [0039]: In the context of the present example, a SOAR platform 102 may be operatively coupled with a SOC of a monitored network to facilitate receipt of alert data pertaining to an incident. SOAR platform 102 may represent a cloud-based SOAR service or a platform provided by a managed security service provider (MSSP). Alternatively or additionally, SOAR platform 102 may include an on-premise SOAR platform that receives data from a wide range of different sources. According to one embodiment, SOAR platform 102 is operable to apply decision making logic, combined with context, to provide formalized workflows and enable informed prioritization (triage) of remediation tasks relating to threats observed at the SOC; and Paragraph [0041]: The SOAR platform 102 may facilitate identification and responding to cyberattacks by processing a large volume of alert and log data that may contain information indicative of the type and nature of an attack that may be underway. In an embodiment, the SOAR platform 102 is operable to receive and process actionable alert data pertaining to an alert … related to contextual data).
Narula does not expressly teach wherein the analyzing determines an indication abnormality; and inferring a cybersecurity threat protection application misconfiguration, based on the analyzing.
However, Swafford teaches wherein the analyzing determines an indication abnormality (Swafford – Paragraph [0074]: Certain embodiments of the invention likewise reflect an appreciation that large numbers of false positives may be inadvertently generated when a security policy's associated rule fails to accommodate certain circumstances, situations, or conditions; and Paragraph [0171]: As used herein, a risk-adaptive security policy 914 broadly refers to a security policy implemented to be monitored by security analytics system 118 to detect whether it is generating an undesirable number of false positives, and if so, be remediated to lower the number of false positives being generated); and inferring a cybersecurity threat protection application misconfiguration, based on the analyzing (Swafford – Paragraph [0024]: Certain aspects of the invention reflect an appreciation that a security policy may be inadvertently violated as a consequence of the occurrence of a legitimate event or user behavior. Certain aspects of the invention likewise reflect an appreciation that such violations are often the result of certain security policy rules that have been unintentionally misconfigured. Likewise, certain aspects of the invention reflect an appreciation that the number, or volume, or false positives generated by such misconfigurations may be quite large; and Paragraph [0034]: As used herein, an endpoint agent 306 broadly refers to a software agent used in combination with an endpoint device 304 to establish a protected endpoint 302 … In certain of these approaches the software agent is implemented to autonomously decide if a particular action is appropriate for a given event, such as an observed user behavior; and Paragraph [0175]: Referring now to FIG. 
9, noisy security policy remediation operations are begun in certain embodiments by the security analytics system 118 performing ongoing monitoring operations to identify a risk-adaptive security policy 914 that has been violated. In certain embodiments, the endpoint agent 306 may be implemented to determine if the risk-adaptive security policy 914 has been violated. In certain embodiments, the endpoint agent 306 may be implemented to notify the security analytics system 118 if the risk-adaptive security policy 914 has been violated; and Paragraph [0176]: If it is determined that a risk-adaptive security policy 914 has been violated, then a determination is made whether the violated risk-adaptive security policy 914 is noisy … the determination may be made as a result of statistical analysis; and Paragraph [0178]: If it is determined the risk-adaptive security policy 914 is noisy, then the risk of revising the rules, actions, or a combination thereof, associated with the noisy risk-adaptive security policy 914 to reduce the number of false positives is assessed; and Paragraph [0180]: If it is determined the assessed risk is acceptable, the rules, actions, or a combination thereof, associated with the noisy risk-adaptive security policy 914 are revised to reduce the number of false positives).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Narula by incorporating Swafford to arrive at the claimed invention. One would be motivated to incorporate Swafford’s misconfiguration inference, based on alert analyses, into Narula’s method for cybersecurity management via a SOAR platform. This combination would enable beneficial application of the alert intelligence gathered by the SOAR tool(s) for more efficient use of system/network resources in incident resolution, e.g., by reducing false positives.
Regarding Claim 2:
The combination of Narula and Swafford teaches the method of claim 1.
Swafford further teaches wherein the misconfiguration indicates a cybersecurity threat protection application false positive (Swafford – Paragraph [0024]: A method, system and computer-usable medium are disclosed for remediating noisy security policies. Certain aspects of the invention reflect an appreciation that a security policy may be inadvertently violated as a consequence of the occurrence of a legitimate event or user behavior. Certain aspects of the invention likewise reflect an appreciation that such violations are often the result of certain security policy rules that have been unintentionally misconfigured. Likewise, certain aspects of the invention reflect an appreciation that the number, or volume, or false positives generated by such misconfigurations may be quite large (e.g., thousands or tens-of-thousands). Consequently, the false positive generated by these “noisy” security policies may result in the unnecessary consumption of security administration resources); and Paragraph [0175]: In certain embodiments, the endpoint agent 306 may be implemented to determine if the risk-adaptive security policy 914 has been violated. In certain embodiments, the endpoint agent 306 may be implemented to notify the security analytics system 118 if the risk-adaptive security policy 914 has been violated; and Paragraph [0176]: If it is determined that a risk-adaptive security policy 914 has been violated, then a determination is made whether the violated risk-adaptive security policy 914 is noisy, as described in greater detail herein; and Paragraph [0178]: If it is determined the risk-adaptive security policy 914 is noisy, then the risk of revising the rules, actions, or a combination thereof, associated with the noisy risk-adaptive security policy 914 to reduce the number of false positives is assessed).
The motivation to combine the arts is the same as that of Claim 1.
Regarding Claim 3:
The combination of Narula and Swafford teaches the method of claim 2.
Swafford further teaches wherein the false positive is determined by non-permitted application scenarios (Swafford – Paragraph [0072]: a false positive broadly refers to an incorrect conclusion resulting from correctly meeting certain conditions of a test. In particular, as it relates to a noisy security policy, a false positive broadly refers to an incorrect indication that a security policy has been violated. More particularly, such a false positive may be generated as a result of the defined bounds of the security policy's associated rule being met or exceeded as a result the occurrence of a legitimate event, the enactment of a legitimate behavior, or a combination thereof).
The motivation to combine the arts is the same as that of Claim 1.
Regarding Claim 5:
The combination of Narula and Swafford teaches the method of claim 1.
Swafford further teaches wherein the cybersecurity threat protection application misconfiguration causes the cybersecurity threat protection indication (Swafford – Paragraph [0034]: As used herein, an endpoint agent 306 broadly refers to a software agent used in combination with an endpoint device 304 to establish a protected endpoint 302 … In certain of these approaches the software agent is implemented to autonomously decide if a particular action is appropriate for a given event, such as an observed user behavior; and Paragraph [0053]: the security analytics system 118 may be implemented in combination with one or more endpoint agents 306; and Paragraph [0024]: A method, system and computer-usable medium are disclosed for remediating noisy security policies. Certain aspects of the invention reflect an appreciation that a security policy may be inadvertently violated as a consequence of the occurrence of a legitimate event or user behavior. Certain aspects of the invention likewise reflect an appreciation that such violations are often the result of certain security policy rules that have been unintentionally misconfigured. Likewise, certain aspects of the invention reflect an appreciation that the number, or volume, or false positives generated by such misconfigurations may be quite large (e.g., thousands or tens-of-thousands)).
The motivation to combine the arts is the same as that of Claim 1.
Regarding Claim 9:
The combination of Narula and Swafford teaches the method of claim 1.
Swafford further teaches wherein the indication abnormality is based on two or more cybersecurity threat protection indications from a plurality of cybersecurity threat protection applications (Swafford – Paragraph [0034]: As used herein, an endpoint agent 306 broadly refers to a software agent used in combination with an endpoint device 304 to establish a protected endpoint 302 … In certain of these approaches the software agent is implemented to autonomously decide if a particular action is appropriate for a given event, such as an observed user behavior; and Paragraph [0053]: the security analytics system 118 may be implemented in combination with one or more endpoint agents 306; and Paragraph [0177]: As an example, violation of a particular risk-adaptive security policy may generate 5,000 alerts in a 24 hour period. In this example, if the 5,000 alerts are distributed across 5,000 different users, then an assessment may be made that the risk-adaptive security policy 914 is noisy).
The motivation to combine the arts is the same as that of Claim 1.
Regarding Claim 10:
The combination of Narula and Swafford teaches the method of claim 9.
Swafford further teaches wherein the indication abnormality is based on a time-sequenced commonality among the plurality of cybersecurity threat protection applications (Swafford – Paragraph [0034]: As used herein, an endpoint agent 306 broadly refers to a software agent used in combination with an endpoint device 304 to establish a protected endpoint 302 … In certain of these approaches the software agent is implemented to autonomously decide if a particular action is appropriate for a given event, such as an observed user behavior; and Paragraph [0053]: the security analytics system 118 may be implemented in combination with one or more endpoint agents 306; and Paragraph [0177]: As an example, violation of a particular risk-adaptive security policy may generate 5,000 alerts in a 24 hour period. In this example, if the 5,000 alerts are distributed across 5,000 different users, then an assessment may be made that the risk-adaptive security policy 914 is noisy).
The motivation to combine the arts is the same as that of Claim 1.
Regarding Claim 12:
The combination of Narula and Swafford teaches the method of claim 1.
Swafford further teaches further comprising providing a remedial action, based on the inferring (Swafford – Paragraph [0178]: If it is determined the risk-adaptive security policy 914 is noisy, then the risk of revising the rules, actions, or a combination thereof, associated with the noisy risk-adaptive security policy 914 to reduce the number of false positives is assessed; and Paragraph [0180]: If it is determined the assessed risk is acceptable, the rules, actions, or a combination thereof, associated with the noisy risk-adaptive security policy 914 are revised to reduce the number of false positives).
The motivation to combine the arts is the same as that of Claim 1.
Regarding Claim 14:
The combination of Narula and Swafford teaches the method of claim 1.
Swafford further teaches wherein the analyzing or the inferring are performed using machine learning (ML) (Swafford – Paragraph [0051]: In certain embodiments, the analysis techniques for detecting a noisy security policy include one or more of a probabilistic modeling technique, a statistical analysis technique and a machine learning technique. In certain embodiments, the machine learning technique may be either supervised or unsupervised).
The motivation to combine the arts is the same as that of Claim 1.
Regarding Claim 15:
The combination of Narula and Swafford teaches the method of claim 14.
Narula further teaches wherein the ML is trained by data gathered by one or more instantiations of the SOAR platform (Narula – Paragraph [0045]: The SOAR platform 102 can also include one or more interface(s) 206 ... Interface(s) 206 may also provide a communication pathway for one or more components of SOAR platform 102. Examples of such components include, but are not limited to, processing engine(s) 208 and database 210; and Paragraph [0047]: processing engine(s) 208 can include an alert data receiving unit 212, a mind map view generating unit 214, a mind map nodes attaching and training unit 216, a machine learning unit 218, and a mind map view updating and presenting unit 220; and Paragraph [0049]: mind map nodes attaching and training unit 216 may be responsible for feeding information to the machine learning unit 218. For example, the mind map nodes attaching and training unit 216 may extract features of the incident at issue to form a feature set and provide the feature set along with information regarding actions taken by an analyst with respect to the incident at issue to the machine learning unit 218; and Paragraph [0050]: Machine learning unit 218 is responsible for learning associations among incidents, field nodes and actions based on observed actions taken by analysts with respect to incidents. Machine learning unit 218 is also responsible for providing suggested field nodes and actions for a given incident based on the given incident's similarity to prior observed incidents and interactions relating thereto).
The motivation to combine the arts is the same as that of Claim 1.
Regarding Claim 16:
Narula teaches A computer system for cybersecurity management comprising: a memory which stores instructions; one or more processors coupled to the memory wherein the one or more processors, when executing the instructions which are stored (Narula – Paragraph [0019]: Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process), are configured to: access a plurality of cybersecurity threat protection applications (Narula – Figure 1: illustration of a network architecture in which a SOAR platform is deployed along with various cybersecurity applications/tools; and Paragraph [0039]: In the context of the present example, a SOAR platform 102 may be operatively coupled with a SOC of a monitored network to facilitate receipt of alert data pertaining to an incident. SOAR platform 102 may represent a cloud-based SOAR service or a platform provided by a managed security service provider (MSSP). Alternatively or additionally, SOAR platform 102 may include an on-premise SOAR platform that receives data from a wide range of different sources; and Paragraph [0041]: The SOAR platform 102 may facilitate identification and responding to cyberattacks by processing a large volume of alert and log data that may contain information indicative of the type and nature of an attack that may be underway. In an embodiment, the SOAR platform 102 is operable to receive and process actionable alert data pertaining to an alert … related to contextual data via Application Programming Interface (API) 104-1. API 104-1 can enable determining security alerts from multiple sources along with contextual data … the received actionable alert data at 104-3 can be a Security Information and Event Management (SIEM) alert. 
SIEM tools may generate SIEM alerts, for example, by aggregating data from different internal sources of the monitored network to identify anomalous behavior that may be indicative of a cyberattack. Furthermore, other actionable alerts can be received at 104-4 by SOAR platform 102, for example, from solutions and software applications related to Endpoint Detection and Response (EDR) tools and/or services, an Intrusion Detection System (IDS) and so forth; and Paragraph [0035]: The SOAR platform integrates with SIEM, EDR, threat intelligence, and others tools to facilitate threat hunting across multiple tools and sources from a single location), wherein the plurality of cybersecurity threat protection applications is deployed across a managed cybersecurity network (Narula – Figure 1: illustration of a network architecture in which a SOAR platform is deployed along with various cybersecurity applications/tools; and Paragraph [0039]: FIG. 1 is a network architecture 100 in which aspects of the present invention may be implemented in accordance with an embodiment of the present invention. In the context of the present example, a SOAR platform 102 may be operatively coupled with a SOC of a monitored network to facilitate receipt of alert data pertaining to an incident), and wherein the plurality of cybersecurity threat protection applications is managed using a security orchestration, automation, and response (SOAR) platform (Narula – Paragraph [0035]: The SOAR platform integrates with SIEM, EDR, threat intelligence, and others tools to facilitate threat hunting across multiple tools and sources from a single location. The mind map view proposed herein enables visualization, querying, enrichment, and initiation of manual or automated actions from a centralized location in the SOAR platform.
The mind map approach is a preferable solution to developing a playbook in certain scenarios; and Paragraph [0039]: In the context of the present example, a SOAR platform 102 may be operatively coupled with a SOC of a monitored network to facilitate receipt of alert data pertaining to an incident. SOAR platform 102 may represent a cloud-based SOAR service or a platform provided by a managed security service provider (MSSP). Alternatively or additionally, SOAR platform 102 may include an on-premise SOAR platform that receives data from a wide range of different sources. According to one embodiment, SOAR platform 102 is operable to apply decision making logic, combined with context, to provide formalized workflows and enable informed prioritization (triage) of remediation tasks relating to threats observed at the SOC); accumulate one or more cybersecurity threat protection indications from the plurality of cybersecurity threat protection applications (Narula – Figure 1: illustration of a network architecture in which a SOAR platform is deployed along with various cybersecurity applications/tools; and Paragraph [0039]: In the context of the present example, a SOAR platform 102 may be operatively coupled with a SOC of a monitored network to facilitate receipt of alert data pertaining to an incident. SOAR platform 102 may represent a cloud-based SOAR service or a platform provided by a managed security service provider (MSSP). Alternatively or additionally, SOAR platform 102 may include an on-premise SOAR platform that receives data from a wide range of different sources; and Paragraph [0041]: The SOAR platform 102 may facilitate identification and responding to cyberattacks by processing a large volume of alert and log data that may contain information indicative of the type and nature of an attack that may be underway. 
In an embodiment, the SOAR platform 102 is operable to receive and process actionable alert data pertaining to an alert … related to contextual data via Application Programming Interface (API) 104-1. API 104-1 can enable determining security alerts from multiple sources along with contextual data … the received actionable alert data at 104-3 can be a Security Information and Event Management (SIEM) alert. SIEM tools may generate SIEM alerts, for example, by aggregating data from different internal sources of the monitored network to identify anomalous behavior that may be indicative of a cyberattack. Furthermore, other actionable alerts can be received at 104-4 by SOAR platform 102, for example, from solutions and software applications related to Endpoint Detection and Response (EDR) tools and/or services, an Intrusion Detection System (IDS) and so forth; and Paragraph [0035]: The SOAR platform integrates with SIEM, EDR, threat intelligence, and others tools to facilitate threat hunting across multiple tools and sources from a single location); analyze the one or more cybersecurity threat protection indications that were accumulated (Narula – Paragraph [0039]: In the context of the present example, a SOAR platform 102 may be operatively coupled with a SOC of a monitored network to facilitate receipt of alert data pertaining to an incident. SOAR platform 102 may represent a cloud-based SOAR service or a platform provided by a managed security service provider (MSSP). Alternatively or additionally, SOAR platform 102 may include an on-premise SOAR platform that receives data from a wide range of different sources.
According to one embodiment, SOAR platform 102 is operable to apply decision making logic, combined with context, to provide formalized workflows and enable informed prioritization (triage) of remediation tasks relating to threats observed at the SOC; and Paragraph [0041]: The SOAR platform 102 may facilitate identification and responding to cyberattacks by processing a large volume of alert and log data that may contain information indicative of the type and nature of an attack that may be underway. In an embodiment, the SOAR platform 102 is operable to receive and process actionable alert data pertaining to an alert … related to contextual data).
Narula does not expressly teach wherein the analyzing determines an indication abnormality; and inferring a cybersecurity threat protection application misconfiguration, based on the analyzing.
However, Swafford teaches wherein the analyzing determines an indication abnormality (Swafford – Paragraph [0074]: Certain embodiments of the invention likewise reflect an appreciation that large numbers of false positives may be inadvertently generated when a security policy's associated rule fails to accommodate certain circumstances, situations, or conditions; and Paragraph [0171]: As used herein, a risk-adaptive security policy 914 broadly refers to a security policy implemented to be monitored by security analytics system 118 to detect whether it is generating an undesirable number of false positives, and if so, be remediated to lower the number of false positives being generated); and infer a cybersecurity threat protection application misconfiguration, based on the analyzing (Swafford – Paragraph [0024]: Certain aspects of the invention reflect an appreciation that a security policy may be inadvertently violated as a consequence of the occurrence of a legitimate event or user behavior. Certain aspects of the invention likewise reflect an appreciation that such violations are often the result of certain security policy rules that have been unintentionally misconfigured. Likewise, certain aspects of the invention reflect an appreciation that the number, or volume, of false positives generated by such misconfigurations may be quite large; and Paragraph [0034]: As used herein, an endpoint agent 306 broadly refers to a software agent used in combination with an endpoint device 304 to establish a protected endpoint 302 … In certain of these approaches the software agent is implemented to autonomously decide if a particular action is appropriate for a given event, such as an observed user behavior; and Paragraph [0175]: Referring now to FIG. 
9, noisy security policy remediation operations are begun in certain embodiments by the security analytics system 118 performing ongoing monitoring operations to identify a risk-adaptive security policy 914 that has been violated. In certain embodiments, the endpoint agent 306 may be implemented to determine if the risk-adaptive security policy 914 has been violated. In certain embodiments, the endpoint agent 306 may be implemented to notify the security analytics system 118 if the risk-adaptive security policy 914 has been violated; and Paragraph [0176]: If it is determined that a risk-adaptive security policy 914 has been violated, then a determination is made whether the violated risk-adaptive security policy 914 is noisy … the determination may be made as a result of statistical analysis; and Paragraph [0178]: If it is determined the risk-adaptive security policy 914 is noisy, then the risk of revising the rules, actions, or a combination thereof, associated with the noisy risk-adaptive security policy 914 to reduce the number of false positives is assessed; and Paragraph [0180]: If it is determined the assessed risk is acceptable, the rules, actions, or a combination thereof, associated with the noisy risk-adaptive security policy 914 are revised to reduce the number of false positives).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Narula, further incorporating Swafford, to arrive at the claimed invention. One would be motivated to incorporate Swafford’s misconfiguration inference based on alert analyses into Narula’s system for cybersecurity management via a SOAR platform. This combination would enable beneficial application of the alert intelligence gathered by the SOAR tool(s) for more efficient use of system/network resources in incident resolution, e.g., by reducing false positives.
Regarding Claim 17:
Claim 17 is a system claim with limitations corresponding to those of method Claim 2. Therefore, Claim 17 is rejected with the same combination and rationale as that of the rejection of Claim 2.
Regarding Claim 18:
Claim 18 is a system claim with limitations corresponding to those of method Claim 6. Therefore, Claim 18 is rejected with the same combination and rationale as that of the rejection of Claim 6.
Regarding Claim 19:
Claim 19 is a system claim with limitations corresponding to those of method Claim 9. Therefore, Claim 19 is rejected with the same combination and rationale as that of the rejection of Claim 9.
Regarding Claim 20:
Claim 20 is a system claim with limitations corresponding to those of method Claim 14. Therefore, Claim 20 is rejected with the same combination and rationale as that of the rejection of Claim 14.
Claim(s) 4 and 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Narula, in view of Swafford and Bridges et al. (Bridges, R. A., Rice, A. E., Oesch, S., Nichols, J. A., Watson, C., Spakes, K., Norem, S., Huettel, M., Jewell, B., Weber, B., Gannon, C., Bizovi, O., Hollifield, S. C., & Erwin, S. (2023). Testing SOAR tools in use. Computers & Security, 129, 103201. https://doi.org/10.1016/j.cose.2023.103201), hereinafter Bridges.
Regarding Claim 4:
The combination of Narula and Swafford teaches the method of claim 2.
The combination of Narula and Swafford does not expressly teach wherein the false positive is determined by security operations center personnel.
However, Bridges teaches wherein the false positive is determined by security operations center personnel (Bridges – Section 5.6: To test a SOAR tool, representative investigations needed to be possible for the analysts to work; and Section 5.6.1: The NIDS scenarios were the first type of investigations designed to be representative of common tasks for a Tier 1 (i.e., junior) operator. For these scenarios, the analyst investigated an alert generated by Suricata, the NIDS component of the test environment. The analyst had to understand the given NIDS alert and determine if it was a true or false indicator of compromise on the network).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Narula and Swafford, further incorporating Bridges, to arrive at the claimed invention. One would be motivated to incorporate Bridges’s teachings of receiving false positive determinations from SOC personnel into Narula and Swafford’s system for cybersecurity management via a SOAR platform. This additional consideration would enhance the system by integrating machine and human intelligence to optimize security outcomes.
Regarding Claim 6:
The combination of Narula and Swafford teaches the method of claim 1.
The combination of Narula and Swafford does not expressly teach wherein the indication abnormality is based on one or more cybersecurity threat protection indications from a single type of cybersecurity threat protection application.
However, Bridges further teaches wherein the indication abnormality is based on one or more cybersecurity threat protection indications from a single type of cybersecurity threat protection application (Bridges – Section 5.6: To test a SOAR tool, representative investigations needed to be possible for the analysts to work; and Section 5.6.1: The NIDS scenarios were the first type of investigations designed to be representative of common tasks for a Tier 1 (i.e., junior) operator. For these scenarios, the analyst investigated an alert generated by Suricata, the NIDS component of the test environment. The analyst had to understand the given NIDS alert and determine if it was a true or false indicator of compromise on the network).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Narula and Swafford, further incorporating Bridges, to arrive at the claimed invention. One would be motivated to incorporate Bridges’s teachings of detecting indication anomalies by observing the behavior of security applications of a single type into Narula and Swafford’s system for cybersecurity management via a SOAR platform. This combination would enhance the method with precision in identifying targeted scenarios that indicate potential system malfunction.
Claim(s) 7 and 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Narula, in view of Swafford and Gelman et al. (US 20230362184 A1), hereinafter Gelman.
Regarding Claim 7:
The combination of Narula and Swafford teaches the method of claim 1.
The combination of Narula and Swafford does not expressly teach wherein the indication abnormality is based on a time-sequenced commonality among two or more cybersecurity threat protection indications.
However, Gelman teaches wherein the indication abnormality is based on a time-sequenced commonality among two or more cybersecurity threat protection indications (Gelman – Paragraph [0074]: The temporal computation processor 309 can provide context classification data to the machine learning module 306 that tracks data across the entirety of a security operations center (SOC), for example, shown in FIG. 1. Context data can be captured from various data sources having information regarding customer vulnerability and alert activity patterns, computer network size, configuration, state, etc., detection sensor activity, and so on … Several temporal features and summary statistics over numerous granularity time windows are used to encapsulate these behavioral signals. This data is capable of capturing information such as sudden anomalous peaks in alert volume that may indicate sensor noise or misconfigurations).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Narula and Swafford, further incorporating Gelman, to arrive at the claimed invention. One would be motivated to incorporate Gelman’s teaching of identifying anomalous alert patterns based on activity in varying-granularity time windows into Narula and Swafford’s combined method for cybersecurity management via a SOAR platform. This additional functionality would enhance the method by providing concrete parameters for detecting abnormal sensor/tool behavior.
Regarding Claim 8:
The combination of Narula, Swafford, and Gelman teaches the method of claim 7.
Gelman further teaches wherein the time-sequenced commonality occurs over more than one login of an endpoint application (Gelman – Paragraph [0074]: The temporal computation processor 309 can provide context classification data to the machine learning module 306 that tracks data across the entirety of a security operations center (SOC), for example, shown in FIG. 1. Context data can be captured from various data sources having information regarding customer vulnerability and alert activity patterns, computer network size, configuration, state, etc., detection sensor activity, and so on … This data is capable of capturing information such as sudden anomalous peaks in alert volume that may indicate sensor noise or misconfigurations. Event features can be computed by the temporal computation processor 309 based on different time windows and predicates over various time periods. This is performed to capture activity at different levels of temporal granularity, for example, time patterns measured in seconds, hours, weeks, and so on resulting in a plurality of numerical features. Possible predicates may include … count of alerts on an endpoint).
The motivation to combine the references is the same as that for Claim 7.
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Narula, in view of Swafford and Levy (US 10601876 B1), hereinafter Levy.
Regarding Claim 11:
The combination of Narula and Swafford teaches the method of claim 9.
The combination of Narula and Swafford does not expressly teach wherein the indication abnormality comprises a positive threat protection indication from one cybersecurity threat protection application and a contemporaneous negative threat protection indication from another cybersecurity threat protection application.
However, Levy teaches wherein the indication abnormality comprises a positive threat protection indication from one cybersecurity threat protection application and a contemporaneous negative threat protection indication from another cybersecurity threat protection application (Levy – Col. 1, Line 18-26: when two or more security applications reach conflicting decisions about how to handle a particular action (e.g., downloading a file, writing data to a server, creating a new cloud computing account, accessing a secure database, etc.), insecurities may occur. If one security application determines that the action should be permitted, and another decides to block it, there is no decision-making as to which of the conflicting decisions should override the other; and Figure 2: Illustration of security application policy conflict; and Col. 8, Line 17-52: security application 101 has a corresponding policy 201, and security application 102 has its own policy 202 … in some embodiments both policies may also have corresponding timing data (e.g., times actions are to be performed, time durations, time zones, etc.) … It should be noted that, depending on timing data associated with policy 201 and policy 202, a conflict may or may not exist as described above. For example, if timing is not considered and the “Action” of each policy is applicable across all times, a conflict will arise as noted above).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Narula and Swafford, further incorporating Levy, to arrive at the claimed invention. One would be motivated to incorporate Levy’s teaching of identifying and resolving conflicts in security application policies into Narula and Swafford’s combined method for cybersecurity management via a SOAR platform. This added consideration from Levy would ensure that conflicting security policies among security tools in a network do not leave openings for exploitation by attackers.
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Narula, in view of Swafford and Hanson et al. (US 12511395 B1), hereinafter Hanson.
Regarding Claim 13:
The combination of Narula and Swafford teaches the method of claim 12.
The combination of Narula and Swafford does not expressly teach wherein the remedial action is ingested by the SOAR for automatic reconfiguration.
However, Hanson teaches wherein the remedial action is ingested by the SOAR for automatic reconfiguration (Hanson – Col. 9, Line 37-47: In some examples, a SOAR service 106 includes a playbooks manager 164 that enables users to automate actions or series of actions by creating digital “playbooks” that can be executed by the SOAR service 106. At a high level, a playbook represents a customizable computer program that can be executed by a SOAR service 106 to automate a wide variety of possible operations related to an IT environment. These operations-such as quarantining devices, modifying firewall settings, restarting servers, and so forth—are typically performed by various security products by abstracting product capabilities using an integrated “app model.”).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Narula and Swafford, further incorporating Hanson, to arrive at the claimed invention. One would be motivated to incorporate Hanson’s teaching of automated SOAR-triggered responses, including reconfiguring/resetting network security tools, into Narula and Swafford’s combined method for cybersecurity management via a SOAR platform. This additional functionality would result in further ease of use and efficiency of the system.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Dojka et al. (US 20210352136 A1) teaches a system for security monitoring of user accounts in a cloud environment, including a capability to determine misconfigured application settings.
Niv et al. (US 12008222 B1) teaches systems and methods for detecting cyberattacks via orchestration platform, including those that abuse misconfigured permissions.
Kunchakarra et al. (US 20210049127 A1) teaches a method for establishing and maintaining resource configurations in a monitored network.
van Ede et al. (van Ede, T., Khasuntsev, N., Steen, B., & Continella, A. (2022). Detecting anomalous misconfigurations in AWS Identity and Access Management Policies. Proceedings of the 2022 on Cloud Computing Security Workshop, 63–74. https://doi.org/10.1145/3560810.3564264) teaches a method for detecting overly permissive or overly restrictive IAM policies.
Pandurangi et al. (US 20220046059 A1) teaches systems and methods for identifying and remediating cloud application misconfigurations that create significant security risks.
Maloney et al. (US 20230161604 A1) teaches methods and systems for implementing a configuration tool which monitors behavior and status of network applications in order to automatically update the application configurations as necessary.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS JOSEPH DILUZIO whose telephone number is (703)756-1229. The examiner can normally be reached Mon - Fri -- 7:30 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yin-Chen Shaw can be reached at 571-272-8878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS JOSEPH DILUZIO/Examiner, Art Unit 2498
/YIN CHEN SHAW/Supervisory Patent Examiner, Art Unit 2498