Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 6-7, 12-18, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Murphy et al. (US 20250350628 A1).
Regarding claim 1, Murphy teaches a system comprising: (system 0009+0059) a processor system; and a memory that stores computer-executable instructions that are executable by the processor system to at least: (0009+0059; computing system including a processor and memory is configured to perform operations)
receive an alert, (system receives an alert) which indicates that a potentially anomalous event (security event) has occurred with regard to an entity; (0955; event related to network entity) ([1083] In response to receiving 3300 such an alert (e.g., alert 3350) concerning an event (e.g., event 62) within a computing platform (e.g., computing platform 60), threat mitigation process 10 may define a query (e.g., query 3352) for researching the alert (e.g., alert 3350). For example, assume that the alert (e.g., alert 3350) concerns an event (e.g., event 62) in which a user (e.g., 42) is e.g., downloading a large quantity of files to an IP address in Russia, wherein these files are highly confidential and are being downloaded in the middle of the night. Accordingly and in response to this alert (e.g., alert 3350), threat mitigation process 10 may define a query (e.g., query 3352) that inquires into the specifics of the email traffic of the user (e.g., user 42) who is the subject of this alert (e.g., alert 3350). The result set (e.g., result set 3354) generated by this query (e.g., query 3352) may be provided to a generative AI model (e.g., generative AI model 3356) for subsequent processing. Accordingly, threat mitigation process 10 may be configured to ensure that the result set (e.g., result set 3354) produced in response to the query (e.g., query 3352) is sized so that it is processable (i.e., not too big and not too small) by the generative AI model (e.g., generative AI model 3356).).
(0078; For example, when a security event is detected, SIEM system 230 might log additional information, generate an alert and instruct other security controls to mitigate the security event. Accordingly, SIEM system 230 may be configured to monitor and log the activity of security-relevant subsystems 226. [0390] Alert Generation: Upon detecting suspicious activities, agents may generate alerts. These alerts can be configured according to severity levels and are sent to administrators or a central monitoring system for further action. [0967] Threat mitigation process 10 may obtain 3002 entity data for the network entity (e.g., one or more network entities 64) from a plurality of data sources (e.g., data sources 66), thus defining a plurality of network entity data portions (e.g., network entity data portions 68). [0070] Such an AI/ML process (e.g., AI/ML process 56) may begin with the collection of vast amounts of data from multiple sources within the computer network. This may include logs from firewalls, intrusion detection and prevention systems (IDS/IPS), endpoints, applications, servers, and user activity. [0072] For instance, an AI/ML model may detect that a user is accessing files at unusual hours or transferring unusually large amounts of data to an external server, which is a behavior that might be missed by traditional tools. [0085] Threat mitigation process 10 may generate 336 a security profile (e.g., security profile 350) based, at least in part, upon system-defined consolidated platform information 236. Through the use of security profile (e.g., security profile 350), the user/owner/operator of computing platform 60 may be able to see that e.g., they have a security score of 605 out of a possible score of 1,000, wherein the average customer has a security score of 237. While security profile 350 is shown in the example to include several indicators that may enable a user to compare (in this example) computing platform 60 to other computing platforms, this is for illustrative purposes only and is not intended to be a limitation of this disclosure, as it is understood that other configurations are possible and are considered to be within the scope of this disclosure.)
generate a profile of a threat actor using information (0894-0895; rules defining concerning behavior) that describes behavior of the threat actor, the profile indicating a plurality of behaviors that the threat actor is known to use; (0894-0895; historically defined concerning behavior, including [0900] Threat mitigation process 10 may compare 2706 such monitored activity (e.g., monitored activity 326) to the one or more detection rules (e.g., detection rules 324) to determine if such monitored activity (e.g., monitored activity 326) includes suspect activity indicative of a security event. [0901] As discussed above, examples of such security events may include but are not limited to access auditing; anomalies; authentication; denial of services; exploitation; malware; phishing; spamming; reconnaissance; and/or web attack within a monitored computing platform (e.g., computing platform 60). [0787] Upon executing 2316 this recommended next step, threat mitigation process 10 may determine that User X is acting in a very suspicious manner. Accordingly, threat mitigation process 10 may automatically perform 2318 one or more investigative operations concerning User X with respect to the security event. For example, threat mitigation process 10 may automatically perform 2318 one or more investigative operations concerning the network usage of User X, the background of User X, the web browsing history of User X, etc. All of this research and investigation may result in threat mitigation process 10 defining the recommended action of disabling all accounts of User X. [0365] Anomaly Detection: Generative models, such as Generative Adversarial Networks (GANs), can be trained on normal network traffic data to understand what typical network behavior looks like. Once trained, these models can generate new network traffic data that is expected to be similar to the “normal” traffic. By comparing real network traffic to these generated patterns, anomalies that could indicate potential threats, such as DDoS attacks or unauthorized access, can be detected more efficiently. Anomalies stand out because they deviate significantly from the generated “normal” patterns.)
trigger an artificial intelligence (AI) model to determine (analyze and conclude, see 0220) that the threat actor performs a malicious activity with regard to the entity ([0387] Monitoring Network Traffic: Agents may continuously monitor network traffic for signs of unusual or suspicious behavior. This includes analyzing packets, inspecting protocols, and scrutinizing port activity, among other things. [0388] Detection of Anomalies: Agents may use predefined rules or sophisticated algorithms (including machine learning models) to identify deviations from normal network behavior, which could indicate an intrusion or an attempt at one. [0220] Once the investigation is complete, threat mitigation process 10 may generate 1556 a conclusion (e.g., conclusion 288) concerning the detected security event (e.g., a Denial of Services attack) based, at least in part, upon the detected security event (e.g., a Denial of Services attack), the one or more artifacts (e.g., artifacts 250), and artifact information 286. Threat mitigation process 10 may document 1558 the conclusion (e.g., conclusion 288), report 1560 the conclusion (e.g., conclusion 288) to a third-party (e.g., the user/owner/operator of computing platform 60). Further, threat mitigation process 10 may obtain 1562 supplemental artifacts and artifact information (if needed to further the investigation).)
by taking into consideration an extent to which the potentially anomalous event (security event or log data, per the mapping above) corresponds to the plurality of behaviors, (0216-0220; 0389; teaching log data and artifacts used in the analysis/AI system; 0441; A loop facilitates the sequential examination of collected data, enabling the AI system to methodically identify unusual patterns or signatures indicative of malicious activities. The complexity of network security investigations is further addressed through the implementation of nested loops, where a loop is embedded within another, thereby allowing for multi-layered analysis.)
wherein the AI model is triggered by providing an AI prompt (0427-0432; 0465; prompt tailored for threat analysis and mitigation) together with contextual information as inputs to the AI model, (0408; [0410] Format Model Prompts: Tailor prompts to fit specific use cases or to elicit more accurate responses from the model. This might include adding specific instructions or context to the prompt that guides the model in generating the desired output.)
the AI prompt requesting a determination whether the threat actor performs the malicious activity with regard to the entity, (0427-0432; 0465; prompt tailored for threat analysis and mitigation)
the contextual information comprising the profile of the threat actor and a description of the potentially anomalous event, (0141-0142; 0219-0220; artifact information may include information on known bad actors; [0982] Resources External to the Computing Platform: These include external threat intelligence feeds, blacklists, vulnerability databases, and industry alerts. Such resources may provide context and enrich local network data, helping identify known malicious IPs, emerging attack vectors, and exploitable vulnerabilities relevant to the organization.)
wherein the contextual information comprises context regarding the AI prompt; ([0410] Format Model Prompts: Tailor prompts to fit specific use cases or to elicit more accurate responses from the model. This might include adding specific instructions or context to the prompt that guides the model in generating the desired output. [0982] Resources External to the Computing Platform: These include external threat intelligence feeds, blacklists, vulnerability databases, and industry alerts. Such resources may provide context and enrich local network data, helping identify known malicious IPs, emerging attack vectors, and exploitable vulnerabilities relevant to the organization. [1280] Prompts (e.g., prompt 3660) may include additional context to improve output relevance (e.g., data points, sample inputs and outputs, formatting preferences, audience specifications, or quality criteria). In advanced use cases, users may employ techniques like zero-shot prompting (e.g., asking the generative AI model to perform a task without examples), few-shot prompting (e.g., providing examples of desired behavior to the generative AI model), or chain-of-thought prompting (e.g., encouraging the generative AI model to reason step-by-step before giving a final answer). Such strategies may help enhance output accuracy, reliability, and transparency, especially in high-stakes or logic-driven tasks.)
and as a result of the AI model determining that the threat actor performs the malicious activity with regard to the entity, trigger execution of a remedial operation with regard to the potentially anomalous event. (0144-0147; [0135] In response to detecting such a DoS attack, threat mitigation process 10 may effectuate 808 one or more remedial operations. For example and with respect to such a DoS attack, threat mitigation process 10 may effectuate 808 e.g., a remedial operation that instructs WAF (i.e., Web Application Firewall) 212 to deny all incoming traffic from the identified attacker based upon e.g., protocols, ports or the originating IP addresses.)
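Examiner's note: by way of a non-limiting illustration of the prompt-with-context mechanism mapped above, the following minimal Python sketch assembles an AI prompt from a threat-actor profile and an event description, requests a determination, and gates a remedial operation on the model's answer. All function and field names below are hypothetical and assume a generic chat-style generative-model interface; they are not drawn from Murphy.

# Illustrative sketch only; names are hypothetical and not taken from Murphy (US 20250350628 A1).

def call_generative_model(prompt: str) -> str:
    """Stand-in for any hosted chat-style generative AI endpoint (assumed interface)."""
    return "MALICIOUS"  # assumed verdict vocabulary: "MALICIOUS" or "BENIGN"

def build_prompt(actor_profile: dict, event_description: str) -> str:
    # The AI prompt requests a determination and carries the contextual information:
    # the profile of the threat actor and a description of the potentially anomalous event.
    behaviors = "; ".join(actor_profile["known_behaviors"])
    return (
        "Determine whether the threat actor performs a malicious activity with regard "
        "to the entity. Answer MALICIOUS or BENIGN.\n"
        f"Threat-actor profile ({actor_profile['name']}): {behaviors}\n"
        f"Potentially anomalous event: {event_description}"
    )

def trigger_remedial_operation(event_description: str) -> None:
    # Stand-in for a remedial operation, e.g., instructing a WAF to deny the attacker's
    # traffic (cf. Murphy [0135]).
    print(f"Remediating: {event_description}")

def handle_alert(actor_profile: dict, event_description: str) -> None:
    verdict = call_generative_model(build_prompt(actor_profile, event_description))
    if verdict.strip().upper() == "MALICIOUS":
        trigger_remedial_operation(event_description)

handle_alert(
    {"name": "ActorX", "known_behaviors": ["off-hours bulk downloads", "exfiltration to foreign IPs"]},
    "large confidential file transfer to a foreign IP at 03:00",
)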
Regarding claim 2, Murphy teaches the system of claim 1 as discussed above; Murphy further teaches wherein the computer-executable instructions are executable by the processor system to at least:
reduce a likelihood that a determination of the AI model, which indicates that the threat actor performs the malicious activity with regard to the entity, is a false positive (providing clarity, such as context, reduces false positives) (0408; [0409] Preprocess User Inputs: Clean and structure user queries into a format that the model can more effectively understand and process. This could involve correcting typos, removing unnecessary punctuation, or structuring the input into a more coherent prompt. [0410] Format Model Prompts: Tailor prompts to fit specific use cases or to elicit more accurate responses from the model. This might include adding specific instructions or context to the prompt that guides the model in generating the desired output.)
by providing the AI prompt together with the contextual information, (0575; input sequences with context; see above and claim 1 mapping on prompt engineering)
which comprises the profile of the threat actor and the description of the potentially anomalous event, (0216-0220; 0389; teaching log data and artifacts used in the analysis/AI system; 0441; A loop facilitates the sequential examination of collected data, enabling the AI system to methodically identify unusual patterns or signatures indicative of malicious activities. The complexity of network security investigations is further addressed through the implementation of nested loops, where a loop is embedded within another, thereby allowing for multi-layered analysis.) as the inputs to the AI model. ([0409] Preprocess User Inputs: Clean and structure user queries into a format that the model can more effectively understand and process. This could involve correcting typos, removing unnecessary punctuation, or structuring the input into a more coherent prompt. [0410] Format Model Prompts: Tailor prompts to fit specific use cases or to elicit more accurate responses from the model. This might include adding specific instructions or context to the prompt that guides the model in generating the desired output.) (0141-0142; 0219-0220; artifact information may include information on known bad actors; [0982] Resources External to the Computing Platform: These include external threat intelligence feeds, blacklists, vulnerability databases, and industry alerts. Such resources may provide context and enrich local network data, helping identify known malicious IPs, emerging attack vectors, and exploitable vulnerabilities relevant to the organization.)
Regarding claim 6, Murphy teaches a method implemented by a computing system, the method comprising: (0005; method)
receiving an alert, (alerts based on activity) which indicates that a potentially anomalous event (event data) has occurred with regard to an entity (0955; event related to network entity) (0078; For example, when a security event is detected, SIEM system 230 might log additional information, generate an alert and instruct other security controls to mitigate the security event. Accordingly, SIEM system 230 may be configured to monitor and log the activity of security-relevant subsystems 226. [0390] Alert Generation: Upon detecting suspicious activities, agents may generate alerts. These alerts can be configured according to severity levels and are sent to administrators or a central monitoring system for further action. [0967] Threat mitigation process 10 may obtain 3002 entity data for the network entity (e.g., one or more network entities 64) from a plurality of data sources (e.g., data sources 66), thus defining a plurality of network entity data portions (e.g., network entity data portions 68). [0070] Such an AI/ML process (e.g., AI/ML process 56) may begin with the collection of vast amounts of data from multiple sources within the computer network. This may include logs from firewalls, intrusion detection and prevention systems (IDS/IPS), endpoints, applications, servers, and user activity. [0072] For instance, an AI/ML model may detect that a user is accessing files at unusual hours or transferring unusually large amounts of data to an external server, which is a behavior that might be missed by traditional tools. [0085] Threat mitigation process 10 may generate 336 a security profile (e.g., security profile 350) based, at least in part, upon system-defined consolidated platform information 236. Through the use of security profile (e.g., security profile 350), the user/owner/operator of computing platform 60 may be able to see that e.g., they have a security score of 605 out of a possible score of 1,000, wherein the average customer has a security score of 237. While security profile 350 is shown in the example to include several indicators that may enable a user to compare (in this example) computing platform 60 to other computing platforms, this is for illustrative purposes only and is not intended to be a limitation of this disclosure, as it is understood that other configurations are possible and are considered to be within the scope of this disclosure);
generating a profile of a threat actor (gathering information related to an event or a userID/username) using information that describes behavior (log data) of the threat actor; (0070; collecting and cleaning data, including user activity and intrusion activity; 0072; They can identify a wide range of security events, such as attempts at unauthorized access, insider threats, phishing attacks, data exfiltration, lateral movement within the network, and signs of malware or ransomware. For instance, an AI/ML model may detect that a user is accessing files at unusual hours or transferring unusually large amounts of data to an external server, which is a behavior that might be missed by traditional tools; [0234] Naturally, the subject matter of these individual data fields may vary depending upon the type of information available via these security-relevant subsystems (e.g., security-relevant subsystem 1650, security-relevant subsystem 1652 and security-relevant subsystem 1654). As (in this example) these are security-relevant subsystems, the information available from these security-relevant subsystems concerns the security of computing platform 60 and/or any security events (e.g., access auditing; anomalies; authentication; denial of services; exploitation; malware; phishing; spamming; reconnaissance; and/or web attack) occurring therein. For example, some of these data fields may concern e.g., user names, user IDs, device locations, device types, device IP addresses, source IP addresses, destination IP addresses, port addresses, deployed operating systems, utilized bandwidth, etc. [0242] As discussed above, data field 1686 within unified platform 290 (e.g., a platform effectuated by threat mitigation process 10) concerns a user ID (and is entitled USER_ID). For this example, assume that: [0243] data field 1656 within security-relevant subsystem 1650 also concerns a user ID and is entitled USER; [0244] data field 1666 within security-relevant subsystem 1652 also concerns a user ID and is entitled ID; and [0245] data field 1676 within security-relevant subsystem 1654 also concerns a user ID and is entitled USR_ID.)
and triggering an artificial intelligence (AI) model (user instruction prompt) to determine whether the threat actor performs a malicious activity (0068-0071; anomaly and security event detection/analysis/processing) with regard to the entity by providing an AI prompt (user instruction prompt), ([1278] Threat mitigation process 10 may enable 3622 a user (e.g., user 42) to define a prompt (e.g., prompt 3660) for at least one of the two or more generative AI nodes (e.g., generative AI nodes 3650). 0407-0410; 0427-0432; fine-tuned AI models accepting user prompts; [0465] Referring also to FIG. 36, threat mitigation process 10 may define 2100 a formatting script (e.g., formatting script 304) for use with a Generative AI model (e.g., generative AI model 302). An example of such a formatting script (e.g., formatting script 304) may include but is not limited to a group of one or more prompts that are tailored to the specific use case or application for which the Generative AI model (e.g., generative AI model 302) is deployed. Specifically, the formatting script (e.g., formatting script 304) may include one or more discrete instructions for the Generative AI model (e.g., generative AI model 302) and/or the large language model (e.g., large language model 308). Such instructions for the Generative AI model (e.g., generative AI model 302) and/or the large language model (e.g., large language model 308) may include: formatting instructions and/or content instructions.)
which comprises the profile of the threat actor (mapping above + [0990] Threat mitigation process 10 may process 3006 the consolidated network entity data (e.g., consolidated network entity data 70) to generate analysis data (e.g., analysis data 72) that concerns the event (e.g., event 62) and/or the network entity (e.g., one or more network entities 64). For this example, assume that analysis data (e.g., analysis data 72) concerns one or more network entities 64 (e.g., who is user 42, what is their title, what kind of content do they have access to, how long have they been with the company, do they have any incident history, is this activity normal for them, etc.) and event 62 (e.g., what is being streamed, is it confidential/sensitive, how large is the content, where is the recipient located, is the recipient a known bad actor, etc.). [0991] Accordingly and when processing 3006 the consolidated network entity data (e.g., consolidated network entity data 70) to generate analysis data (e.g., analysis data 72) that concerns the event (e.g., event 62) and/or the network entity (e.g., one or more network entities 64), threat mitigation process 10 may determine 3008 a position and a history of any network user (in this example, user 42) involved in the event (e.g., event 62). For example, if user 42 is a marketing executive and they are streaming the latest marketing video, that may not be concerning. However, if user 42 is a mail room employee and they are streaming the latest technical disclosure video, that may be concerning. [0218] Threat mitigation process 10 may obtain 1552 artifact information (e.g., artifact information 286) concerning the one or more artifacts (e.g., artifacts 250), wherein artifact information 286 may be obtained from information resources included within (or external to) computing platform 60. [0219] For example and when obtaining 1552 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250), threat mitigation process 10 may obtain 1554 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250) from one or more investigation resources (such as third-party resources that may e.g., provide information on known bad actors). [0220] Once the investigation is complete, threat mitigation process 10 may generate 1556 a conclusion (e.g., conclusion 288) concerning the detected security event (e.g., a Denial of Services attack) based, at least in part, upon the detected security event (e.g., a Denial of Services attack), the one or more artifacts (e.g., artifacts 250), and artifact information 286. Threat mitigation process 10 may document 1558 the conclusion (e.g., conclusion 288), report 1560 the conclusion (e.g., conclusion 288) to a third-party (e.g., the user/owner/operator of computing platform 60). Further, threat mitigation process 10 may obtain 1562 supplemental artifacts and artifact information (if needed to further the investigation).)
and a description of the potentially anomalous event, (See also 0217-0220; [0182] Investigative Information (a portion of analytical information): Unified searching and/or automated searching, such as e.g., a security event occurring and searches being performed to gather artifacts concerning that security event. [0141] Upon detecting 900 such a security event within computing platform 60, threat mitigation process 10 may gather 904 artifacts (e.g., artifacts 250) concerning the above-described security event. When gathering 904 artifacts (e.g., artifacts 250) concerning the above-described security event, threat mitigation process 10 may gather 906 artifacts concerning the security event from a plurality of sources associated with the computing platform, wherein examples of such plurality of sources may include but are not limited to the various log files maintained by SIEM system 230, and the various log files directly maintained by the security-relevant subsystems.)
as an input to the AI model, (0363; the threat mitigation process using AI models; see mapping above describing prompting the AI models; 0211 teaches the input of the log files into the threat mitigation process; [0212] Threat mitigation process 10 may identify 1460 more threat-pertinent content 280 included within the processed content, wherein identifying 1460 more threat-pertinent content 280 included within the processed content may include processing 1462 the processed content to identify actionable processed content that may be used by a threat analysis engine (e.g., SIEM system 230) for correlation purposes. Threat mitigation process 10 may route 1464 more threat-pertinent content 280 to this threat analysis engine (e.g., SIEM system 230). [0219] For example and when obtaining 1552 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250), threat mitigation process 10 may obtain 1554 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250) from one or more investigation resources (such as third-party resources that may e.g., provide information on known bad actors).)
the AI prompt requesting a determination (a prompt engineered around the application and purpose such as threat mitigation) whether the threat actor performs the malicious activity (0242-0245; threat event associated with a user; [0786] Review the user identity associated with the event and look for suspicious activity that may be associated with the user) with regard to the entity. (See 0427-0433; [0428] prompt engineering is an essential aspect of working with large language models (e.g., large language model 308), as it provides a way to guide the AI model's responses and ensure that they are accurate, relevant, and appropriate for the intended application. [0429] In general, prompt engineering involves designing and fine-tuning prompts (e.g., formatting script 304) that may be used to train or fine-tune a large language model, such as OpenAI's GPT-3. The prompts (e.g., formatting script 304) can take a variety of forms, including natural language queries, prompts with specific keywords or phrases, or a combination of both. [0430] The goal of prompt engineering is to create a set of prompts (e.g., formatting script 304) that are tailored to the specific use case or application, such as generating conversational responses, answering specific questions, or generating creative writing. By designing prompts (e.g., formatting script 304) that are closely aligned with the intended use case, developers can improve the accuracy and relevance of the model's responses, resulting in more effective and engaging interactions. [0923] Threat mitigation process 10 may associate 2804 the monitored activity (e.g., monitored activity 326) with a user of the computing platform (e.g., computing platform 60), thus defining an associated user (e.g., user 328). [0924] Threat mitigation process 10 may assign 2806 a risk level to the monitored activity (e.g., monitored activity 326) to determine if such monitored activity (e.g., monitored activity 326) is indicative of a security event, wherein the assigned risk level is based, at least in part, upon the associated user (e.g., user 328). Accordingly, if the associated user (e.g., user 328) is the owner of the company, the assigned risk level may be reduced due to the position of user 328. Conversely, if the associated user (e.g., user 328) is a new hire of the company (or someone who has shown questionable judgement in the past), the assigned risk level may be increased.)
Regarding claim 7, Murphy teaches the method of claim 6 as discussed above; Murphy further teaches wherein triggering the AI model to determine whether the threat actor performs the malicious activity with regard to the entity (see mapping above) comprises:
triggering the AI model (0069-0073; identifying potentially malicious/anomalous behavior) to determine that the threat actor performs the malicious activity with regard to the entity as a result of a maliciousness criterion (security event 0072) being satisfied; (See also mapping above + 0070-0072; when monitored activity is suspicious, the AI/ML process begins analyzing data; 0441; A loop facilitates the sequential examination of collected data, enabling the AI system to methodically identify unusual patterns or signatures indicative of malicious activities. [0787] Upon executing 2316 this recommended next step, threat mitigation process 10 may determine that User X is acting in a very suspicious manner. Accordingly, threat mitigation process 10 may automatically perform 2318 one or more investigative operations concerning User X with respect to the security event. For example, threat mitigation process 10 may automatically perform 2318 one or more investigative operations concerning the network usage of User X, the background of User X, the web browsing history of User X, etc. All of this research and investigation may result in threat mitigation process 10 defining the recommended action of disabling all accounts of User X.) ([1164] Additionally, threat mitigation process 10 may provide guidance concerning how rules were applied and what actions may be taken to address the event (e.g., event 62) going forward. For example, this may include a technical breakdown of how the rule was evaluated (e.g., thresholds for failed login attempts, timing constraints, data movement, or system behavior patterns). See also 0923-0924, teaching monitoring of user activity; [0387] Monitoring Network Traffic: Agents may continuously monitor network traffic for signs of unusual or suspicious behavior. This includes analyzing packets, inspecting protocols, and scrutinizing port activity, among other things. [0389] Log Activity: Agents may log network activity, providing a detailed record of traffic patterns, access attempts, and potentially malicious activities.)
and wherein the method further comprises: as a result of the maliciousness criterion being satisfied, triggering execution of a computer-executable instruction to block access of the threat actor to the entity. ([0073] When a potential threat is detected, AI/ML process 56 may generate an alert for cybersecurity analysts to investigate further or, in more advanced setups, trigger automated responses. These could include isolating compromised devices, blocking suspicious IP addresses, or throttling data transfers to prevent data loss. Furthermore, feedback from these events (e.g., whether a detection was accurate or a false positive) may be used to retrain and improve AI/ML models over time, enhancing its precision and adaptability. [0185] Automate Information (a portion of automation): The execution of a single (and possibly simple) action one time, such as the blocking an IP address from accessing computing platform 60 whenever such an attempt is made. [0393] Active Agents: In addition to monitoring, active agents can take predefined actions when a threat is detected, such as blocking traffic, isolating affected network segments, or directly interacting with the threat to mitigate its impact. [0454] Recommended Actions may provide examples of responsive actions that may be implemented (e.g., port blocking/stream shutdown/perpetrator account disablement) to mitigate the negative impact of the security event.)
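Examiner's note: a minimal sketch of the mapped criterion-gated blocking step, assuming a score-threshold maliciousness criterion and a hypothetical firewall interface; neither is asserted to be Murphy's specific implementation.

# Illustrative sketch only; the threshold and function names are hypothetical.

MALICIOUSNESS_THRESHOLD = 0.8  # assumed criterion: model-assigned score at or above this value

def block_threat_actor(actor_ip: str) -> None:
    # Stand-in for pushing a firewall rule (cf. Murphy [0185]: blocking an IP address
    # from accessing the computing platform).
    print(f"deny all inbound traffic from {actor_ip}")

def enforce_criterion(malicious_score: float, actor_ip: str) -> None:
    # As a result of the maliciousness criterion being satisfied, trigger execution of
    # an instruction that blocks the threat actor's access to the entity.
    if malicious_score >= MALICIOUSNESS_THRESHOLD:
        block_threat_actor(actor_ip)

enforce_criterion(0.93, "203.0.113.7")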
Regarding claim 12, Murphy teaches the method of claim 6 as discussed above; Murphy further teaches wherein the AI prompt further comprises logs that are associated with the entity; (mapping above + [0990] Threat mitigation process 10 may process 3006 the consolidated network entity data (e.g., consolidated network entity data 70) to generate analysis data (e.g., analysis data 72) that concerns the event (e.g., event 62) and/or the network entity (e.g., one or more network entities 64). For this example, assume that analysis data (e.g., analysis data 72) concerns one or more network entities 64 (e.g., who is user 42, what is their title, what kind of content do they have access to, how long have they been with the company, do they have any incident history, is this activity normal for them, etc.) and event 62 (e.g., what is being streamed, is it confidential/sensitive, how large is the content, where is the recipient located, is the recipient a known bad actor, etc.). [0991] Accordingly and when processing 3006 the consolidated network entity data (e.g., consolidated network entity data 70) to generate analysis data (e.g., analysis data 72) that concerns the event (e.g., event 62) and/or the network entity (e.g., one or more network entities 64), threat mitigation process 10 may determine 3008 a position and a history of any network user (in this example, user 42) involved in the event (e.g., event 62). For example, if user 42 is a marketing executive and they are streaming the latest marketing video, that may not be concerning. However, if user 42 is a mail room employee and they are streaming the latest technical disclosure video, that may be concerning. [0218] Threat mitigation process 10 may obtain 1552 artifact information (e.g., artifact information 286) concerning the one or more artifacts (e.g., artifacts 250), wherein artifact information 286 may be obtained from information resources included within (or external to) computing platform 60. [0219] For example and when obtaining 1552 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250), threat mitigation process 10 may obtain 1554 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250) from one or more investigation resources (such as third-party resources that may e.g., provide information on known bad actors). [0220] Once the investigation is complete, threat mitigation process 10 may generate 1556 a conclusion (e.g., conclusion 288) concerning the detected security event (e.g., a Denial of Services attack) based, at least in part, upon the detected security event (e.g., a Denial of Services attack), the one or more artifacts (e.g., artifacts 250), and artifact information 286. Threat mitigation process 10 may document 1558 the conclusion (e.g., conclusion 288), report 1560 the conclusion (e.g., conclusion 288) to a third-party (e.g., the user/owner/operator of computing platform 60). Further, threat mitigation process 10 may obtain 1562 supplemental artifacts and artifact information (if needed to further the investigation).)
and wherein triggering the AI model to determine whether the threat actor performs the malicious activity with regard to the entity comprises:
triggering the AI model to compare the profile of the threat actor, (known bad actor) the description of the potentially anomalous event, (security event) and the logs (see mapping above) to determine whether the threat actor performs the malicious activity (conclusion on the bad actor and event) with regard to the entity. ([0147] Further and when executing 912 a remedial action plan, threat mitigation process 10 may autonomously execute 920 a threat mitigation plan (shutting down the stream and closing the port) when e.g., threat mitigation process 10 assigns 908 a “severe” threat level to the above-described security event (e.g., assuming that it is determined that the streaming of the content is very concerning, as the content is high value and the recipient is a known bad actor). [0219] For example and when obtaining 1552 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250), threat mitigation process 10 may obtain 1554 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250) from one or more investigation resources (such as third-party resources that may e.g., provide information on known bad actors). [0220] Once the investigation is complete, threat mitigation process 10 may generate 1556 a conclusion (e.g., conclusion 288) concerning the detected security event (e.g., a Denial of Services attack) based, at least in part, upon the detected security event (e.g., a Denial of Services attack), the one or more artifacts (e.g., artifacts 250), and artifact information 286. Threat mitigation process 10 may document 1558 the conclusion (e.g., conclusion 288), report 1560 the conclusion (e.g., conclusion 288) to a third-party (e.g., the user/owner/operator of computing platform 60). Further, threat mitigation process 10 may obtain 1562 supplemental artifacts and artifact information (if needed to further the investigation).)
Regarding claim 13, Murphy teaches the method of claim 12 as discussed above; Murphy further teaches the method further comprising: selecting the logs from a plurality of logs, which are associated with the entity, ([0971] User Behavior Analytics (UBA) Systems: UBA systems may analyze the behavior patterns of users to detect anomalies. [0978] Data Logs: Logs from applications, systems, and network devices may record events such as connections, transactions, errors, and user actions. Analyzing these logs may be critical for spotting anomalies, investigating incidents, and correlating data across sources to understand attack vectors. [0070] Such an AI/ML process (e.g., AI/ML process 56) may begin with the collection of vast amounts of data from multiple sources within the computer network. This may include logs from firewalls, intrusion detection and prevention systems (IDS/IPS), endpoints, applications, servers, and user activity. This raw data may then be preprocessed to clean and normalize it, followed by feature extraction, wherein relevant characteristics may be identified (e.g., access times, login frequencies, the volume and destination of data transfers, protocol usage, and command sequences).)
as a result of embeddings, which represent the logs, satisfying a representation criterion. ([1066] Once formed, threat mitigation process 10 may effectuate 3222 a query (e.g., query 3280) on at least a portion of the enriched data repository (e.g., enriched data repository 3276) that spans the plurality of technology types (e.g., Splunk and QRadar, Cribl, etc.). Specifically, being the enriched data repository (e.g., enriched data repository 3276) spans the plurality of technology types (e.g., Splunk and QRadar, Cribl, etc.), queries (e.g., query 3280) may be defined that e.g., identify: [1067] all unsuccessful logins for any users of computing platform 60 over the past 24 hours; [1068] all successful logins for user BPM over the past 30 days; and [1069] all downloads of files over 20 mb in size by user JTP. 0977; Security analysts and AI/ML models may query this data to identify trends and detect long-term threats. [0978] Data Logs: Logs from applications, systems, and network devices may record events such as connections, transactions, errors, and user actions. Analyzing these logs may be critical for spotting anomalies, investigating incidents, and correlating data across sources to understand attack vectors. [1070] Referring also to FIG. 49-50, the following discussion concerns the manner in which threat mitigation process 10 may define a query to generate a result set, wherein the size of the result set may be compared to a target result set size so that the query can be broadened (or narrowed) to increase (or decrease) the size of the result set.)
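Examiner's note: the claimed selection of logs whose embeddings satisfy a representation criterion may be illustrated by the following sketch, which assumes cosine similarity against an event embedding as the criterion; the toy embed() function is hypothetical and merely keeps the sketch self-contained.

# Illustrative sketch only; embed() is a hypothetical stand-in for a real embedding model.
import math

def embed(text: str) -> list:
    # Toy deterministic embedding: folds character codes into an 8-dimensional vector.
    vec = [0.0] * 8
    for i, ch in enumerate(text):
        vec[i % 8] += ord(ch) / 1000.0
    return vec

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def select_logs(logs: list, event: str, criterion: float = 0.9) -> list:
    # Keep only the logs whose embeddings satisfy the representation criterion
    # (here assumed to be cosine similarity to the event embedding above a threshold).
    event_vec = embed(event)
    return [log for log in logs if cosine(embed(log), event_vec) >= criterion]

print(select_logs(
    ["failed login for user JTP", "20 mb download by user JTP", "printer back online"],
    "repeated failed logins for user JTP",
))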
Regarding claim 14, Murphy teaches the method of claim 12 as discussed above; Murphy further teaches the method further comprising: selecting the logs from a plurality of logs, (querying log data) which are associated with the entity, (log data including user activity) as a result of the logs being associated with the potentially anomalous event (potentially anomalous events such as repeated failed logins). ([1066] Once formed, threat mitigation process 10 may effectuate 3222 a query (e.g., query 3280) on at least a portion of the enriched data repository (e.g., enriched data repository 3276) that spans the plurality of technology types (e.g., Splunk and QRadar, Cribl, etc.). Specifically, being the enriched data repository (e.g., enriched data repository 3276) spans the plurality of technology types (e.g., Splunk and QRadar, Cribl, etc.), queries (e.g., query 3280) may be defined that e.g., identify: [1067] all unsuccessful logins for any users of computing platform 60 over the past 24 hours; [1068] all successful logins for user BPM over the past 30 days; and [1069] all downloads of files over 20 mb in size by user JTP. 0977; Security analysts and AI/ML models may query this data to identify trends and detect long-term threats. [0978] Data Logs: Logs from applications, systems, and network devices may record events such as connections, transactions, errors, and user actions.)
Regarding claim 15, Murphy teaches the method of claim 12 as discussed above; Murphy further teaches the method further comprising:
determining that the logs comprise an identified event (activity in the logs) that corresponds to the potentially anomalous event (security event) (0069-0073; identifying security events in the security logs)
by comparing (identifying) first embeddings (data in the logs) that represent the logs and a second embedding (0197; Accordingly, security-relevant information that e.g., defines the symptoms of e.g., a Denial of Services attack and security-relevant rules that define the behavior of e.g., a Denial of Services attack may be utilized by threat mitigation process 10 when defining training routine 272.) that represents the potentially anomalous event (0169; 0175; threat definitions; per the mapping above, the system uses definitions and rules (the second embedding) to match the activity in the event logs)
wherein the AI prompt further comprises a statement (examiner notes an AI prompt is a statement; see also 1119-1127 discussing tailoring prompts, as well as 1140-1144) that the identified event in the logs corresponds to the potentially anomalous event; ([0430] The goal of prompt engineering is to create a set of prompts (e.g., formatting script 304) that are tailored to the specific use case or application, such as generating conversational responses, answering specific questions, or generating creative writing. By designing prompts (e.g., formatting script 304) that are closely aligned with the intended use case, developers can improve the accuracy and relevance of the model's responses, resulting in more effective and engaging interactions. [1122] A script/generative AI model pair represents a technique for directing and refining the behavior of large language models to accomplish specific tasks. In this pairing, the generative AI model may serve as the computational engine capable of producing sophisticated and context-aware content, while the prompt may function as a programmable interface that shapes the generative AI model's response by providing it with detailed instructions, structure, and contextual framing. This approach may enable a general-purpose model (which may be capable of responding to a wide range of inputs) to be precisely focused on a particular role, workflow, or domain-specific problem. When the result set (e.g., result set 3452) is appropriately sized for being processed by a generative AI model, threat mitigation process 10 may: [1120] provide 3414 the result set (e.g., result set 3452) to a first prompt/generative AI model pair (e.g., first prompt/generative AI model pair 3466) to generate a first output (e.g., first output 3468); and [1121] provide 3416 the result set (e.g., result set 3452) to at least a second prompt/generative AI model pair (e.g., at least a second prompt/generative AI model pair 3470) to generate at least a second output (e.g., at least a second output 3472).)
and wherein triggering the AI model to determine whether the threat actor performs the malicious activity with regard to the entity comprises:
triggering the AI model to compare (comparing) the profile of the threat actor, (user activity) the description of the potentially anomalous event, (0197; 0069; definitions of threats, predefined rules, and signature-based detection) the logs (log data including user data), and the statement that the identified event in the logs corresponds to the potentially anomalous event to determine whether the threat actor performs the malicious activity with regard to the entity. ([0900] Threat mitigation process 10 may compare 2706 such monitored activity (e.g., monitored activity 326) to the one or more detection rules (e.g., detection rules 324) to determine if such monitored activity (e.g., monitored activity 326) includes suspect activity indicative of a security event. [0364] Here's how these capabilities are being harnessed for network threat detection: [0365] Anomaly Detection: Generative models, such as Generative Adversarial Networks (GANs), can be trained on normal network traffic data to understand what typical network behavior looks like. Once trained, these models can generate new network traffic data that is expected to be similar to the “normal” traffic. By comparing real network traffic to these generated patterns, anomalies that could indicate potential threats, such as DDoS attacks or unauthorized access, can be detected more efficiently. Anomalies stand out because they deviate significantly from the generated “normal” patterns. [0070] Such an AI/ML process (e.g., AI/ML process 56) may begin with the collection of vast amounts of data from multiple sources within the computer network. This may include logs from firewalls, intrusion detection and prevention systems (IDS/IPS), endpoints, applications, servers, and user activity. This raw data may then be preprocessed to clean and normalize it, followed by feature extraction, wherein relevant characteristics may be identified (e.g., access times, login frequencies, the volume and destination of data transfers, protocol usage, and command sequences).)
Regarding claim 16, Murphy teaches the method of claim 6 as discussed above; Murphy further teaches wherein generating the profile comprises:
triggering the AI model to generate the profile (0894-0895; rules defining concerning behavior) by providing a second AI prompt as an input to the AI model; ([0409] Preprocess User Inputs: Clean and structure user queries into a format that the model can more effectively understand and process. This could involve correcting typos, removing unnecessary punctuation, or structuring the input into a more coherent prompt. [0410] Format Model Prompts: Tailor prompts to fit specific use cases or to elicit more accurate responses from the model. This might include adding specific instructions or context to the prompt that guides the model in generating the desired output.)
wherein the second AI prompt comprises at least one of the following: (0427-0432; prompt engineering; 1279-1281; [0409] Preprocess User Inputs: Clean and structure user queries into a format that the model can more effectively understand and process. This could involve correcting typos, removing unnecessary punctuation, or structuring the input into a more coherent prompt. [0410] Format Model Prompts: Tailor prompts to fit specific use cases or to elicit more accurate responses from the model. This might include adding specific instructions or context to the prompt that guides the model in generating the desired output.)
a historical log that indicates behavior of the threat actor, (0895; Accordingly, threat mitigation process 10 may generate 2700 such detection rules (e.g., detection rules 324) that are indicative of a security event based upon historical suspect activity and/or historical security events defined within data repository 312. [0894] Referring also to FIG. 42, threat mitigation process 10 may generate 2700 one or more detection rules (e.g., detection rules 324) that are indicative of a security event, wherein the one or more detection rules are based upon historical suspect activity and/or historical security events.) (0141-0142; 0219-0220; artifact information may include information on known bad actors; [0218] Threat mitigation process 10 may obtain 1552 artifact information (e.g., artifact information 286) concerning the one or more artifacts (e.g., artifacts 250), wherein artifact information 286 may be obtained from information resources included within (or external to) computing platform 60. [0219] For example and when obtaining 1552 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250), threat mitigation process 10 may obtain 1554 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250) from one or more investigation resources (such as third-party resources that may e.g., provide information on known bad actors). [0220] Once the investigation is complete, threat mitigation process 10 may generate 1556 a conclusion (e.g., conclusion 288) concerning the detected security event (e.g., a Denial of Services attack) based, at least in part, upon the detected security event (e.g., a Denial of Services attack), the one or more artifacts (e.g., artifacts 250), and artifact information 286. Threat mitigation process 10 may document 1558 the conclusion (e.g., conclusion 288), report 1560 the conclusion (e.g., conclusion 288) to a third-party (e.g., the user/owner/operator of computing platform 60). Further, threat mitigation process 10 may obtain 1562 supplemental artifacts and artifact information (if needed to further the investigation).)
an intelligence report that indicates a method used by the threat actor to perform a malicious attack, ([0219] For example and when obtaining 1552 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250), threat mitigation process 10 may obtain 1554 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250) from one or more investigation resources (such as third-party resources that may e.g., provide information on known bad actors). [0220] Once the investigation is complete, threat mitigation process 10 may generate 1556 a conclusion (e.g., conclusion 288) concerning the detected security event (e.g., a Denial of Services attack) based, at least in part, upon the detected security event (e.g., a Denial of Services attack), the one or more artifacts (e.g., artifacts 250), and artifact information 286. Threat mitigation process 10 may document 1558 the conclusion (e.g., conclusion 288), report 1560 the conclusion (e.g., conclusion 288) to a third-party (e.g., the user/owner/operator of computing platform 60). Further, threat mitigation process 10 may obtain 1562 supplemental artifacts and artifact information (if needed to further the investigation).)
or information regarding a historical attack associated with the threat actor; (0895; Accordingly, threat mitigation process 10 may generate 2700 such detection rules (e.g., detection rules 324) that are indicative of a security event based upon historical suspect activity and/or historical security events defined within data repository 312. [0894] Referring also to FIG. 42, threat mitigation process 10 may generate 2700 one or more detection rules (e.g., detection rules 324) that are indicative of a security event, wherein the one or more detection rules are based upon historical suspect activity and/or historical security events.)
and wherein the second AI prompt requests generation of the profile using a specified structure. ([0407] A formatting script (e.g., formatting script 304) may include a set of instructions or codes configured to structure, preprocess, or format data (input or output) in a way that's optimal for interaction with or processing by a large language model. This can include tasks like cleaning data, structuring prompts, or formatting the model's outputs for specific applications. The exact nature of formatting script 304 can vary widely depending on the requirements of the task at hand and the specifics of the model's interface. [0410] Format Model Prompts: Tailor prompts to fit specific use cases or to elicit more accurate responses from the model. This might include adding specific instructions or context to the prompt that guides the model in generating the desired output. [0411] Post-Process Model Outputs: Clean or format the text generated by the model to meet user expectations or application requirements. This could involve correcting grammar, structuring the output into a specific format (e.g., HTML, JSON), or truncating responses to fit length constraints. [0412] Handle Special Formatting: For certain applications, such as code generation or creating structured data from unstructured text, the script might include rules or templates to format the output in a specific syntax or schema.)
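Examiner's note: the "specified structure" limitation may be illustrated by a second prompt that embeds a JSON schema, consistent with Murphy's [0412] discussion of formatting outputs into a specific syntax or schema; the schema fields and prompt wording below are hypothetical.

# Illustrative sketch only; the schema and prompt wording are hypothetical.
import json

PROFILE_SCHEMA = {
    "actor": "string",
    "known_behaviors": ["string"],
    "historical_attacks": ["string"],
}

def build_profile_prompt(historical_log: str, intel_report: str) -> str:
    # The second AI prompt supplies a historical log and an intelligence report and
    # requests generation of the profile using a specified structure (JSON schema).
    return (
        "Generate a threat-actor profile as JSON that matches this schema:\n"
        f"{json.dumps(PROFILE_SCHEMA, indent=2)}\n"
        f"Historical log:\n{historical_log}\n"
        f"Intelligence report:\n{intel_report}"
    )

print(build_profile_prompt(
    "2024-01-02 03:04 off-hours bulk download by ActorX",
    "ActorX exfiltrates data over TLS to rented VPS infrastructure",
))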
Regarding claim 17, Murphy teaches the method of claim 6 as discussed above; Murphy further teaches the method further comprising:
in response to providing the AI prompt as the input to the AI model, (mapping claim 6) receiving an AI-generated request from the AI model, the AI-generated request asking for feedback (threat mitigation requesting feedback) regarding a determination made by the AI model that the threat actor performs the malicious activity (reports such as the security event reports of 0146-0147, which identify known bad actors) with regard to the entity; ([0572] Threat mitigation process 10 may prompt 2108 a user (e.g., analyst 256) to provide feedback concerning the (above-illustrated) summarized human-readable report (e.g., summarized human-readable report 306). And (if provided), threat mitigation process 10 may receive 2110 feedback concerning the summarized human-readable report (e.g., summarized human-readable report 306) from a user (e.g., analyst 256). For example, the user (e.g., analyst 256) may be asked to give “thumbs-up/thumbs-down” feedback concerning the quality of the (above-illustrated) summarized human-readable report (e.g., summarized human-readable report 306). In the event that the feedback provided is e.g., marginal or poor, threat mitigation process 10 may ask the user (e.g., analyst 256) to provide additional commentary, examples of which may include but are not limited to: “the summary is too long”, “the summary is too short”, “I would appreciate a more detailed roadmap for remediation”, “more concise language would be helpful”, etc. And (if feedback is provided), threat mitigation process 10 may utilize 2112 the feedback to revise the above-described formatting script (e.g., formatting script 304) so that the (above-illustrated) summarized human-readable report (e.g., summarized human-readable report 306) may be tailored based upon such feedback. [0749] As discussed above, threat mitigation process 10 may present 2208 the (above-illustrated) summarized human-readable report (e.g., summarized human-readable report 306) to a user (e.g., analyst 256) and may prompt 2210 the user (e.g., analyst 256) to provide feedback concerning the (above-illustrated) summarized human-readable report (e.g., summarized human-readable report 306). [0750] Threat mitigation process 10 may receive 2212 feedback concerning the (above-illustrated) summarized human-readable report (e.g., summarized human-readable report 306) from a user (e.g., analyst 256) and may utilize 2214 the feedback to revise the above-described formatting script (e.g., formatting script 304) so that the (above-illustrated) summarized human-readable report (e.g., summarized human-readable report 306) may be tailored based upon such feedback. [0073] When a potential threat is detected, AI/ML process 56 may generate an alert for cybersecurity analysts to investigate further or, in more advanced setups, trigger automated responses. These could include isolating compromised devices, blocking suspicious IP addresses, or throttling data transfers to prevent data loss. Furthermore, feedback from these events (e.g., whether a detection was accurate or a false positive) may be used to retrain and improve AI/ML models over time, enhancing its precision and adaptability.)
in response to receiving the AI-generated request from the AI model,(feedback request from the threat mitigation process) providing a representation of the AI-generated request to a security analyst via a user interface; (0572 + mapping above; prompting the analyst with a feedback request)
receiving a response to the representation of the AI-generated request from the security analyst, the response comprising the feedback that is requested by the AI model; (0572 + mapping above; the analyst submits the feedback)
and in response to receiving the response from the security analyst, providing the feedback to the AI model. (0572 + mapping above; the feedback is used by the threat mitigation process)
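As a sketch of the feedback loop mapped above (Murphy [0572]): the AI-generated request is surfaced to an analyst through a user interface, and the analyst's response is returned to the model. The function names and the stub standing in for the analyst-facing UI are hypothetical.

def relay_feedback_request(ai_request: str, ask_analyst) -> str:
    """Present an AI-generated feedback request to a security analyst and
    return the analyst's response so it can be handed back to the model."""
    response = ask_analyst(ai_request)  # UI step: show request, collect answer
    return response                     # routed back to the AI model

# Usage with a stub standing in for the analyst-facing user interface:
feedback = relay_feedback_request(
    "Was the determination that user 42 performs malicious activity correct? (y/n)",
    ask_analyst=lambda question: "y",
)
print(f"Feedback routed to model: {feedback!r}")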
Regarding claim 18, Murphy teaches the method of claim 6 as discussed above. Murphy further teaches wherein triggering the AI model to determine whether the threat actor performs the malicious activity with regard to the entity comprises:
triggering the AI model to generate a report,(event report; 1202; report generation AIs) which indicates whether the threat actor (bad actor; artifacts (gathered data) includes 0158-0159 user behavior analytics; 0961; network user behavior; 0968 providing the user behavior data to the threat mitigation process) performs the malicious activity (0386; reporting on malicious activities) with regard to the entity;(0990; event directed at a network entity) ([0073] When a potential threat is detected, AI/ML process 56 may generate an alert for cybersecurity analysts to investigate further or, in more advanced setups, trigger automated responses. These could include isolating compromised devices, blocking suspicious IP addresses, or throttling data transfers to prevent data loss. Furthermore, feedback from these events (e.g., whether a detection was accurate or a false positive) may be used to retrain and improve AI/ML models over time, enhancing its precision and adaptability. [0146] Further and when executing 912 a remedial action plan, threat mitigation process 10 may generate 916 a security event report (e.g., security event report 254) based, at least in part, upon the artifacts (e.g., artifacts 250) gathered 904; and provide 918 the security event report (e.g., security event report 254) to an analyst (e.g., analyst 256) for further review when e.g., threat mitigation process 10 assigns 908 a “moderate” threat level to the above-described security event (e.g., assuming that it is determined that while the streaming of the content is concerning, the content is low value and the recipient is not a known bad actor). [0147] Further and when executing 912 a remedial action plan, threat mitigation process 10 may autonomously execute 920 a threat mitigation plan (shutting down the stream and closing the port) when e.g., threat mitigation process 10 assigns 908 a “severe” threat level to the above-described security event (e.g., assuming that it is determined that the streaming of the content is very concerning, as the content is high value and the recipient is a known bad actor; [0990] Threat mitigation process 10 may process 3006 the consolidated network entity data (e.g., consolidated network entity data 70) to generate analysis data (e.g., analysis data 72) that concerns the event (e.g., event 62) and/or the network entity (e.g., one or more network entities 64).; [0961] A network user is an individual or system that accesses resources within a computer platform (e.g., computing platform 60), typically authenticated through credentials. Users interact with computing devices and network services to perform tasks. In intrusion detection, monitoring user behavior is critical. Anomalies such as accessing sensitive files outside of business hours, logging in from unusual locations, or repeated failed login attempts could indicate compromised accounts or insider threats. [1083] In response to receiving 3300 such an alert (e.g., alert 3350) concerning an event (e.g., event 62) within a computing platform (e.g., computing platform 60), threat mitigation process 10 may define a query (e.g., query 3352) for researching the alert (e.g., alert 3350). For example, assume that the alert (e.g., alert 3350) concerns an event (e.g., event 62) in which a user (e.g., 42) is e.g., downloading a large quantity of files to an IP address in Russia, wherein these files are highly confidential and are being downloaded in the middle of the night. 
Accordingly and in response to this alert (e.g., alert 3350), threat mitigation process 10 may to define a query (e.g., query 3352) that inquires into the specifics of the email traffic of the user (e.g., user 42) who is the subject of this alert (e.g., alert 3350). The result set (e.g., result set 3354) generated by this query (e.g., query 3352) may be provided to a generative AI model (e.g., generative AI model 3356) for subsequent processing. Accordingly, threat mitigation process 10 may be configured to ensure that the result set (e.g., result set 3354) produced in response to the query (e.g., query 3352) is sized so that it is processable (i.e., not too big and not too small) by the generative AI model (e.g., generative AI model 3356).).
and wherein the method further comprises:
as a result of the AI model generating the report, (see above) receiving an assessment of the report from a user, (feedback) the assessment indicating whether the threat actor (bad actor, see mapping above) performs the malicious activity with regard to the entity from a perspective of the user; (mapping above + [0073] When a potential threat is detected, AI/ML process 56 may generate an alert for cybersecurity analysts to investigate further or, in more advanced setups, trigger automated responses. These could include isolating compromised devices, blocking suspicious IP addresses, or throttling data transfers to prevent data loss. Furthermore, feedback from these events (e.g., whether a detection was accurate or a false positive) may be used to retrain and improve AI/ML models over time, enhancing its precision and adaptability.)
and training the AI model using the assessment. ([0073] When a potential threat is detected, AI/ML process 56 may generate an alert for cybersecurity analysts to investigate further or, in more advanced setups, trigger automated responses. These could include isolating compromised devices, blocking suspicious IP addresses, or throttling data transfers to prevent data loss. Furthermore, feedback from these events (e.g., whether a detection was accurate or a false positive) may be used to retrain and improve AI/ML models over time, enhancing its precision and adaptability.)
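A minimal sketch of the assessment-driven retraining mapped above (Murphy [0073]); the class and field names are hypothetical, and the accumulated pairs would feed whatever training routine is actually used:

from dataclasses import dataclass, field

@dataclass
class AssessmentStore:
    """Accumulate analyst assessments of AI-generated reports as labeled
    examples for a later retraining pass (cf. Murphy [0073])."""
    examples: list = field(default_factory=list)

    def record(self, report_features: dict, analyst_says_malicious: bool):
        # Each assessment becomes a (features, label) training pair.
        self.examples.append((report_features, analyst_says_malicious))

    def training_batch(self):
        X = [features for features, _ in self.examples]
        y = [label for _, label in self.examples]
        return X, y

store = AssessmentStore()
store.record({"failed_logins": 14, "bulk_download": True}, True)
store.record({"failed_logins": 1, "bulk_download": False}, False)
X, y = store.training_batch()  # passed to the next retraining pass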
Regarding claim 20, the claim inherits the same rejection as claim 1 above for reciting similar limitations in the form of a system claim; Murphy teaches a system (0009+0059).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 3-4 and 8-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Murphy et al. (US 20250350628 A1) in view of Urmanov et al. (US 20250392607 A1).
Regarding claim 8, Murphy teaches the method of claim 6 as discussed above. Murphy does not explicitly teach wherein triggering the AI model to determine whether the threat actor performs the malicious activity with regard to the entity comprises: triggering the AI model to determine that the threat actor performs the malicious activity with regard to the entity as a result of a similarity between the potentially anomalous event and the profile being greater than or equal to a similarity threshold.
In an analogous art, Urmanov teaches wherein triggering the AI model to determine whether the threat actor performs the malicious activity with regard to the entity comprises: triggering the AI model (0073-0074; behavior and activity characterization models) to determine that the threat actor performs the malicious activity (anomalous activity modeling) with regard to the entity as a result of a similarity between the potentially anomalous event (anomalous activity modeling) and the profile (user account) being greater than or equal to a similarity threshold (satisfy a similarity threshold). ([0046] At block 220, activity characterization method 200 predicts activity of one or more user accounts to be non-conformant (i.e., anomalous or deviant). The prediction is based on other accounts in a behavioral group (to which the one or more user accounts belong) satisfying a threshold for similarity with respect to the one or more user accounts. The predictions are made based on the activity modeling results from block 215. In one embodiment, at block 220, activity characterization method predicts whether activity of one or more individual user accounts is conformant or non-conformant with respect to the assigned behavioral groups for the individual user account. This determination is based on similarity or dissimilarity of the activity of the individual account to the activities of other user accounts in the assigned behavioral group. In one embodiment, activity characterization method 200 predicts conformance or non-conformance by generating per-user activity vectors, determining similarities of activity of behavioral group members with respect to individual users, and comparing the similarities with a threshold value for similarity that discriminates between conformant and non-conformant activity.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Murphy to include an AI model that compares anomalous behavior to account activity to determine similarity, as taught by Urmanov.
The suggestion/motivation for doing so is to improve user and entity behavior analytics (Urmanov, 0001-0002).
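The claimed comparison reduces to a similarity test against a threshold. A minimal sketch follows, assuming vector representations of the event and the profile; the cosine metric and the 0.5 value (echoing Urmanov [0049]) are illustrative choices, not limitations of either reference.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def event_matches_profile(event_vec, profile_vec, threshold=0.5):
    """Flag the event when its similarity to the threat-actor profile is
    greater than or equal to the threshold (the claim 8 comparison)."""
    return cosine_similarity(event_vec, profile_vec) >= threshold

print(event_matches_profile([0.9, 0.1, 0.8], [0.8, 0.2, 0.7]))  # True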
Regarding claim 9, Murphy in view of Urmanov teaches the method of claim 8 as discussed above. Urmanov further teaches wherein the profile (user account) indicates a plurality of behaviors (anomalous or deviant) that the threat actor is known to use; (mapping above + [0049] And, in one embodiment, activity characterization system 100 compares the aggregate similarity value that resulted from the TPA similarity analysis to a threshold. The threshold determines whether the activity of the user accounts is non-conformant (anomalous or deviant) with the behavioral group assigned to the user account. For example, an aggregate TPA similarity of other user accounts in the behavioral group with respect to the individual user account that is in excess of 0.5 may be used as a threshold to determine non-conformance of the individual user account. Satisfying the threshold indicates that the activities of the other accounts in the behavioral group are substantially more similar to themselves than they are to the activity of the individual user account, indicating that the individual user account is an outlier that does not conform with the behavioral group.)
and wherein triggering the AI model to determine whether the threat actor performs the malicious activity with regard to the entity comprises:
triggering the AI model (0073-0074; AI/ML models) to determine whether the threat actor (user/actor, see Fig 9B) performs the malicious activity (0055; malicious activity) with regard to the entity (0015; cloud application and services) by taking into consideration an extent to which the potentially anomalous event (non-conformant/anomalous/malicious behavior) corresponds to the plurality of behaviors (Claim 8 mapping + mapping above; user-behavior). (0046; user account behavior determined to be anomalous or deviant; [0055] Non-conformant user activity is potentially malicious, and is reported by the electronic alert to UEBA decisioning processes. And, based on the two-phase analysis, the non-conformant activity may be labeled either anomalous or deviant, indicating a higher or lower extent of non-conformance, respectively. An anomaly—indicating gross or substantial non-conformance with expected activity—is detectable in the first phase, group level analysis. A deviance—indicating subtle or minor non-conformance with expected activity—is detectable in the second phase, user-level analysis. The label indicating the extent of non-conformance may also be reported by the electronic alert to UEBA decisioning processes.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Murphy to include an AI model that compares anomalous behavior to account activity to determine similarity, as taught by Urmanov.
The suggestion/motivation for doing so is to improve user and entity behavior analytics (Urmanov, 0001-0002).
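Urmanov's group-conformance test ([0049]) can be sketched as comparing within-group similarity against similarity to the individual account. The aggregate used below is a simple mean and may differ from the exact TPA computation in Urmanov; all names and values are illustrative.

import math
import statistics

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def is_nonconformant(individual, group, threshold=0.5):
    """An account is an outlier when group members are substantially more
    similar to one another than to the individual (cf. Urmanov [0049])."""
    within = statistics.mean(
        cosine(group[i], group[j])
        for i in range(len(group)) for j in range(i + 1, len(group))
    )
    to_individual = statistics.mean(cosine(individual, g) for g in group)
    return (within - to_individual) > threshold

group = [[1.0, 0.0], [0.9, 0.1], [0.95, 0.05]]
print(is_nonconformant([0.0, 1.0], group))  # True: a clear outlier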
Regarding claim 10, Murphy in view of Urmanov teaches the method of claim 9 as discussed above. Murphy further teaches wherein generating the profile of the threat actor comprises:
identifying the plurality of behaviors by analyzing embeddings (0070; data in the logs are essentially embeddings) that represent logs associated with the entity, (0955; threat event concerning a network entity) wherein the plurality of behaviors (activity/behavior/event) are identified as a result of events, (0957; multiple failed logins suggesting an attack; [0070] Such an AI/ML process (e.g., AI/ML process 56) may begin with the collection of vast amounts of data from multiple sources within the computer network. This may include logs from firewalls, intrusion detection and prevention systems (IDS/IPS), endpoints, applications, servers, and user activity. This raw data may then be preprocessed to clean and normalize it, followed by feature extraction, wherein relevant characteristics may be identified (e.g., access times, login frequencies, the volume and destination of data transfers, protocol usage, and command sequences).)
which are indicated by the embeddings (data in the logs are essentially embeddings), occurring more than a threshold number of times (threshold number of failed login attempts) during a period of time (timing constraints). ([1164] Additionally, threat mitigation process 10 may provide guidance concerning how rules were applied and what actions may be taken to address the event (e.g., event 62) going forward. For example, this may include a technical breakdown of how the rule was evaluated (e.g., thresholds for failed login attempts, timing constraints, data movement, or system behavior patterns). Additionally, the system may recommend follow-up actions, examples of which may include but are not limited to: applying patches, isolating affected systems, notifying stakeholders, or updating access controls.)
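The claim 10 mapping amounts to counting event types over a time window and keeping those that exceed a threshold. A minimal sketch follows; the threshold, window, and event names are illustrative.

from collections import Counter
from datetime import datetime, timedelta

def behaviors_from_logs(events, window_start, window_len, min_count):
    """Return event types occurring more than `min_count` times within the
    window; each such type is treated as an identified behavior."""
    window_end = window_start + window_len
    counts = Counter(
        etype for etype, ts in events if window_start <= ts < window_end
    )
    return [etype for etype, n in counts.items() if n > min_count]

t0 = datetime(2025, 1, 1, 2, 0)  # overnight activity, per Murphy's example
logs = [("failed_login", t0 + timedelta(minutes=i)) for i in range(6)]
logs.append(("file_download", t0 + timedelta(minutes=1)))
print(behaviors_from_logs(logs, t0, timedelta(hours=1), min_count=5))
# ['failed_login']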
Regarding claim 11, Murphy in view of Urmanov teaches the method of claim 9 as discussed above. Murphy further teaches wherein the AI prompt ([1281] Ultimately, prompts (e.g., prompt 3660) may be the creative and functional blueprint of a generative AI model (e.g., one of generative AI models 3658), as they convert user goals into machine-understandable instructions, unlocking the capacity of the generative AI model (e.g., one or generative AI models 3658) to generate outputs that are coherent, relevant, and contextually aligned. As such, prompt engineering (i.e., the practice of crafting precise, effective prompts) has become an important skill for maximizing the power and utility of generative AI models (e.g., generative AI models 3658).) further comprises a plurality of thresholds regarding the plurality of behaviors; ([1164] Additionally, threat mitigation process 10 may provide guidance concerning how rules were applied and what actions may be taken to address the event (e.g., event 62) going forward. For example, this may include a technical breakdown of how the rule was evaluated (e.g., thresholds for failed login attempts, timing constraints, data movement, or system behavior patterns); 1212; 1214; quality threshold)
and wherein triggering the AI model to determine whether the threat actor performs the malicious activity with regard to the entity comprises:
triggering the AI model to compare the description of the potentially anomalous event (user log, see mapping above) and the plurality of thresholds (1164; plurality of thresholds) and to determine whether the threat actor performs the malicious activity with regard to the entity (see mapping in preceding claims) by taking into consideration an extent to which the potentially anomalous event satisfies the plurality of thresholds (if the number of failed login attempts, data movement, or behavior patterns satisfies the thresholds). ([1164] Additionally, threat mitigation process 10 may provide guidance concerning how rules were applied and what actions may be taken to address the event (e.g., event 62) going forward. For example, this may include a technical breakdown of how the rule was evaluated (e.g., thresholds for failed login attempts, timing constraints, data movement, or system behavior patterns). Additionally, the system may recommend follow-up actions, examples of which may include but are not limited to: applying patches, isolating affected systems, notifying stakeholders, or updating access controls.)
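A sketch of an AI prompt that carries a plurality of thresholds alongside the event description, as the claim 11 mapping contemplates; the threshold names and numeric values are hypothetical.

# Hypothetical thresholds; Murphy [1164] mentions thresholds for failed
# logins, timing constraints, data movement, and behavior patterns.
THRESHOLDS = {
    "failed_login_attempts": 5,
    "data_moved_mb": 500,
    "off_hours_logins": 3,
}

def build_threshold_prompt(event_description: str) -> str:
    """Compose a prompt asking the model to judge the event against the
    stated thresholds and report which ones it exceeds."""
    rules = "\n".join(f"- {name}: {limit}" for name, limit in THRESHOLDS.items())
    return (
        "Assess whether the event below indicates malicious activity,\n"
        "considering the extent to which it satisfies these thresholds:\n"
        f"{rules}\nEvent: {event_description}"
    )

print(build_threshold_prompt(
    "user 42 made 14 failed logins, then moved 2,000 MB to an external IP"
))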
Regarding claim 3, the claim inherits the same rejection as claim 10 above for reciting similar limitations in the form of a system claim; Murphy teaches a system (0009+0059).
Regarding claim 4, the claim inherits the same rejection as claim 11 above for reciting similar limitations in the form of a system claim; Murphy teaches a system (0009+0059).
Claim(s) 5 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Murphy et al. (US 20250350628 A1) in view of Bazalgette et al. (US 20240333743 A1).
Regarding claim 19, Murphy teaches the method of claim 6 as discussed above. Murphy further teaches wherein generating the profile of the threat actor comprises:
generating a plurality of profiles of a plurality of threat actors (collecting data about network users including username, userID, IP address information, etc.) using information that describes a plurality of behaviors of the plurality of threat actors, (log data describing events and traffic as well as [0987] Accordingly, assuming that the network entity (e.g., one or more network entities 64) is the user (e.g., user 42), threat mitigation process 10 may obtain 3002 entity data for the network entity (e.g., one or more network entities 64) from a plurality of data sources (e.g., data sources 66), wherein examples include but are not limited to: content delivery network systems; database activity monitoring systems; user behavior analytics systems; mobile device management systems; identity and access management systems; domain name server systems; antivirus systems; operating systems; data lakes; data logs; security-relevant software applications; security-relevant hardware systems; security information and event management (SIEM) systems; and resources external to the computing platform (e.g., computing platform 60).)
the plurality of threat actors comprising the threat actor (information is collected on all network users, 0990, 0998 known bad actors; 0070; collecting and cleaning data including user activity and intrusion activity; 0072; They can identify a wide range of security events, such as attempts at unauthorized access, insider threats, phishing attacks, data exfiltration, lateral movement within the network, and signs of malware or ransomware. For instance, an AI/ML model may detect that a user is accessing files at unusual hours or transferring unusually large amounts of data to an external server, which is a behavior that might be missed by traditional tools; [0234] Naturally, the subject matter of these individual data fields may vary depending upon the type of information available via these security-relevant subsystems (e.g., security-relevant subsystem 1650, security-relevant subsystem 1652 and security-relevant subsystem 1654). As (in this example) these are security-relevant subsystems, the information available from these security-relevant subsystems concerns the security of computing platform 60 and/or any security events (e.g., access auditing; anomalies; authentication; denial of services; exploitation; malware; phishing; spamming; reconnaissance; and/or web attack) occurring therein. For example, some of these data fields may concern e.g., user names, user IDs, device locations, device types, device IP addresses, source IP addresses, destination IP addresses, port addresses, deployed operating systems, utilized bandwidth, etc. [0242] As discussed above, data field 1686 within unified platform 290 (e.g., a platform effectuated by threat mitigation process 10) concerns a user ID (and is entitled USER_ID). For this example, assume that: [0243] data field 1656 within security-relevant subsystem 1650 also concerns a user ID and is entitled USER; [0244] data field 1666 within security-relevant subsystem 1652 also concerns a user ID and is entitled ID; and [0245] data field 1676 within security-relevant subsystem 1654 also concerns a user ID and is entitled USR_ID.; [0961] A network user is an individual or system that accesses resources within a computer platform (e.g., computing platform 60), typically authenticated through credentials. Users interact with computing devices and network services to perform tasks. In intrusion detection, monitoring user behavior is critical. Anomalies such as accessing sensitive files outside of business hours, logging in from unusual locations, or repeated failed login attempts could indicate compromised accounts or insider threats. [0219] For example and when obtaining 1552 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250), threat mitigation process 10 may obtain 1554 artifact information 286 concerning the one or more artifacts (e.g., artifacts 250) from one or more investigation resources (such as third-party resources that may e.g., provide information on known bad actors).)
Murphy does not explicitly teach and wherein triggering the AI model to determine whether the threat actor performs the malicious activity with regard to the entity comprises:
triggering the AI model to rank the plurality of profiles by assigning a plurality of ranks to the plurality of profiles based at least on a plurality of extents to which the plurality of profiles correspond to the potentially anomalous event by providing the AI prompt, which comprises the plurality of profiles and the description of the potentially anomalous event, as the input to the AI model, the AI prompt further requesting that the plurality of profiles be ranked with regard to the potentially anomalous event.
In an analogous art, Bazalgette teaches wherein triggering the AI model to determine whether the threat actor performs the malicious activity with regard to the entity comprises:
triggering the AI model to rank the plurality of profiles (0147; 0215-0222; teaching behavior profiles) by assigning a plurality of ranks to the plurality of profiles (ranking the profiles) based at least on a plurality of extents to which the plurality of profiles correspond to the potentially anomalous event (known threat actors) (0079; anomaly detection and deviation based on user behavior; 0224; The profiles can be matched up based on their characteristics to see if they match up to known past threat actors, and how similar they are, and if based on their similarity, raise up the rank of how threatening they are; 0147)
by providing the AI prompt, (0243; input data representative of one or more entities; [0040] Note, a data analysis process can be algorithms/scripts written by humans to perform their function discussed herein; and can in various cases use AI classifiers as part of their operation.) which comprises the plurality of profiles (0243; input data representative of one or more entities and mapping above) and the description of the potentially anomalous event, (0030; 0038; 0040; potentially anomalous behavior) as the input to the AI model, (AI classifiers) the AI prompt further requesting that the plurality of profiles be ranked with regard to the potentially anomalous event. (0224; The profiles can be matched up based on their characteristics to see if they match up to known past threat actors, and how similar they are, and if based on their similarity, raise up the rank of how threatening they are; 0147; [0040] Note, a data analysis process can be algorithms/scripts written by humans to perform their function discussed herein; and can in various cases use AI classifiers as part of their operation.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Murphy to include ranking profiles based on AI processing of profile data, behavior, and log data, as taught by Bazalgette.
The suggestion/motivation for doing so is to improve cyber-security (Bazalgette, 0004).
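The ranking step can be sketched as scoring each profile against the event and assigning ranks in score order (cf. Bazalgette [0224]); the profile fields, scores, and scoring function below are hypothetical.

def rank_profiles(profiles, score_against_event):
    """Assign rank 1 to the profile that corresponds most strongly to the
    potentially anomalous event, rank 2 to the next, and so on."""
    ordered = sorted(profiles, key=score_against_event, reverse=True)
    return {p["name"]: rank for rank, p in enumerate(ordered, start=1)}

profiles = [
    {"name": "actor_A", "similarity_to_event": 0.91},
    {"name": "actor_B", "similarity_to_event": 0.35},
    {"name": "actor_C", "similarity_to_event": 0.62},
]
print(rank_profiles(profiles, lambda p: p["similarity_to_event"]))
# {'actor_A': 1, 'actor_C': 2, 'actor_B': 3}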
Regarding claim 5, the claim inherits the same rejection as claim 19 above for reciting similar limitations in the form of a system claim; Murphy teaches a system (0009+0059).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDERRAHMEN H CHOUAT whose telephone number is (571)431-0695. The examiner can normally be reached on Mon-Fri from 9AM to 5PM PST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Parry, can be reached at telephone number 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Abderrahmen Chouat
Examiner
Art Unit 2451
/Chris Parry/Supervisory Patent Examiner, Art Unit 2451