Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office Action is in response to an amendment received on 12/04/2025. In the amendment, claims 1, 8, and 20 have been amended. Claims 2-7 and 9-19 remain as originally filed. No claims have been cancelled and no new claims have been added.
Claims 1-20 are pending in this application and have been examined.
Response to Arguments
Claim Rejections – 35 USC § 103
Applicant’s arguments, filed 12/04/2025, with respect to the rejection(s) of claim(s) under 35 USC § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of the amendments to the claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 4-10, 12-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Thomas et al. (US 20250141896 A1), hereinafter referred to as Thomas’96, in view of Thomas et al. (WO 2022087510 A1), hereinafter referred to as Thomas’10, and further in view of Milden et al. (US 20240048565 A1), hereinafter referred to as Milden.
Regarding claim 1, Thomas’96 discloses:
A computer program product for actively testing security services for an enterprise network, the computer program product comprising computer executable code embodied in a non-transitory computer readable medium that, when executing on one or more computing devices, causes the one or more computing devices to perform the steps of:
transmitting a security update from a threat management facility to the local security agent (([0116] The rules can also define different actions to be taken (e.g., automatically by the threat rule system 602 or the threat intelligence system 102, manually by a relevant user in the enterprise network 104) in response to being triggered … The threat intelligence system 102 and/or the threat rule system 602 can update this rule so that the rule is triggered if network traffic now contains signatures $s25, $s29, $s0, $s0′, and/or $s4′. This update can be determined by the threat intelligence system 102 based on processing the network traffic and/or other received information and determining that the rule is outdated or otherwise not triggering in response to new variations of the particular malware family A; [0117] The updated and/or generated rule(s) can be transmitted from the threat intelligence system 102 to the threat rule system 602 in block D (616))), wherein the security update includes:
a detection rule for the local security agent, the detection rule identified as a test rule (i.e., synthetic events comprise test rules) ([0108] The rule evaluations may be performed to determine whether the threat detection rules 520A-N are firing correctly, responding to known, adapted, and/or new threats, being used, outdated, not responding to new variations of a threat action/actor, etc. The engine 508 may generate one or more suggested improvements and/or tasks for updating, modifying, or otherwise improving one or more of the tested rules 520A-N. Refer to FIGS. 6, 7A, 7B, 8, 9, 10A, and 10B for further discussion about evaluating and testing the threat detection rules 520A-N; [0145] FIG. 8 is a conceptual diagram of a system 800 for evaluating and validating an enterprise's threat intelligence system using synthetic network events. The system 800 can be used as part of evaluating threat detection rules in the process 400 of FIGS. 4A, 4B, and 4C; [0146] In the system 800, the threat intelligence system 102 can communicate, via the network(s) 112, with the rules data store 108, one or more network devices 806A-N (which can be part of the enterprise's internal network 104 described herein)), and
a trigger for the detection rule, the trigger configured to cause a detection by the local security agent when applying the detection rule ([0012] FIG. 6 is a conceptual diagram of a system 600 for testing and evaluating threat detection rule performance. The system 600 can be implemented during runtime to determine effectiveness of rule execution in response to threat actions/actors of the enterprise's internal network 104; [0116] Sometimes, updating existing rules can include adding one or more signatures or signature sequences from the received dictionary or assessed network traffic to a trigger condition in the rules. For example, a rule for detecting malware in a family A can be triggered if network traffic or another file contains signatures $s25 and/or $s29), and
the trigger being free from malware requiring remediation of the endpoint ([0147] The synthetic handler sub-system 804 can generate at least one synthetic network event in block A (810). The synthetic event can replicate an actual event that can cause triggering of a rules engine in the threat intelligence system 102. Unlike the actual event, the synthetic event can include a tag that the alerting sub-system 802 may use in differentiating the synthetic event from actual events);
storing the trigger on the endpoint ([0042] For example, any of the information stored in the data stores 168, 170, 172, and 174 can be fed back to the threat detection system 159 to improve detection of the malicious traffic and/or improve/create threat detection rules that are maintained in the threat detection rules data store 158; [0043] As another example, any of the information stored in the data stores 168, 170, 172, and 174 can be accessed by the threat intelligence system 102, which can be configured to monitor the information and determine opportunities for improvement in the network security system 156);
detecting the trigger with a detection by the local security agent based on the detection rule ([0012] FIG. 6 is a conceptual diagram of a system 600 for testing and evaluating threat detection rule performance. The system 600 can be implemented during runtime to determine effectiveness of rule execution in response to threat actions/actors of the enterprise's internal network 104; [0116] Sometimes, updating existing rules can include adding one or more signatures or signature sequences from the received dictionary or assessed network traffic to a trigger condition in the rules. For example, a rule for detecting malware in a family A can be triggered if network traffic or another file contains signatures $s25 and/or $s29); and
transmitting a notification of the detection to the threat management facility ([0153] Triggered rules can be identified by the threat intelligence system 102. Notification of such triggers can be transmitted to the alerting sub-system 802 (block G, 820)).
Thomas’96 does not disclose:
executing a local security agent on an endpoint; adding the detection rule to a plurality of detection rules used by the local security agent to monitor the endpoint.
However, Thomas’10 discloses:
executing a local security agent on an endpoint ([00209] the system may include a local security agent on a compute instance in an enterprise network and a threat management facility for the enterprise network. The local security agent may be configured, e.g., by computer executable code executing on the compute instance, to generate one or more event vectors each including a collection of events for an entity associated with the compute instance, to locally determine a first risk score based on a first deviation of one of the event vectors to an entity model for the entity associated with the compute instance, and to report each of the event vectors to a remote resource);
adding the detection rule to a plurality of detection rules used by the local security agent to monitor the endpoint ([0074] The policy management facility 112 may include access rules and policies that are distributed to maintain control of access by the compute instances 10-26 to network resources; [0184] the local security agent 1408 may locally analyze events 1406 and/or event vectors 1410 in order to permit suitable prioritization, as well as to support local detection and response to malicious, or potentially malicious activity).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic test network environment of Thomas’96 to include a local security agent on an endpoint configured to execute a rule and/or intervene in response to malicious behaviors that are potentially suitable for intervention, as disclosed by Thomas’10.
The motivation to include the local security agent on the endpoint in the synthetic test network environment is to enhance threat detection capability directly at the endpoint level, as necessary or helpful to investigate or remediate a potential threat.
The combination of Thomas’96 and Thomas’10 fails to disclose:
verifying a security update process of the local security agent by detecting the trigger contained in the security update.
However, Milden discloses:
verifying a security update process of the local security agent by detecting the trigger contained in the security update (Abstract: The example method further includes adjusting the rules under which a network-connected device operates within a given network based on a determination that the network-connected device has performed a requested reconfiguration to reduce the susceptibility of the network-connected device to fraudulent activity; [0065] It will be appreciated that, particularly in situations where the apparatus 200 is capable of receiving and/or processing information in real-time and/or near-real-time, the status of a network-connected device may be periodically checked, and the incentives provided to the network-connected device may be adjusted. For example, the apparatus 200 may subject a given network-connected device to multiple compliance verification procedures over time to confirm that reconfigurations have been maintained. Moreover, in some example implementations, the ability of the apparatus 200 to process information in real-time and/or near-real-time permits the apparatus 200 to adjust the network access rule set and/or other incentives for the network-connected device to reflect changes in the activity patterns in a given area and/or changes in the characteristics of the network-connected device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic test network environment of Thomas’96 and Thomas’10 to include a system and apparatus that performs compliance verification over time, as disclosed by Milden.
The motivation to perform compliance verification over time is to provide information that allows users/administrators to readily ascertain the susceptibility of a given network-connected device to fraudulent activity.
Regarding claim 2, the combination of Thomas’96, Thomas’10 and Milden discloses:
The computer program product of claim 1, further comprising code that performs the step of retrieving the security update with the local security agent during a periodic update initiated by the local security agent or the threat management facility (Thomas’96: [0176] After identifying the flagged event as a failure in block 1012, the computer system can determine whether aggregated information (e.g., intelligence package) about the flagged event is outdated (block 1014) … a frequency of the particular threat occurring/being identified (e.g., the more often the threat is identified, the shorter the period of time, and thus the more frequently the rules may be updated); [0181] In block 1018, if the rule has not been trained/improved using the aggregated information within the predetermined period of time, the computer system can generate a recommendation for improving the rule in block 1020).
Regarding claim 4, the combination of Thomas’96, Thomas’10 and Milden discloses:
The computer program product of claim 1, wherein the detection rule includes a behavioral test (Thomas’10: [00107] In embodiments, events are continuously analyzed against a baseline. The baseline may be adjusted to account for normal behavior. Comparison to baselines may include looking for outliers and anomalies as well as impossible events).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic test network environment of Thomas’96 and Milden to include a local security agent on an endpoint configured to execute a rule and/or intervene in response to malicious behaviors that are potentially suitable for intervention, as disclosed by Thomas’10.
The motivation to include the local security agent on the endpoint in the synthetic test network environment is to enhance threat detection capability directly at the endpoint level, as necessary or helpful to investigate or remediate a potential threat.
Regarding claim 5, the combination of Thomas’96, Thomas’10 and Milden discloses:
The computer program product of claim 1, wherein the detection rule includes a Uniform Resource Locator test (Thomas’10: [0084] In an embodiment, the network access facility 124 may have access to policies that include one or more of a block list, a black list, an allowed list, a white list, an unacceptable network site database, an acceptable network site database, a network site reputation database, or the like of network access locations that may or may not be accessed by the client facility. Additionally, the network access facility 124 may use rule evaluation to parse network access requests and apply policies. The network access rule facility 124 may have a generic set of policies for all compute instances, such as denying access to certain types of websites).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic test network environment of Thomas’96 and Milden to include a local security agent on an endpoint configured to execute a rule and/or intervene in response to malicious behaviors that are potentially suitable for intervention, as disclosed by Thomas’10.
The motivation to include the local security agent on the endpoint in the synthetic test network environment is to enhance threat detection capability directly at the endpoint level, as necessary or helpful to investigate or remediate a potential threat.
Regarding claim 6, the combination of Thomas’96, Thomas’10 and Milden discloses:
The computer program product of claim 1, wherein the endpoint includes a network device in the enterprise network (Thomas’10: [0083] Aspects of the network access facility 124 may be provided, for example, in the security agent of the endpoint 12, in a wireless access point 11, in a firewall 10, as part of application protection 150 provided by the cloud, and so on).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic test network environment of Thomas’96 and Milden to include a local security agent on an endpoint configured to execute a rule and/or intervene in response to malicious behaviors that are potentially suitable for intervention, as disclosed by Thomas’10.
The motivation to include the local security agent on the endpoint in the synthetic test network environment is to enhance threat detection capability directly at the endpoint level, as necessary or helpful to investigate or remediate a potential threat.
Regarding claim 7, the combination of Thomas’96, Thomas’10 and Milden discloses:
The computer program product of claim 6, wherein the network device includes at least one of a router, a switch, a gateway, a firewall, and a wireless access point (Thomas’10: [0083] Aspects of the network access facility 124 may be provided, for example, in the security agent of the endpoint 12, in a wireless access point 11, in a firewall 10, as part of application protection 150 provided by the cloud, and so on).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic test network environment of Thomas’96 and Milden to include a local security agent on an endpoint configured to execute a rule and/or intervene in response to malicious behaviors that are potentially suitable for intervention, as disclosed by Thomas’10.
The motivation to include the local security agent on the endpoint in the synthetic test network environment is to enhance threat detection capability directly at the endpoint level, as necessary or helpful to investigate or remediate a potential threat.
Regarding claim 8, Thomas’96 discloses:
A method for actively testing security services for an enterprise network, the method comprising:
storing a security update on a threat management facility at a location accessible to a plurality of endpoints managed by the threat management facility ([0042] For example, any of the information stored in the data stores 168, 170, 172, and 174 can be fed back to the threat detection system 159 to improve detection of the malicious traffic and/or improve/create threat detection rules that are maintained in the threat detection rules data store 158; [0043] As another example, any of the information stored in the data stores 168, 170, 172, and 174 can be accessed by the threat intelligence system 102, which can be configured to monitor the information and determine opportunities for improvement in the network security system 156), wherein the security update includes:
[[a detection rule for local security agents on the plurality of endpoints]], the detection rule identified as a test rule (i.e., synthetic events comprise test rules) ([0108] The rule evaluations may be performed to determine whether the threat detection rules 520A-N are firing correctly, responding to known, adapted, and/or new threats, being used, outdated, not responding to new variations of a threat action/actor, etc. The engine 508 may generate one or more suggested improvements and/or tasks for updating, modifying, or otherwise improving one or more of the tested rules 520A-N. Refer to FIGS. 6, 7A, 7B, 8, 9, 10A, and 10B for further discussion about evaluating and testing the threat detection rules 520A-N; [0145] FIG. 8 is a conceptual diagram of a system 800 for evaluating and validating an enterprise's threat intelligence system using synthetic network events. The system 800 can be used as part of evaluating threat detection rules in the process 400 of FIGS. 4A, 4B, and 4C; [0146] In the system 800, the threat intelligence system 102 can communicate, via the network(s) 112, with the rules data store 108, one or more network devices 806A-N (which can be part of the enterprise's internal network 104 described herein)), and
a trigger for the detection rule, the trigger configured to cause a detection by one of the local security agents when applying the detection rule ([0012] FIG. 6 is a conceptual diagram of a system 600 for testing and evaluating threat detection rule performance. The system 600 can be implemented during runtime to determine effectiveness of rule execution in response to threat actions/actors of the enterprise's internal network 104; [0116] Sometimes, updating existing rules can include adding one or more signatures or signature sequences from the received dictionary or assessed network traffic to a trigger condition in the rules. For example, a rule for detecting malware in a family A can be triggered if network traffic or another file contains signatures $s25 and/or $s29);
transmitting the security update to one or more of the plurality of endpoints ([0116] The rules can also define different actions to be taken (e.g., automatically by the threat rule system 602 or the threat intelligence system 102, manually by a relevant user in the enterprise network 104) in response to being triggered … The threat intelligence system 102 and/or the threat rule system 602 can update this rule so that the rule is triggered if network traffic now contains signatures $s25, $s29, $s0, $s0′, and/or $s4′. This update can be determined by the threat intelligence system 102 based on processing the network traffic and/or other received information and determining that the rule is outdated or otherwise not triggering in response to new variations of the particular malware family A; [0117] The updated and/or generated rule(s) can be transmitted from the threat intelligence system 102 to the threat rule system 602 in block D (616));
logging transmittals of the security update to the one or more of the plurality of endpoints ([0117] the threat rule system 602 can retrieve the rules, such as from a data store described herein, optionally update the retrieved rules, and/or perform the rules during runtime execution);
logging test responses to the trigger from the plurality of endpoints ([0119] The threat rule system 602 can determine whether any content in the network traffic triggers one or more of the threat detection rules (including previously defined rules, newly generated rules, and/or updated rules) (block E, 618)); and
in response to a predetermined pattern of transmittals and test responses, initiating a remediation of one or more of the plurality of endpoints ([0120] The threat rule system 602 can block any of the network traffic that triggers one or more of the rules (block F, 620). The system 602 may prevent that network traffic from being transmitted into the enterprise network 104. The system 602 may also perform one or more other actions responsive to blocking the network traffic. The other actions can be defined by the rule(s) that is triggered).
Thomas’96 does not disclose:
a detection rule for local security agents on the plurality of endpoints.
However, Thomas’10 discloses:
a detection rule for local security agents on the plurality of endpoints ([0074] The policy management facility 112 may include access rules and policies that are distributed to maintain control of access by the compute instances 10-26 to network resources; [00209] In another aspect, there is disclosed herein a system that operates according to the method 1500 described above. For example, the system may include a local security agent on a compute instance in an enterprise network and a threat management facility for the enterprise network. The local security agent may be configured, e.g., by computer executable code executing on the compute instance, to generate one or more event vectors each including a collection of events for an entity associated with the compute instance, to locally determine a first risk score based on a first deviation of one of the event vectors to an entity model for the entity associated with the compute instance, and to report each of the event vectors to a remote resource; [00254] In general, the local security agent 1904 on the endpoint 1902 may be configured to lookup behaviors that are potentially suitable for intervention using a database 1906 of behaviors locally stored on the endpoint 1902).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic test network environment of Thomas’96 to include a local security agent on an endpoint configured to execute a rule and/or intervene in response to malicious behaviors that are potentially suitable for intervention, as disclosed by Thomas’10.
The motivation to include the local security agent on the endpoint in the synthetic test network environment is to enhance threat detection capability directly at the endpoint level, as necessary or helpful to investigate or remediate a potential threat.
The combination of Thomas’96 and Thomas’10 fails to disclose:
wherein logging test responses includes logging a verification of a security update process of one or more of the local security agents by detecting the trigger contained in the security update with a corresponding detection by each of the one or more of the local security agents based on the detection rule contained in the security update.
However, Milden discloses:
wherein logging test responses includes logging a verification of a security update process of one or more of the local security agents by detecting the trigger contained in the security update with a corresponding detection by each of the one or more of the local security agents based on the detection rule contained in the security update ([0061] In some example implementations, the apparatus 200, such as through the operation of input/output circuitry 206 and/or communications circuitry 208, may request and/or receive information from the network-connected device and test that received information against the instructions contained in the reconfiguration instruction set. In some example implementations, determining whether the reconfiguration has been performed by the network-connected device may comprise receiving an event data set from the network-connected device and performing a compliance verification procedure on the received event data set. It will be appreciated that the content of the event data set and/or the details of the compliance verification procedure may vary based on the characteristics of the network-connected device and the relevant reconfiguration instruction set; [0065] It will be appreciated that, particularly in situations where the apparatus 200 is capable of receiving and/or processing information in real-time and/or near-real-time, the status of a network-connected device may be periodically checked, and the incentives provided to the network-connected device may be adjusted. For example, the apparatus 200 may subject a given network-connected device to multiple compliance verification procedures over time to confirm that reconfigurations have been maintained. Moreover, in some example implementations, the ability of the apparatus 200 to process information in real-time and/or near-real-time permits the apparatus 200 to adjust the network access rule set and/or other incentives for the network-connected device to reflect changes in the activity patterns in a given area and/or changes in the characteristics of the network-connected device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic test network environment of Thomas’96 and Thomas’10 to include a system and apparatus that performs compliance verification over time, as disclosed by Milden.
The motivation to perform compliance verification over time is to provide information that allows users/administrators to readily ascertain the susceptibility of a given network-connected device to fraudulent activity.
Regarding claim 20, it is a system claim that recites subject matter similar to claim 8 and is therefore rejected under a similar ground of rejection.
Regarding claim 9, the combination of Thomas’96, Thomas’10 and Milden discloses:
The method of claim 8, wherein the remediation includes a notification to initiate investigation of one or more of the plurality of endpoints (Thomas’96: [0043] The threat intelligence system 102 may also use the information associations and aggregations to generate and prioritize opportunities for improving intelligence/information gathering tactics of the enterprise, which further may improve robustness of data representations of the threat actions/actors that can be used by the threat intelligence system 102 and/or relevant users or other systems and components of the enterprise to generate and improve threat detection rules and other responses; [0044] The system 102 can aggregate and process the received information to determine threat intelligence status and/or improvement opportunities. Such status and/or opportunities can include, but are not limited to, threat data gaps (e.g., informational gaps, whether sufficient data or other information exists to identify and track an attack flow of a particular threat actor, malware, or other threat action(s)), threat data freshness (e.g., how recently the threat data was collected and/or updated), detection rule and response performance (e.g., whether a rule triggered, whether the right rule triggered, whether a response was adequate). The status and/or opportunities determined by the threat intelligence system 102 may be transmitted to the security analyst devices 162 or devices of other relevant users in the system 100 and used to improve the threat detection rules, information collection, and/or responses to ever-changing malware 150 and/or threat actors 152).
Regarding claim 10, the combination of Thomas’96, Thomas’10 and Milden discloses:
The method of claim 8, wherein the remediation includes one or more of a quarantine, an isolation, and a malware scan of one or more of the plurality of endpoints (Thomas’10: [0090] When a threat or other policy violation is detected by the security management facility 122, the remedial action facility 128 may be used to remediate the threat … quarantine of a requesting application or the device, isolation of the requesting application or the device).
Regarding claim 12, the combination of Thomas’96, Thomas’10 and Milden discloses:
The method of claim 8, wherein the predetermined pattern includes an absence of one of the test responses from one of the plurality of endpoints that retrieved the security update from the threat management facility (Thomas’96: [0018] The disclosed technology may provide for continuous and automatic (i) collection of information associated with threat actions and actors and (ii) testing of detection rules to ensure that the rules work as designed and continue to respond to ever-changing threats. Such ongoing integrity checks can provide for testing and validating threat detection rules and their components as new security threats and/or actors are ingested into the enterprise's system and as associated information changes over time; [0133] The computer system can inject a test sample into the set of samples in block 710. The test sample can be a new malware sample that is retrieved from a dictionary of malware samples (e.g., a dictionary of malware samples associated with the particular malware family, a dictionary of malware samples associated with various different malware families, a dictionary of malware samples associated with a sub-family of the particular malware family). For example, the computer system can inject a malware sample that was retrieved in block 704 but that was not included in the threshold-size set of the retrieved samples that was tested in block 706. By injecting the malware sample back into the set for the particular malware family, the computer system can determine whether the injected sample is the sample that was wrongfully associated with the malware family and thus caused the generation and poor performance of the detection rule that does not satisfy the one or more rule efficacy criteria).
Regarding claim 13, the combination of Thomas’96, Thomas’10 and Milden discloses:
The method of claim 8, wherein the predetermined pattern includes a malware detection unrelated to the security update from one of the plurality of endpoints (Thomas’96: [0018] The disclosed technology may provide for continuous and automatic (i) collection of information associated with threat actions and actors and (ii) testing of detection rules to ensure that the rules work as designed and continue to respond to ever-changing threats. Such ongoing integrity checks can provide for testing and validating threat detection rules and their components as new security threats and/or actors are ingested into the enterprise's system and as associated information changes over time; [0133] The computer system can inject a test sample into the set of samples in block 710. The test sample can be a new malware sample that is retrieved from a dictionary of malware samples (e.g., a dictionary of malware samples associated with the particular malware family, a dictionary of malware samples associated with various different malware families, a dictionary of malware samples associated with a sub-family of the particular malware family). For example, the computer system can inject a malware sample that was retrieved in block 704 but that was not included in the threshold-size set of the retrieved samples that was tested in block 706. By injecting the malware sample back into the set for the particular malware family, the computer system can determine whether the injected sample is the sample that was wrongfully associated with the malware family and thus caused the generation and poor performance of the detection rule that does not satisfy the one or more rule efficacy criteria).
Regarding claim 14, the combination of Thomas’96, Thomas’10 and Milden discloses:
The method of claim 8, wherein the predetermined pattern includes an absence of security update requests from one or more of the plurality of endpoints (Thomas’96: [0018] The disclosed technology may provide for continuous and automatic (i) collection of information associated with threat actions and actors and (ii) testing of detection rules to ensure that the rules work as designed and continue to respond to ever-changing threats. Such ongoing integrity checks can provide for testing and validating threat detection rules and their components as new security threats and/or actors are ingested into the enterprise's system and as associated information changes over time; [0133] The computer system can inject a test sample into the set of samples in block 710. The test sample can be a new malware sample that is retrieved from a dictionary of malware samples (e.g., a dictionary of malware samples associated with the particular malware family, a dictionary of malware samples associated with various different malware families, a dictionary of malware samples associated with a sub-family of the particular malware family). For example, the computer system can inject a malware sample that was retrieved in block 704 but that was not included in the threshold-size set of the retrieved samples that was tested in block 706. By injecting the malware sample back into the set for the particular malware family, the computer system can determine whether the injected sample is the sample that was wrongfully associated with the malware family and thus caused the generation and poor performance of the detection rule that does not satisfy the one or more rule efficacy criteria).
Regarding claim 15, the combination of Thomas’96, Thomas’10 and Milden discloses:
The method of claim 8, wherein the detection rule and the trigger are packaged into a single file as the security update for retrieval by the plurality of endpoints (Thomas’96: [0085] The computer system can also access threat detection rules and data representations for known threat actions and actors to the enterprise network in block 404. The data representations may include matrices and other types of data entries and/or tables or file packages including all information (e.g., intel, threat intelligence information) that has been collected and/or documented for each known threat action and actor).
Regarding claim 17, the combination of Thomas’96, Thomas’10 and Milden discloses:
The method of claim 8, wherein the detection rule includes a behavioral detection rule, and wherein the trigger is configured to cause one of the plurality of endpoints to perform a plurality of activities associated with the behavioral detection rule (Thomas’10: [00112] Compute instances 611, 612, 613 may connect to a SaaS application 630. The SaaS applications 630 each communication with an identity provider 620 (e.g., Azure Active Directory). The identity provider 620 communicates with an identity provider interface 606, for example to provide multifactor authentication. For example, the IDP authentication interface 606 may send a text message or a notification to a mobile device that may be used as a requirement for authentication).
Regarding claim 18, the combination of Thomas’96, Thomas’10 and Milden discloses:
The method of claim 8, wherein the detection rule includes a Uniform Resource Locator rule, and wherein the trigger is configured to cause a receiving one of the plurality of endpoints to try to connect to a network address specified in the Uniform Resource Locator rule (Thomas’10: [00290] While security alerts may usefully be delivered to user-specific devices using local security agents on managed devices as described herein, a security alert may also or instead be communicated over another channel, or through another communication medium, that supplements existing channels, such as email, text messages, and the like. A security alert may further provide a link or otherwise enable access to a user interface for addressing alerts, or to specific actions responsive to an alert. For example, a link may provide a connection with an enterprise server or a threat management central server, open a communication channel with an enterprise security team member, and the like).
Regarding claim 19, the combination of Thomas’96, Thomas’10 and Milden discloses:
The method of claim 8, wherein the detection is a real time detection based on monitoring of reads and writes by a file system of a receiving one of the endpoints (Thomas’10: [00197] For example, security policies for compute instances 1402, users, applications or the like may be updated to security settings that impose stricter controls or limits on activity including, e.g., limits on network activity (bandwidth, data quotas, permitted network addresses, etc.), limits on system changes (e.g., registry entries, certain system calls, etc.), limits on file activity (e.g., changes to file permissions), increased levels of local activity monitoring, and so forth; [00307] A security agent 2802 operating on or in association with first endpoint 2804 may detect a behavior 2806 such as an attempt to login at a protected resource 2830 such as an application gateway, enterprise network gateway, cloud data storage facility, cloud computing resource, website, or other remote resource … The determination may be based on observations of behaviors associated with the endpoint 2804, such as user activity, file access activity, program/application activity, and other behaviors, such as activity trends and the like).
Claim(s) 3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Thomas et al., (US20250141896A1) hereinafter referred to as Thomas’96 in view of Thomas et al., Foreign Patent Application # WO2022087510A1 hereinafter referred to as Thomas’10 in view of Milden et al., (US20240048565A1) and further in view of Shanklin et al., (US20020133586A1).
Regarding claim 3, the combination of Thomas’96, Thomas’10 and Milden fails to disclose:
The computer program product of claim 1, wherein the detection rule includes a static detection rule.
However, Shanklin discloses:
wherein the detection rule includes a static detection rule ([0079] In short, the rules are stored on nodes in the Radix tree, and certain actions (e.g. alert, deny, throttle, redirect or combinations thereof) are associated with the rule. Hash tables may also be associated with the rules nodes for rules relating to specific parameters. For example, where an “any” item in terms of, for example, source address or port, destination address or port or protocol is identified for specific treatment, upon the presentation of that item to the IDS the hash table is consulted for the appropriate rule to apply).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Thomas’96 in view of Thomas’10 and further in view of Milden to include a system that maintains a hash table for rules, as disclosed by Shanklin.
The motivation to maintain a hash table for rules is that a hash table is a highly effective data structure for maintaining a ruleset due to its efficiency in storing, retrieving, and managing rules based on unique identifiers or characteristics.
Regarding claim 16, it is a system claim that recites subject matter similar to that of claim 3 and is therefore rejected on similar grounds of rejection.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Thomas et al., (US20250141896A1) hereinafter referred to as Thomas’96 in view of Thomas et al., Foreign Patent Application # WO2022087510A1 hereinafter referred to as Thomas’10 in view of Milden et al., (US20240048565A1) and further in view of Willis et al., (US20160267284A1).
Regarding claim 11, the combination of Thomas’96, Thomas’10 and Milden fails to disclose:
The method of claim 8, wherein the remediation includes a local security agent reinstallation on one or more of the plurality of endpoints.
However, Willis discloses:
wherein the remediation includes a local security agent reinstallation on one or more of the plurality of endpoints ([0043] These safeguards may include hiding the program from user access, embedding the security agent in firmware of the portable device, prohibiting uninstall of the security agent, causing the security agent to be automatically reinstalled on the portable device from read-only memory or from the network if the security agent had been removed or tampered with).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Thomas’96 in view of Thomas’10 and further in view of Milden to include a system that enforces reinstallation of a security agent on a computer, as disclosed by Willis.
The motivation to enforce reinstallation of the security agent on the computer is to provide one or more safeguards aimed at preventing tampering by unauthorized individuals.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED M AHSAN whose telephone number is (571)272-5018. The examiner can normally be reached 8:30 AM - 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Korzuch can be reached at 571-272-7589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SYED M AHSAN/Primary Examiner, Art Unit 2491