DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
Acknowledgment is made of the information disclosure statements filed on July 27, 2025 and November 16, 2025. The U.S. patents and foreign patent documents cited therein have been considered.
Response to Amendment
The amendment filed on October 13, 2025 has been entered.
Claims 11 and 24 have been amended.
Applicant’s amendment and response to the claims are sufficient to overcome the rejection under 35 U.S.C. § 112(b) set forth in the previous Office action. The Examiner has withdrawn the rejection.
Response to Arguments
Applicant's arguments filed on June 30, 2025, have been fully considered, but they are not persuasive.
Examiner’s response to Applicant’s argument 1 (Pages 2-3 of Applicant Arguments/Remarks):
Regarding Applicant’s argument that Mazumder fails to teach “each event corresponding to a failed action,” the Examiner respectfully disagrees.
Mazumder discloses detecting in at least one log of a computing environment a plurality of events, each event corresponding to a failed action, each event further corresponding to a[n] entity deployed in the computing environment (See Parag. [0025]; The automatic graph-based detection logic 108 analyzes the logs and events 104A-104M, which are obtained directly or indirectly from the protectable entities 102A-102M, to detect the security threats. The automatic graph-based detection logic may retrieve the logs and events 104A-104M directly from the protectable entities 102A-102M by intercepting the logs and events 104A-104M from the protectable entities 102A-102M in real-time. See Parag. [0039]; Examples of an event include but are not limited to an operation failure, crypto-mining activity, atypical travel, a login attempt, an unfamiliar location (e.g., domain) of a computing device (e.g., computing device that attempted to login or computing device from which information is downloaded or attempted to be downloaded), a number of files downloaded is greater than or equal to a threshold, a cumulative size of files downloaded is greater than or equal to a threshold, and a number of accounts that are enumerated is greater than or equal to a threshold (each event corresponding to a failed action)).
The Examiner did NOT interpret the plurality of events to correspond to the entire list of events, as argued by the Applicant; instead, Mazumder discloses analyzing events to detect the security threats (Mazumder, Parag. [0025]), and the Examiner has cited Parag. [0039] of Mazumder because it teaches different examples of what an event is (i.e., the Examiner has selected an operation failure as an event). Moreover, Parag. [0039] does NOT teach that the analyzed events must include all of the different examples in the list in Parag. [0039]. Therefore, the Examiner has reasonably interpreted the analyzed events to be operation failure events.
Examiner’s response to Applicant’s argument 2 (Pages 3-5 of Applicant Arguments/Remarks):
Regarding Applicant’s argument that Mazumder does not use the claimed “cloud log of a cloud computing environment,” the Examiner respectfully disagrees.
In rejecting the limitation, the Examiner has relied on Mazumder disclosing detecting in at least one log of a computing environment a plurality of events … (See Parag. [0025]; The automatic graph-based detection logic 108 analyzes the logs and events 104A-104M, which are obtained directly or indirectly from the protectable entities 102A-102M, to detect the security threats. The automatic graph-based detection logic may retrieve the logs and events 104A-104M directly from the protectable entities 102A-102M by intercepting the logs and events 104A-104M from the protectable entities 102A-102M in real-time).
Mazumder does not explicitly disclose the security detection system, including the at least one cloud log, as operating in a cloud computing environment.
However, Yellapragada discloses a security detection system including the at least one cloud log operating in a cloud computing environment (See Parag. [0054]; The operations 600 include, at block 604, for public cloud environments, using a machine imaging method to collect service and vulnerability information. See Parag. [0073]; In certain public cloud environments, events are obtained using various methods such as cloud monitoring logs. The logs include events such as changes in VM state, change in ACLs, firewall rules, etc. See Parag. [0095]; Certain embodiments herein are directed to an imaging method to analyze assets in a public cloud environment to obtain security related information such as services running, software packages installed, and the corresponding vulnerabilities associated therewith. Such a method first uses the gathered information related to the virtual cloud assets).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder to operate in a cloud computing environment, as taught by Yellapragada. Doing so would be beneficial to prevent, detect, and monitor unauthorized access, misuse, modification, or denial of a computer and computer network (Yellapragada, Parag. [0001]).
Mazumder discloses in Parag. [0022] that the automatic graph-based detection system 100 includes a plurality of protectable entities 102A-102M [which] may be a processing system, an application, a service … or any entity that possesses sensitive, proprietary, and/or important information. An example of a processing system is a system that includes at least one processor that is capable of manipulating data in accordance with a set of instructions. For instance, a processing system may be a computer. Therefore, Mazumder discloses a detection system for analyzing logs associated with the entities 102A-102M, which comprise computing entities; this is reasonably interpreted as detection of multiple failed actions across a computing environment based on analyzing logs.
In response to applicant's argument that the references fail to show certain features of applicant’s invention, it is noted that the features upon which applicant relies (i.e., correlation of events across different entities) are not recited in the rejected claim. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Examiner’s response to Applicant’s argument 3 (Pages 5-6 of Applicant Arguments/Remarks):
Regarding Applicant’s argument that Mazumder does not teach “extracting from the log an identifier of the entity,” the Examiner respectfully disagrees.
Mazumder discloses extracting from the log an identifier of the entity (See Parag. [0022]; Each of the protectable entities 102A-102M may be a processing system, an application, a service, a client, a user (e.g., a user ID). See Parag. [0025-0026]; The automatic graph-based detection logic 108 may retrieve the logs and events 104A-104M directly from the protectable entities 102A-102M by intercepting the logs and events 104A-104M from the protectable entities 102A-102M in real-time ... The automatic graph-based detection logic 108 generates an association graph based on (e.g., based at least in part on) the logs and events 104A-104M. See also Parag. [0039]; each graph node of the association graph represents an entity from a plurality of entities).
The Examiner notes that the automatic graph-based detection logic intercepts the logs and generates the association graph, which represents the entities (e.g., user ID). Therefore, the Examiner has reasonably interpreted that the entity represented by the graph is uniquely identified within the generated graph based on the retrieved log data. In addition, the Examiner has reasonably interpreted intercepting/retrieving log(s) to be equivalent to extracting from the log(s).
Examiner’s response to Applicant’s argument 4 (Pages 6-9 of Applicant Arguments/Remarks):
Regarding Applicant’s argument that Mazumder does not teach “traversing a security graph to detect a node representing the entity, based on the extracted identifier,” the Examiner respectfully disagrees.
Mazumder discloses traversing a security graph to detect a node representing the entity, based on the extracted identifier (See Parag. [0026]; The automatic graph-based detection logic 108 initializes a Bayesian network using the association graph, based on correlations among graph nodes that are included in the association graph (i.e., identified entities (e.g., User ID)), to establish connections among network nodes that are included in the Bayesian network. The automatic graph-based detection logic 108 groups the network nodes of the Bayesian network among clusters that correspond to respective intents such that, for each connection between a respective pair of network nodes, which includes an arbitrary network node and a network node that is included in a cluster, a connection between the arbitrary network node and each of the other network nodes that are included in that cluster is created. The automatic graph-based detection logic 108 identifies patterns in the Bayesian network. Each pattern includes at least one connection. Each connection is between a respective pair of network nodes. The automatic graph-based detection logic 108 removes at least one redundant connection, which is redundant with regard to one or more other connections, from the patterns in the Bayesian network).
Mazumder discloses correlations among graph nodes that are included in the association graph (i.e., identified entities (e.g., User ID)) to establish connections among network nodes. As discussed in response to Applicant’s argument 3 above, the node within the graph is uniquely identified within the generated graph. In addition, Mazumder discloses in Parag. [0022] that [e]ach of the protectable entities 102A-102M may be a user ID, which is later represented in the graph.
In addition, the Examiner notes that “traversing a security graph …” is reasonably interpreted as examining the vertices (nodes) and edges of a graph data structure by moving through connected nodes to discover relationships, paths, or patterns.
Examiner’s response to Applicant’s argument 5 (Pages 9-12 of Applicant Arguments/Remarks):
Regarding Applicant’s argument that Mazumder does not teach “detecting a node representing a cybersecurity vulnerability connected to the node representing the entity,” the Examiner respectfully disagrees.
Mazumder discloses detecting a node representing a cybersecurity vulnerability connected to the node representing the entity (See Parag. [0026]; The automatic graph-based detection logic 108 assigns scores to the respective patterns in the Bayesian network, based on knowledge of historical patterns and historical security threats, such that each score indicates a likelihood of the respective pattern to indicate a security threat. The automatic graph-based detection logic 108 automatically generates an output graph. The output graph includes each pattern that has a score that is greater than or equal to a score threshold. The output graph does not include each pattern that has a score that is less than the score threshold. Each pattern in the output graph represents a potential security threat. See also Parag. [0027]).
In addition, Mazumder discloses in Parag. [0030] that LSTM neural network may be capable of remembering relationships between features, such as events that are represented by the network nodes of the Bayesian network, sequences (e.g., temporal sequences) of such events, entities associated with such events, probabilities that such events, sequences, and/or entities correspond to a potential security threat, and ML-based confidences that are derived therefrom. Mazumder discloses in Parag. [0078] identifying (FIG. 2, 206) a plurality of patterns in the Bayesian network, each pattern including at least one connection, each connection being between a respective pair of network nodes. Therefore, detecting a pattern connecting a pair of nodes as a potential security threat is equivalent to detecting a pattern represented by a connection between the pair of nodes.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 8, 13-14, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Mazumder et al. (Pub. No. US 2023/0102103), hereinafter Mazumder, in view of Yellapragada et al. (Pub. No. US 2023/0208870), hereinafter Yellapragada.
Claim 1. Mazumder discloses a method for detecting an exploited vulnerable entity (See Parag. [0005]; automatic graph-based detection of potential security threats), comprising:
detecting in at least one log of a computing environment a plurality of events, each event corresponding to a failed action, each event further corresponding to a[n] entity deployed in the computing environment (See Parag. [0025]; The automatic graph-based detection logic 108 analyzes the logs and events 104A-104M, which are obtained directly or indirectly from the protectable entities 102A-102M, to detect the security threats. The automatic graph-based detection logic may retrieve the logs and events 104A-104M directly from the protectable entities 102A-102M by intercepting the logs and events 104A-104M from the protectable entities 102A-102M in real-time. See Parag. [0039]; Examples of an event include but are not limited to an operation failure, crypto-mining activity, atypical travel, a login attempt, an unfamiliar location (e.g., domain) of a computing device (e.g., computing device that attempted to login or computing device from which information is downloaded or attempted to be downloaded), a number of files downloaded is greater than or equal to a threshold, a cumulative size of files downloaded is greater than or equal to a threshold, and a number of accounts that are enumerated is greater than or equal to a threshold (each event corresponding to a failed action));
extracting from the log an identifier of the entity (See Parag. [0022]; Each of the protectable entities 102A-102M may be a processing system, an application, a service, a client, a user (e.g., a user ID). See Parag. [0025-0026]; The automatic graph-based detection logic 108 may retrieve the logs and events 104A-104M directly from the protectable entities 102A-102M by intercepting the logs and events 104A-104M from the protectable entities 102A-102M in real-time ... The automatic graph-based detection logic 108 generates an association graph based on (e.g., based at least in part on) the logs and events 104A-104M. See Parag. [0039]; each graph node of the association graph represents an entity from a plurality of entities. Examples of an entity include but are not limited to a user, an internet protocol (IP) address, an alert, a host (e.g., client host), a virtual machine (VM), a file, a cloud subscription, and a domain controller. Examiner interpretation: The applicant specification discloses “an identifier of a cloud entity, such as principal (e.g., user account)” (See Parag. [0025]). The automatic graph-based detection logic retrieves information from the logs to generate an association graph representing the entities as nodes of the graph. Therefore, it is reasonably interpreted by the Examiner that the entity is identified. As described above, Mazumder discloses that the entities may be a user (e.g., a user ID));
traversing a security graph to detect a node representing the entity, based on the extracted identifier, wherein the security graph includes a representation of the computing environment (See Parag. [0026]; The automatic graph-based detection logic 108 initializes a Bayesian network using the association graph, based on correlations among graph nodes that are included in the association graph (i.e., identified entities (e.g., User ID)), to establish connections among network nodes that are included in the Bayesian network. The automatic graph-based detection logic 108 groups the network nodes of the Bayesian network among clusters that correspond to respective intents such that, for each connection between a respective pair of network nodes, which includes an arbitrary network node and a network node that is included in a cluster, a connection between the arbitrary network node and each of the other network nodes that are included in that cluster is created. The automatic graph-based detection logic 108 identifies patterns in the Bayesian network. Each pattern includes at least one connection. Each connection is between a respective pair of network nodes. The automatic graph-based detection logic 108 removes at least one redundant connection, which is redundant with regard to one or more other connections, from the patterns in the Bayesian network);
detecting a node representing a cybersecurity vulnerability connected to the node representing the entity (See Parag. [0026]; The automatic graph-based detection logic 108 assigns scores to the respective patterns in the Bayesian network, based on knowledge of historical patterns and historical security threats, such that each score indicates a likelihood of the respective pattern to indicate a security threat. The automatic graph-based detection logic 108 automatically generates an output graph. The output graph includes each pattern that has a score that is greater than or equal to a score threshold. The output graph does not include each pattern that has a score that is less than the score threshold. Each pattern in the output graph represents a potential security threat. See also Parag. [0027]); and
initiating a mitigation action for the workload based on the cybersecurity vulnerability (See Parag. [0020]; by using graph(s) to automatically detect potential security threats, an amount of time and/or assets consumed to detect the potential security threats and/or to respond to (e.g., mitigate) the negative impacts that result from those potential security threats may be reduced. The example techniques may prevent the negative impacts of the potential security threats from occurring in which case the amount of time and/or assets consumed to respond to the negative impacts may be avoided).
Mazumder does not explicitly disclose the security detection system, including the at least one cloud log, as operating in a cloud computing environment.
However, Yellapragada discloses a security detection system including the at least one cloud log operating in a cloud computing environment (See Parag. [0054]; The operations 600 include, at block 604, for public cloud environments, using a machine imaging method to collect service and vulnerability information. See Parag. [0073]; In certain public cloud environments, events are obtained using various methods such as cloud monitoring logs. The logs include events such as changes in VM state, change in ACLs, firewall rules, etc. See Parag. [0095]; Certain embodiments herein are directed to an imaging method to analyze assets in a public cloud environment to obtain security related information such as services running, software packages installed, and the corresponding vulnerabilities associated therewith. Such a method first uses the gathered information related to the virtual cloud assets).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder to operate in a cloud computing environment, as taught by Yellapragada. Doing so would be beneficial to prevent, detect, and monitor unauthorized access, misuse, modification, or denial of a computer and computer network (Yellapragada, Parag. [0001]).
Claim 8. Mazumder in view of Yellapragada discloses the method of claim 1,
Mazumder further discloses wherein the cybersecurity vulnerability is any one of: a weak password, an exposed password, a misconfiguration, an exposure, and a combination thereof (See Parag. [0018]; A potential security threat may be a potential negative (e.g., malicious) action or event that is facilitated by a vulnerability and that is configured to result in an unwanted impact to a computing system).
Claim 13. Mazumder discloses a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to execute a process (See Fig. 7 and Parag. [0089]), the process comprising:
detecting in at least one log of a computing environment a plurality of events, each event further corresponding to a[n] entity deployed in the computing environment (See Parag. [0025]; The automatic graph-based detection logic 108 analyzes the logs and events 104A-104M, which are obtained directly or indirectly from the protectable entities 102A-102M, to detect the security threats. The automatic graph-based detection logic may retrieve the logs and events 104A-104M directly from the protectable entities 102A-102M by intercepting the logs and events 104A-104M from the protectable entities 102A-102M in real-time. See Parag. [0039]; Examples of an event include but are not limited to an operation failure, crypto-mining activity, atypical travel, a login attempt, an unfamiliar location (e.g., domain) of a computing device (e.g., computing device that attempted to login or computing device from which information is downloaded or attempted to be downloaded), a number of files downloaded is greater than or equal to a threshold, a cumulative size of files downloaded is greater than or equal to a threshold, and a number of accounts that are enumerated is greater than or equal to a threshold (each event corresponding to a failed action));
extracting from the log an identifier of the entity (See Parag. [0022]; Each of the protectable entities 102A-102M may be a processing system, an application, a service, a client, a user (e.g., a user ID). See Parag. [0025-0026]; The automatic graph-based detection logic 108 may retrieve the logs and events 104A-104M directly from the protectable entities 102A-102M by intercepting the logs and events 104A-104M from the protectable entities 102A-102M in real-time ... The automatic graph-based detection logic 108 generates an association graph based on (e.g., based at least in part on) the logs and events 104A-104M. See Parag. [0039]; each graph node of the association graph represents an entity from a plurality of entities. Examples of an entity include but are not limited to a user, an internet protocol (IP) address, an alert, a host (e.g., client host), a virtual machine (VM), a file, a cloud subscription, and a domain controller. Examiner interpretation: The applicant specification discloses “an identifier of a cloud entity, such as principal (e.g., user account)” (See Parag. [0025]). The automatic graph-based detection logic retrieves information from the logs to generate an association graph representing the entities as nodes of the graph. Therefore, it is reasonably interpreted by the Examiner that the entity is identified. As described above, Mazumder discloses that the entities may be a user (e.g., a user ID));
traversing a security graph to detect a node representing the entity, based on the extracted identifier, wherein the security graph includes a representation of the computing environment (See Parag. [0026]; The automatic graph-based detection logic 108 initializes a Bayesian network using the association graph, based on correlations among graph nodes that are included in the association graph (i.e., identified entities (e.g., User ID)), to establish connections among network nodes that are included in the Bayesian network. The automatic graph-based detection logic 108 groups the network nodes of the Bayesian network among clusters that correspond to respective intents such that, for each connection between a respective pair of network nodes, which includes an arbitrary network node and a network node that is included in a cluster, a connection between the arbitrary network node and each of the other network nodes that are included in that cluster is created. The automatic graph-based detection logic 108 identifies patterns in the Bayesian network. Each pattern includes at least one connection. Each connection is between a respective pair of network nodes. The automatic graph-based detection logic 108 removes at least one redundant connection, which is redundant with regard to one or more other connections, from the patterns in the Bayesian network);
detecting a node representing a cybersecurity vulnerability connected to the node representing the entity (See Parag. [0026]; The automatic graph-based detection logic 108 assigns scores to the respective patterns in the Bayesian network, based on knowledge of historical patterns and historical security threats, such that each score indicates a likelihood of the respective pattern to indicate a security threat. The automatic graph-based detection logic 108 automatically generates an output graph. The output graph includes each pattern that has a score that is greater than or equal to a score threshold. The output graph does not include each pattern that has a score that is less than the score threshold. Each pattern in the output graph represents a potential security threat. See also Parag. [0027]); and
initiating a mitigation action for the workload based on the cybersecurity vulnerability (See Parag. [0020]; by using graph(s) to automatically detect potential security threats, an amount of time and/or assets consumed to detect the potential security threats and/or to respond to (e.g., mitigate) the negative impacts that result from those potential security threats may be reduced. The example techniques may prevent the negative impacts of the potential security threats from occurring in which case the amount of time and/or assets consumed to respond to the negative impacts may be avoided).
Mazumder does not explicitly disclose the security detection system, including the at least one cloud log, as operating in a cloud computing environment.
However, Yellapragada discloses a security detection system including the at least one cloud log operating in a cloud computing environment (See Parag. [0054]; The operations 600 include, at block 604, for public cloud environments, using a machine imaging method to collect service and vulnerability information. See Parag. [0073]; In certain public cloud environments, events are obtained using various methods such as cloud monitoring logs. The logs include events such as changes in VM state, change in ACLs, firewall rules, etc. See Parag. [0095]; Certain embodiments herein are directed to an imaging method to analyze assets in a public cloud environment to obtain security related information such as services running, software packages installed, and the corresponding vulnerabilities associated therewith. Such a method first uses the gathered information related to the virtual cloud assets).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder to operate in a cloud computing environment, as taught by Yellapragada. Doing so would be beneficial to prevent, detect, and monitor unauthorized access, misuse, modification, or denial of a computer and computer network (Yellapragada, Parag. [0001]).
Claim 14. Mazumder discloses a system for detecting an exploited vulnerable entity, comprising: a processing circuitry; and a memory, the memory containing instructions (See Fig. 7 and Parag. [0089]) that, when executed by the processing circuitry, configure the system to:
detect in at least one log of a computing environment a plurality of events, each event further corresponding to a[n] entity deployed in the computing environment (See Parag. [0025]; The automatic graph-based detection logic 108 analyzes the logs and events 104A-104M, which are obtained directly or indirectly from the protectable entities 102A-102M, to detect the security threats. The automatic graph-based detection logic may retrieve the logs and events 104A-104M directly from the protectable entities 102A-102M by intercepting the logs and events 104A-104M from the protectable entities 102A-102M in real-time. See Parag. [0039]; Examples of an event include but are not limited to an operation failure, crypto-mining activity, atypical travel, a login attempt, an unfamiliar location (e.g., domain) of a computing device (e.g., computing device that attempted to login or computing device from which information is downloaded or attempted to be downloaded), a number of files downloaded is greater than or equal to a threshold, a cumulative size of files downloaded is greater than or equal to a threshold, and a number of accounts that are enumerated is greater than or equal to a threshold (each event corresponding to a failed action));
extract from the log an identifier of the entity (See Parag. [0022]; Each of the protectable entities 102A-102M may be a processing system, an application, a service, a client, a user (e.g., a user ID). See Parag. [0025-0026]; The automatic graph-based detection logic 108 may retrieve the logs and events 104A-104M directly from the protectable entities 102A-102M by intercepting the logs and events 104A-104M from the protectable entities 102A-102M in real-time ... The automatic graph-based detection logic 108 generates an association graph based on (e.g., based at least in part on) the logs and events 104A-104M. See Parag. [0039]; each graph node of the association graph represents an entity from a plurality of entities. Examples of an entity include but are not limited to a user, an internet protocol (IP) address, an alert, a host (e.g., client host), a virtual machine (VM), a file, a cloud subscription, and a domain controller. Examiner interpretation: The applicant specification discloses “an identifier of a cloud entity, such as principal (e.g., user account)” (See Parag. [0025]). The automatic graph-based detection logic retrieves information from the logs to generate an association graph representing the entities as nodes of the graph. Therefore, it is reasonably interpreted by the Examiner that the entity is identified. As described above, Mazumder discloses that the entities may be a user (e.g., a user ID));
traverse a security graph to detect a node representing the entity, based on the extracted identifier, wherein the security graph includes a representation of the computing environment (See Parag. [0026]; The automatic graph-based detection logic 108 initializes a Bayesian network using the association graph, based on correlations among graph nodes that are included in the association graph (i.e., identified entities (e.g., User ID)), to establish connections among network nodes that are included in the Bayesian network. The automatic graph-based detection logic 108 groups the network nodes of the Bayesian network among clusters that correspond to respective intents such that, for each connection between a respective pair of network nodes, which includes an arbitrary network node and a network node that is included in a cluster, a connection between the arbitrary network node and each of the other network nodes that are included in that cluster is created. The automatic graph-based detection logic 108 identifies patterns in the Bayesian network. Each pattern includes at least one connection. Each connection is between a respective pair of network nodes. The automatic graph-based detection logic 108 removes at least one redundant connection, which is redundant with regard to one or more other connections, from the patterns in the Bayesian network);
detect a node representing a cybersecurity vulnerability connected to the node representing the entity (See Parag. [0026]; The automatic graph-based detection logic 108 assigns scores to the respective patterns in the Bayesian network, based on knowledge of historical patterns and historical security threats, such that each score indicates a likelihood of the respective pattern to indicate a security threat. The automatic graph-based detection logic 108 automatically generates an output graph. The output graph includes each pattern that has a score that is greater than or equal to a score threshold. The output graph does not include each pattern that has a score that is less than the score threshold. Each pattern in the output graph represents a potential security threat. See also Parag. [0027]); and
initiate a mitigation action for the workload based on the cybersecurity vulnerability (See Parag. [0020]; by using graph(s) to automatically detect potential security threats, an amount of time and/or assets consumed to detect the potential security threats and/or to respond to (e.g., mitigate) the negative impacts that result from those potential security threats may be reduced. The example techniques may prevent the negative impacts of the potential security threats from occurring in which case the amount of time and/or assets consumed to respond to the negative impacts may be avoided).
Mazumder does not explicitly disclose the security detection system, including the at least one cloud log, operating in a cloud computing environment.
However, Yellapragada discloses a security detection system, including the at least one cloud log, operating in a cloud computing environment (See Parag. [0054]; The operations 600 include, at block 604, for public cloud environments, using a machine imaging method to collect service and vulnerability information. See Parag. [0073]; In certain public cloud environments, events are obtained using various methods such as cloud monitoring logs. The logs include events such as changes in VM state, change in ACLs, firewall rules, etc. See Parag. [0095]; Certain embodiments herein are directed to an imaging method to analyze assets in a public cloud environment to obtain security related information such as services running, software packages installed, and the corresponding vulnerabilities associated therewith. Such a method first uses the gathered information related to the virtual cloud assets).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder to operate in a cloud computing environment, as taught by Yellapragada, in order to prevent, detect, and monitor unauthorized access, misuse, modification, or denial of a computer and computer network (Yellapragada, Parag. [0001]).
Claim 21. The applicant is directed to the rejection of claim 8 set forth above, as claim 21 is rejected based on the same rationale.
Claims 2-7, 9-12, 15-20, and 22-25 are rejected under 35 U.S.C. 103 as being unpatentable over Mazumder et al. (Pub. No. US 2023/0102103), hereinafter Mazumder; in view of Yellapragada et al. (Pub. No. US 2023/0208870), hereinafter Yellapragada; in further view of Kirti et al. (Pub. No. US 2015/0172321), hereinafter Kirti.
Claim 2. Mazumder in view of Yellapragada discloses the method of claim 1,
Mazumder in view of Yellapragada does not explicitly disclose the method further comprising: detecting a principal identifier in an event corresponding to a failed action; and detecting an event corresponding to a successful action associated with the principal identifier.
However, Kirti discloses:
detecting a principal identifier in an event corresponding to a failed action (See Parag. [0089]; a threat can be identified based on an account accessing one or more files or failing a series of login attempts from an IP address that is flagged (by a third party feed or otherwise) as malicious); and
detecting an event corresponding to a successful action associated with the principal identifier (See Parag. [0056]; detecting patterns of suspicious activity in one cloud or across multiple clouds. Some patterns may involve detecting the same action or different actions in multiple clouds that are associated with the same user account or IP address).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder in view of Yellapragada to include detecting a principal identifier in an event corresponding to a failed action, as taught by Kirti, in order to detect attacks and unknown threats (Kirti, Parag. [0092]).
Claim 3. Mazumder in view of Yellapragada and Kirti discloses the method of claim 2,
Kirti further discloses the method further comprising:
determining that the successful action is an action which corresponds to a predetermined action; and initiating a mitigation action based on the successful action (See Parag. [0092]; a recommendation engine tracks user activity for anomalous behavior to detect attacks and unknown threats. The recommendation engine can track user activity across multiple clouds for suspicious events. Events can include pre-defined at-risk operations (e.g., downloading a file containing credit card numbers, copying encryption keys, elevating privileges of a normal user). An alarm can be sounded with details of the event and recommendations for remediation. See also Parag. [0056]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder in view of Yellapragada to include determining that the successful action is an action which corresponds to a predetermined action and initiating a mitigation action based on the successful action, as taught by Kirti, in order to detect attacks and unknown threats (Kirti, Parag. [0092]).
Claim 4. Mazumder in view of Yellapragada and Kirti discloses the method of claim 3,
Kirti further discloses wherein initiating the mitigation action includes any one of: revoking a permission associated with the cloud entity, changing a configuration of a resource, reducing a network exposure of the cloud entity, isolating the cloud entity, blocking network traffic to the cloud entity, blocking network traffic from the cloud entity, and a combination thereof (See Parag. [0063]; when a threat is detected based upon behavior on one or more cloud services, preemptively alert a system administrator with respect to threats on other cloud services and/or proactively secure other services on which a user maintains data by applying remedial measures, such as adding additional steps to authentication, changing passwords, blocking a particular IP address or addresses, blocking email messages or senders, or locking accounts).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder in view of Yellapragada to include blocking network traffic to the cloud entity and blocking network traffic from the cloud entity, as taught by Kirti, in order to detect attacks and unknown threats (Kirti, Parag. [0092]).
Claim 5. Mazumder in view of Yellapragada and Kirti discloses the method of claim 2,
Kirti further discloses wherein the principal identifier corresponds to any one of: a user account, a service account, and a role (See Parag. [0089]; a threat can be identified based on an account accessing one or more files or failing a series of login attempts from an IP address that is flagged (by a third party feed or otherwise) as malicious. See also Parag. [0056]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder in view of Yellapragada to include a user account, as taught by Kirti, in order to detect attacks and unknown threats (Kirti, Parag. [0092]).
Claim 6. Mazumder in view of Yellapragada and Kirti discloses the method of claim 2,
Kirti further discloses the method further comprising:
detecting a series of events and a principal identifier, each event in the series of events corresponding to a unique failed action (See Parag. [0089]; a threat can be identified based on an account accessing one or more files or failing a series of login attempts from an IP address that is flagged (by a third party feed or otherwise) as malicious… a series of failed logins with user accounts associated with a user across multiple clouds may indicate a concerted effort to crack the user's password and therefore set off an alarm. See Parag. [0092]; recommendation engine can track user activity across multiple clouds for suspicious events. Events can include pre-defined at-risk operations (e.g., downloading a file containing credit card numbers, copying encryption keys, elevating privileges of a normal user)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder in view of Yellapragada to include detecting a series of events, each corresponding to a unique failed action, as taught by Kirti, in order to detect attacks and unknown threats (Kirti, Parag. [0092]).
Claim 7. Mazumder in view of Yellapragada discloses the method of claim 1,
Mazumder in view of Yellapragada does not explicitly disclose wherein the failed action is failed based on insufficient permission to initiate the action.
However, Kirti discloses wherein the failed action is failed based on insufficient permission to initiate the action (See Parag. [0089]; a threat can be identified based on an account accessing one or more files or failing a series of login attempts from an IP address that is flagged (by a third party feed or otherwise) as malicious… a series of failed logins with user accounts associated with a user across multiple clouds may indicate a concerted effort to crack the user's password and therefore set off an alarm).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder in view of Yellapragada to include a failed action based on insufficient permission, as taught by Kirti, in order to detect attacks and unknown threats (Kirti, Parag. [0092]).
Claim 9. Mazumder in view of Yellapragada discloses the method of claim 1,
Mazumder in view of Yellapragada does not explicitly disclose the method further comprising: generating a notification to indicate that the workload is compromised, as part of the mitigation action.
However, Kirti discloses generating a notification to indicate that the workload is compromised, as part of the mitigation action (See Parag. [0063]; when a threat is detected based upon behavior on one or more cloud services, preemptively alert a system administrator with respect to threats on other cloud services and/or proactively secure other services on which a user maintains data by applying remedial measures, such as adding additional steps to authentication, changing passwords, blocking a particular IP address or addresses, blocking email messages or senders, or locking accounts).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder in view of Yellapragada to include generating a notification as part of the mitigation action, as taught by Kirti, in order to detect attacks and unknown threats (Kirti, Parag. [0092]).
Claim 10. Mazumder in view of Yellapragada discloses the method of claim 1,
Mazumder in view of Yellapragada does not explicitly disclose the method further comprising: updating a severity of an alert associated with the cybersecurity vulnerability as part of the mitigation action.
However, Kirti discloses updating a severity of an alert associated with the cybersecurity vulnerability as part of the mitigation action (See Parag. [0076]; A controls management platform user interface may display key security indicators in a library format with risk factors that are color coded (such as red, green, yellow). Other statistics or metrics may be displayed such as, but not limited to, user logins attempts, groups with most added users, most deleted files, users with the most deleted files, and users downloading the most files).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder in view of Yellapragada to include updating a severity of an alert associated with the cybersecurity vulnerability as part of the mitigation action, as taught by Kirti, in order to detect attacks and unknown threats (Kirti, Parag. [0092]).
Claim 11. Mazumder in view of Yellapragada discloses the method of claim 1,
Mazumder in view of Yellapragada does not explicitly disclose the method further comprising: detecting a node representing a principal connected to the node representing the workload; and initiating a mitigation action based on the principal.
However, Kirti discloses detecting a node representing a principal connected to the node representing the workload; and initiating a mitigation action based on the principal (See Parag. [0092]; a recommendation engine tracks user activity for anomalous behavior to detect attacks and unknown threats. The recommendation engine can track user activity across multiple clouds for suspicious events. Events can include pre-defined at-risk operations (e.g., downloading a file containing credit card numbers, copying encryption keys, elevating privileges of a normal user). An alarm can be sounded with details of the event and recommendations for remediation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder in view of Yellapragada to include detecting a node representing a principal connected to the node representing the workload and initiating a mitigation action based on the principal, as taught by Kirti, in order to detect attacks and unknown threats (Kirti, Parag. [0092]).
Claim 12. Mazumder in view of Yellapragada discloses the method of claim 1,
Mazumder in view of Yellapragada does not explicitly disclose wherein the failed action corresponds to any one of: deletion of a record, changing a permission of a principal account, changing a configuration of a resource, encrypting a database, deploying multiple workloads, deactivating multiple workloads, generating a secret, generating a certificate, generating a key, deleting a secret, deleting a certificate, deleting a key, exposing a resource to a public network, exfiltrating data, planting a malicious entity, initiating a privilege escalation, encrypting a record, assuming a role, and a combination thereof.
However, Kirti discloses wherein the failed action corresponds to any one of: deletion of a record, changing a permission of a principal account, changing a configuration of a resource, encrypting a database, deploying multiple workloads, deactivating multiple workloads, generating a secret, generating a certificate, generating a key, deleting a secret, deleting a certificate, deleting a key, exposing a resource to a public network, exfiltrating data, planting a malicious entity, initiating a privilege escalation, encrypting a record, assuming a role, and a combination thereof (See Parag. [0092]; a recommendation engine tracks user activity for anomalous behavior to detect attacks and unknown threats. The recommendation engine can track user activity across multiple clouds for suspicious events. Events can include pre-defined at-risk operations (e.g., downloading a file containing credit card numbers, copying encryption keys, elevating privileges of a normal user). An alarm can be sounded with details of the event and recommendations for remediation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Mazumder in view of Yellapragada to include a failed action corresponding to initiating a privilege escalation, as taught by Kirti, in order to detect attacks and unknown threats (Kirti, Parag. [0092]).
Claim 15. The applicant is directed to the rejection of claim 2 set forth above, as claim 15 is rejected based on the same rationale.
Claim 16. The applicant is directed to the rejection of claim 3 set forth above, as claim 16 is rejected based on the same rationale.
Claim 17. The applicant is directed to the rejection of claim 4 set forth above, as claim 17 is rejected based on the same rationale.
Claim 18. The applicant is directed to the rejection of claim 5 set forth above, as claim 18 is rejected based on the same rationale.
Claim 19. The applicant is directed to the rejection of claim 6 set forth above, as claim 19 is rejected based on the same rationale.
Claim 20. The applicant is directed to the rejection of claim 7 set forth above, as claim 20 is rejected based on the same rationale.
Claim 22. The applicant is directed to the rejection of claim 9 set forth above, as claim 22 is rejected based on the same rationale.
Claim 23. The applicant is directed to the rejection of claim 10 set forth above, as claim 23 is rejected based on the same rationale.
Claim 24. The applicant is directed to the rejection of claim 11 set forth above, as claim 24 is rejected based on the same rationale.
Claim 25. The applicant is directed to the rejection of claim 12 set forth above, as claim 25 is rejected based on the same rationale.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GHIZLANE MAAZOUZ whose telephone number is (571)272-8118. The examiner can normally be reached M-F, 7:30 AM - 5:00 PM (telework).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Philip J Chea can be reached on 571-272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GHIZLANE MAAZOUZ/
Examiner, Art Unit 2499
/PHILIP J CHEA/
Supervisory Patent Examiner, Art Unit 2499