Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
This communication is in response to applicant's claims filed on 01/26/2026. Claims 1-20 are pending.
Response to Arguments
Applicant’s arguments filed 01/26/2026 have been fully considered but are moot in view of the following new ground of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over PRATT (US 20200296124 A1) in view of JURZAK (US 20230004654 A1).
Regarding claim 1, PRATT teaches:
A computer-implemented method, comprising: monitoring system data associated with at least one operation occurring in at least one computing environment (PRATT [AB] “Techniques are described for processing anomalies detected using user-specified rules with anomalies detected using machine-learning based behavioral analysis models to identify threat indicators and security threats to a computer network. In an embodiment, anomalies are detected based on processing event data at a network security system that used rules-based anomaly detection.”);
predicting a predictive output, using at least one machine learning model and based at least in part on the system data, the predictive output indicative of whether the system data is associated with at least one of a plurality of anomalous event definitions (PRATT [0017] “FIG. 9 shows an example representation of the process of building adaptive behavioral baselines and evaluating against such baselines to support the detection of anomalies.”, [0126] “Note that security platform 800 described with respect to FIG. 7 includes machine-learning based systems that may comprise or be part of the machine-learning based network security system 122 shown in FIG. 2A.” [0145] “The security platform 800 can detect anomalies and threats by determining behavior baselines of various entities that are part of, or that interact with, a network (such as users, devices, applications, etc.) and then comparing activities of those entities to their behavior baselines to determine whether the activities are anomalous, or even rise to the level of threat.”),
wherein the at least one machine learning model is i) configured to generate the predictive output based on whether at least a portion of the system data indicates an aspect of an anomalous event as defined by a plurality of intrusion detection models (PRATT [0068] “Similarly, a machine-learning based network security system 122 may process received event data with one or more machine-learning anomaly detection models to detect anomalous activity and generate and output anomaly data based on that activity.”, [0070] “In some embodiments, anomalies and threats detected using a real-time processing path may be employed to automatically trigger an action, such as stopping the intrusion, shutting down network access, locking out users, preventing information theft or information transfer, shutting down software or hardware processes, and the like.”, [0145] “The security platform 800 can detect anomalies and threats by determining behavior baselines of various entities that are part of, or that interact with, a network (such as users, devices, applications, etc.) and then comparing activities of those entities to their behavior baselines to determine whether the activities are anomalous, or even rise to the level of threat.”, [0034] “FIG. 26 illustrates identification of a threat indicator according to another example case based on combining the outputs from different anomaly detection models and anomaly detection rules.”),
and in response to the predictive output, performing at least one response action that reduces vulnerability of the at least one computing environment to anomalous activity in the at least one operation (PRATT [0134] “The output of the analysis module 830 may also automatically trigger actions such as terminating access by a user, terminating file transfer, or any other action that may neutralize the detected threats.”).
Further regarding claim 1, PRATT teaches the limitations addressed above; however, PRATT does not appear to explicitly teach the following limitation, which is taught by JURZAK:
and ii) wherein the at least one machine learning model is trained on historical classifications of data processed using the plurality of intrusion detection models (JURZAK [0061] “In some embodiments, machine-learning-based analysis engine 110 may make classification decisions based on previous classification results produced by anomaly detection models used by different clients managed by a single anomaly detection system including, in some cases, anomaly detection models used by different tenants in a multitenant anomaly detection system.”);
PRATT and JURZAK are from the same field of endeavor as the claimed invention, as both are directed to the automated classification and detection of anomalous events. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify PRATT by incorporating the teachings of JURZAK, such that the machine learning model is trained on historical classifications as claimed. The motivation to combine is to improve the detection and classification of anomalous events (PRATT [AB]; JURZAK [AB]).
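By way of illustration only, and not as part of the record or the cited references, the baseline-comparison mechanism PRATT describes (building an adaptive behavioral baseline per entity and flagging activity that deviates from it, [0145], FIG. 9) can be sketched as follows. All names, values, and thresholds here are hypothetical:

```python
from statistics import mean, stdev

def build_baseline(history):
    """Build a simple behavioral baseline (mean/stdev) from historical
    per-interval activity counts, in the spirit of PRATT's adaptive baselines."""
    return {"mean": mean(history), "stdev": stdev(history)}

def is_anomalous(observation, baseline, z_threshold=3.0):
    """Flag activity whose deviation from the baseline exceeds a threshold."""
    if baseline["stdev"] == 0:
        return observation != baseline["mean"]
    z = abs(observation - baseline["mean"]) / baseline["stdev"]
    return z > z_threshold

# Hypothetical per-hour login counts for one user entity.
logins_per_hour = [4, 5, 6, 5, 4, 6, 5, 5]
baseline = build_baseline(logins_per_hour)
assert not is_anomalous(5, baseline)   # within normal variation
assert is_anomalous(40, baseline)      # large spike -> anomaly
```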
Regarding claim 2, PRATT-JURZAK teaches:
The method of claim 1, wherein: the system data comprises at least one of network data or device data (PRATT [0126] “Note that security platform 800 described with respect to FIG. 7 includes machine-learning based systems that may comprise or be part of the machine-learning based network security system 122 shown in FIG. 2A. Data sources 802 represent various data sources that provide data including event data (e.g. machine data) and other data, to be analyzed for anomalies and threats. The incoming data can include event data represents events that take place in the network environment.”).
Regarding claim 3, PRATT-JURZAK teaches:
The method of claim 1, wherein: performing the at least one response action comprises: generating at least one alert comprising the system data and the at least one anomalous event definition (PRATT [0134] “As an example, a visualization map and a threat alert may be presented to the human operator 852 for review and possible action… The event data that underlies those notifications or that gives rise to the detection made by the analysis module 830 are persistently stored in a database 878. If the human operator decides to investigate a particular notification, he or she may access from database 878 the event data (including raw event data and any associated information that supports the anomalies or threat detection)”, [0235] “Accordingly, anomalies defined in the anomaly data 2282, or threat indicators defined in the threat indicator data 2284, can be incorporated into the graph as vertices (nodes), each linked to one or more of the entities by one or more edges… In a highly simplified network security graph, the user and device are each defined as a node with an edge linking them to represent the association (i.e. user 1 uses device 1).”); and causing provision of the alert to at least one computing device associated with an administrator of the at least one computing environment (PRATT [0134] “These anomalies, threat indicators and threats may be provided to a user interface (UI) system 850 for review by a human operator 852. UI 850 may be provided via any number of applications or other systems. For example, in an embodiment, anomaly, threat, and threat indicator data is output for display via a UI at an enterprise security application (e.g. Splunk® App for Enterprise Security). Note that the enterprise security application may be part of another network system (e.g. a rules-based network security system). As an example, a visualization map and a threat alert may be presented to the human operator 852 for review and possible action.”).
Regarding claim 4, PRATT-JURZAK teaches:
The method of claim 1, wherein: the system data comprises live data collected in real-time from the at least one computing environment (PRATT [0132] “The real-time processing path includes an analysis module 330 that receives data from the distribution block 820. The analysis module 830 analyzes the data in real-time to detect anomalies, threat indicators, and threats.”).
Regarding claim 5, PRATT-JURZAK teaches:
The method of claim 4, wherein: performing the at least one response action comprises suspending or blocking the at least one operation (PRATT [0134] “The output of the analysis module 830 may also automatically trigger actions such as terminating access by a user, terminating file transfer, or any other action that may neutralize the detected threats.”).
Regarding claim 6, PRATT-JURZAK teaches:
The method of claim 1, wherein: performing the at least one response action comprises disabling communication access of at least one computing device to the at least one computing environment (PRATT [0070] “In some embodiments, anomalies and threats detected using a real-time processing path may be employed to automatically trigger an action, such as stopping the intrusion, shutting down network access, locking out users, preventing information theft or information transfer, shutting down software or hardware processes, and the like.”, [0134] “The output of the analysis module 830 may also automatically trigger actions such as terminating access by a user, terminating file transfer, or any other action that may neutralize the detected threats.”).
Regarding claim 7, PRATT-JURZAK teaches:
The method of claim 1, wherein: performing the at least one response action comprises disabling a user account associated with the at least one operation occurring in the at least one computing environment (PRATT [0070] “In some embodiments, anomalies and threats detected using a real-time processing path may be employed to automatically trigger an action, such as stopping the intrusion, shutting down network access, locking out users, preventing information theft or information transfer, shutting down software or hardware processes, and the like.” One of ordinary skill in the art would appreciate that “shutting down network access, locking out users, preventing information theft or information transfer” teaches or at least suggests disabling a user account).
Regarding claim 8, PRATT-JURZAK teaches:
The method of claim 1, wherein: performing the at least one response action comprises retraining the at least one machine learning model based at least in part on the system data (PRATT [0204] “The ML-based CEP engine trains and retrains (e.g., updates) the machine learning models in real-time and applies (e.g., during the model deliberation phase) the machine learning models in real-time. Parallelization of training and deliberation enables the ML-based CEP engine to utilize machine learning models without preventing or hindering the formation of real-time conclusions.”).
Regarding claim 9, PRATT-JURZAK teaches:
The method of claim 1, further comprising: in response to the predictive output failing to match a respective anomalous event threshold for any of the plurality of anomalous event definitions (PRATT [0269] “For example, anomaly 1 (detected because a particular user accessed a computing device outside his department) can be fed into a machine-learning anomaly detection model that may apply user behavioral analysis (e.g. based on a particular user's behavioral baseline) to detect a second anomaly.”, [0149] “In certain embodiments, anomalies and threats are detected by comparing events against the baseline profile for an entity to which the event relates (e.g., a user, an application, a network node or group of nodes, a software system, data files, etc.).”): generating a new anomalous event definition based at least in part on the system data and at least one classification of the system data from the plurality of intrusion detection models (PRATT [0269] “This anomaly detected using the user-specified anomaly detection rule can be fed as an input (including underlying events or other events or anomalies) in a machine-learning based anomaly detection model. For example, anomaly 1 (detected because a particular user accessed a computing device outside his department) can be fed into a machine-learning anomaly detection model that may apply user behavioral analysis (e.g. based on a particular user's behavioral baseline) to detect a second anomaly. In some embodiments, if the second anomaly is detected, a threat indicator is identified.”); and storing the new anomalous event definition in a data store that comprises the plurality of anomalous event definitions (PRATT [0234] “In some embodiments the threat indicator data 2284 is stored in a data structure in the form of a threat indicator graph.
In such embodiments, the threat indicator graph may include a plurality of vertices (nodes) representing entities associated with the information technology environment and a plurality of edges, each of the plurality of edges representing a threat indicator linking two of the plurality of vertices (nodes). In other embodiments, the threat indicator data 2284 is instead stored in a relational database or a key-store database.”).
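The claim 9 mapping above (deriving and persisting a new anomalous event definition when no existing threshold is matched) can be illustrated with the following hypothetical sketch. The data-store layout, field names, and values are assumptions for illustration, not drawn from PRATT or JURZAK:

```python
# Hypothetical store of existing anomalous event definitions.
definitions = {
    "excessive_download": {"feature": "bytes_out", "threshold": 1_000_000},
}

def handle_unmatched(system_data, classifications, store):
    """When no existing definition's threshold is met, create a new
    definition seeded from the observed system data and the intrusion-model
    classifications, then persist it alongside the existing definitions."""
    name = "auto_" + "_".join(sorted(classifications))
    store[name] = {
        "feature": system_data["feature"],
        "threshold": system_data["observed_value"],
        "source_classifications": sorted(classifications),
    }
    return name

new_name = handle_unmatched(
    {"feature": "login_errors", "observed_value": 25},
    {"credential_abuse"},
    definitions,
)
assert new_name in definitions            # new definition stored
assert "excessive_download" in definitions  # existing definitions retained
```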
Regarding claim 10, claim 10 recites similar limitations as claim 1, but for the recitation in the form of an apparatus. Accordingly, claim 10 is rejected for similar reasoning and rationale as claim 1. PRATT-JURZAK teaches:
An apparatus comprising at least one processor and at least one non-transitory memory having computer-coded instructions stored thereon (JURZAK [0011] “In one embodiment, a disclosed machine-learning-based analysis engine of an anomaly detection system includes a processor, and a memory storing program instructions. When executed by the processor, the program instructions cause”, [0038] “method 400 illustrated in FIG. 4, and method 500 illustrated in FIG. 5 may be performed by program instructions 215 executing on electronic processor 230 of machine-learning-based analysis engine 110. In some embodiments, program instructions 215 may be stored in another type of non-volatile memory, such as a hard disk”).
Regarding claim 11, PRATT-JURZAK teaches:
The apparatus of claim 10, wherein: the computer-coded instructions, in execution with the at least one processor, further cause the apparatus to perform the at least one response action in response to determining the predictive output meets a respective anomalous event threshold for the at least one anomalous event definition (PRATT [0149] “In certain embodiments, anomalies and threats are detected by comparing events against the baseline profile for an entity to which the event relates (e.g., a user, an application, a network node or group of nodes, a software system, data files, etc.). If the variation is more than insignificant, the threshold for which may be dynamically or statically defined, an anomaly may be considered to be detected.”, [0134] “The output of the analysis module 830 may also automatically trigger actions such as terminating access by a user, terminating file transfer, or any other action that may neutralize the detected threats.”).
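The threshold-gated response of claim 11 (acting only when the predictive output meets the definition's anomalous event threshold, cf. PRATT [0149] and the automatically triggered actions of [0134]) can be illustrated as follows. The action names and threshold values are hypothetical:

```python
# Hypothetical catalog of response actions (cf. PRATT [0134]: terminating
# access by a user, terminating file transfer, etc.).
RESPONSE_ACTIONS = {
    "terminate_access": lambda entity: f"access terminated for {entity}",
    "terminate_transfer": lambda entity: f"transfer stopped for {entity}",
}

def respond_if_met(predictive_output, definition, entity):
    """Perform a response action only when the predictive output meets the
    definition's anomalous event threshold; otherwise take no action."""
    if predictive_output >= definition["threshold"]:
        return RESPONSE_ACTIONS[definition["action"]](entity)
    return None

defn = {"threshold": 0.8, "action": "terminate_access"}
assert respond_if_met(0.95, defn, "user_1") == "access terminated for user_1"
assert respond_if_met(0.40, defn, "user_1") is None  # below threshold
```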
Regarding claim 12, PRATT-JURZAK teaches:
The apparatus of claim 10, wherein: each of the plurality of anomalous event definitions is associated with at least one historical data pattern (PRATT [0064] “In this description, an “anomaly” is defined as a detected or identified variation from an expected pattern of activity on the part of an entity associated with an information technology environment, which may or may not constitute a threat. This entity activity that departs form expected patterns of activity can be referred to as “anomalous activity.” For example, an anomaly may include an event or set of events of possible concern that may be actionable or warrant further investigation.”); and a first model of the plurality of intrusion detection models is configured to: generate an association between the at least one operation and at least one historical data pattern based at least in part on a comparison of the system data to the respective historical data patterns, wherein the aspect of the anomalous event is defined based at least in part on the association between the at least one operation and the at least one historical data pattern (PRATT [0064] “In this description, an “anomaly” is defined as a detected or identified variation from an expected pattern of activity on the part of an entity associated with an information technology environment, which may or may not constitute a threat. This entity activity that departs form expected patterns of activity can be referred to as “anomalous activity.” For example, an anomaly may include an event or set of events of possible concern that may be actionable or warrant further investigation. Examples of anomalies include alarms, blacklisted applications/domains/IP addresses, domain name anomalies, excessive uploads or downloads, website attacks, land speed violations, machine generated beacons, login errors, multiple outgoing connections, unusual activity time/sequence/file access/network activity, etc.”).
Regarding claim 13, PRATT-JURZAK teaches:
The apparatus of claim 12, wherein: a second model (PRATT [0269] “FIGS. 28-29 illustrate an example case for identifying threat indicators based on combining the outputs (i.e. detected anomalies) from different combinations of anomaly models…”) of the plurality of intrusion detection models is configured to associate the at least one operation with at least one of a plurality of intrusion phases determined based at least in part on the system data (PRATT [0064] “For example, an anomaly may include an event or set of events of possible concern that may be actionable or warrant further investigation. Examples of anomalies include… excessive uploads or downloads, website attacks, land speed violations, machine generated beacons, login errors, multiple outgoing connections, unusual activity time/sequence/file access/network activity, etc.”, [0261] “FIG. 24 illustrates an example case for identifying threat indicators based on duration of detected anomalous activity. Anomalies may be detected over a period of time, for example, as shown in FIG. 24, anomalies 1 through M are detected at time periods t1 through tm. This use case assumes that a temporal correlation among detected anomalies is indicative of suspicious activity. For example, a high number of anomalies occurring in a short time period may be indicative of a concentrated threat to the security of the network.” The at least one operation (anomaly event) is associated with at least one of a plurality of intrusion phases (detected durations of anomalous activity) determined based at least in part on the system data), wherein the aspect of the anomalous event is further defined based at least in part on the at least one of the plurality of intrusion phases (PRATT [0261] “Anomalies may be detected over a period of time, for example, as shown in FIG. 24, anomalies 1 through M are detected at time periods t1 through tm.
This use case assumes that a temporal correlation among detected anomalies is indicative of suspicious activity. For example, a high number of anomalies occurring in a short time period may be indicative of a concentrated threat to the security of the network.” [0070] “In some embodiments, anomalies and threats detected using a real-time processing path may be employed to automatically trigger an action, such as stopping the intrusion,” [0263] “In some embodiments, the use case described in FIG. 24 involves a process that begins with determining a number of anomalies that have substantially matching profiles or footprints (e.g. as described in the previous use case) over a time period. These substantially matching anomalies may indicate a pattern of anomalous activity that has duration.”).
Regarding claim 14, PRATT-JURZAK teaches:
The apparatus of claim 13, wherein: a third model of the plurality of intrusion detection models is configured to generate an event data object representative of the at least one operation based at least in part on the system data (PRATT [0238] “A plurality of anomaly detection models instances may be instantiated for each entity associated with the information technology environment. Each model instance may be of a particular model type configured to detect a particular category of anomalies based on received events.”, [0231] “As shown in FIG. 18 at step 2202, events 2280 are processed through one or more anomaly detection models 1 through N (e.g. machine learning models as discussed above) as well as one or more anomaly detection rules 1 through P.”, [0238] “According to an embodiment, an anomaly detection model includes at least model processing logic defining a process for assigning an anomaly score to the processed events 2280 and a model state defining a set of parameters for applying the model processing logic.”); and the aspect of the anomalous event is further defined based at least in part on respective comparisons between the event data object and the plurality of anomalous event definitions (PRATT [0239] “Calculation of the anomaly score is done by the processing logic contained within the anomaly detection model and represents a quantification of a degree to which the processed events are associated with anomalous activity on the network. In some embodiments, the anomaly score is a value in a specified range. For example, the resulting anomaly score may be a value between 0 and 10, with 0 being the least anomalous and 10 being the most anomalous.”, [0149] “In certain embodiments, anomalies and threats are detected by comparing events against the baseline profile for an entity to which the event relates (e.g., a user, an application, a network node or group of nodes, a software system, data files, etc.).
If the variation is more than insignificant, the threshold for which may be dynamically or statically defined, an anomaly may be considered to be detected. The comparison may be based on any of various techniques, for example, time-series analysis (e.g., number of log-ins per hour), machine learning, or graphical analysis (e.g., in the case of security graphs or security graph projections). Preferably, this detection is performed by various machine learning models.”).
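PRATT [0239] describes model processing logic that assigns an anomaly score in a specified range (e.g., 0 to 10) to processed events, with a model state supplying the parameters. A purely illustrative sketch of that scoring scheme follows; the parameter names and scaling are invented for illustration and do not appear in the reference:

```python
def anomaly_score(event, model_state):
    """Assign a 0-10 anomaly score to an event: 0 is least anomalous,
    10 is most anomalous (cf. PRATT [0239]). The processing logic
    quantifies deviation; the model state supplies the parameters."""
    expected = model_state["expected_rate"]
    deviation = abs(event["rate"] - expected) / max(expected, 1)
    return min(10.0, 10.0 * deviation / model_state["scale"])

# Hypothetical model state for one entity's baseline.
state = {"expected_rate": 10, "scale": 5.0}
low = anomaly_score({"rate": 11}, state)    # near the baseline
high = anomaly_score({"rate": 300}, state)  # far from the baseline, capped
assert 0.0 <= low < high <= 10.0
```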
Regarding claim 15, PRATT-JURZAK teaches:
The apparatus of claim 10, wherein: the computer-coded instructions, in execution with the at least one processor, further cause the apparatus to, in performance of the at least one response action: generate at least one security protocol based at least in part on the at least one anomalous event definition; and cause provision of the at least one security protocol to at least one computing device associated with an administrator of the at least one computing environment (PRATT [0070] “In some embodiments, anomalies and threats detected using a real-time processing path may be employed to automatically trigger an action, such as stopping the intrusion, shutting down network access, locking out users, preventing information theft or information transfer, shutting down software or hardware processes, and the like. In certain embodiments, the discovered anomalies and threats may be presented (e.g. via GUI 162) to a network operator (e.g., a network security administrator or analyst) for decision.”, [0238] “According to some embodiments, the security platform includes anomaly detection models configured to detect a number of different kinds of anomalous activity, such as lateral movement, blacklisted entities, malware communications, rare events, and beacon activity.” The locking out of users or preventing information transfer by generating or adding to a blacklist is mapped to “in performance of the at least one response action: generate at least one security protocol based at least in part on the at least one anomalous event definition; and cause provision of the at least one security protocol to at least one computing device associated with an administrator of the at least one computing environment”).
Regarding claim 16, PRATT-JURZAK teaches:
The apparatus of claim 15, wherein: the at least one security protocol defines at least one adjustment to account authentication policies; and the at least one adjustment indicates an implementation of at least one of account lockout protocol, multifactor authentication protocol, or credential management protocol (PRATT [0070] “In some embodiments, anomalies and threats detected using a real-time processing path may be employed to automatically trigger an action, such as stopping the intrusion, shutting down network access, locking out users, preventing information theft or information transfer, shutting down software or hardware processes, and the like.”).
Regarding claim 17, PRATT-JURZAK teaches:
The apparatus of claim 15, wherein: the at least one security protocol defines at least one adjustment to subsequent real-time monitoring of operations occurring on the at least one computing environment (PRATT [0148] “Baseline profiles can be continuously updated (whether in real-time or in batch according to a predefined schedule) in response to received event data, i.e., they can be updated dynamically or adaptively based on event data. If the human user 1004 begins to access source code server 1010 more frequently in support of his work, for example, and his accessing of source code server 1010 has been judged to be legitimate by the security platform 800…, his baseline profile 1014 is updated to reflect the updated “normal” behavior for the human user 1004.”, [0149] “In certain embodiments, anomalies and threats are detected by comparing events against the baseline profile for an entity to which the event relates (e.g., a user, an application, a network node or group of nodes, a software system, data files, etc.). If the variation is more than insignificant, the threshold for which may be dynamically or statically defined, an anomaly may be considered to be detected.”); and the at least one adjustment is associated with at least one of application log monitoring, command monitoring, or user account monitoring (PRATT [0149] “In certain embodiments, anomalies and threats are detected by comparing events against the baseline profile for an entity to which the event relates (e.g., a user, an application, a network node or group of nodes, a software system, data files, etc.).
If the variation is more than insignificant, the threshold for which may be dynamically or statically defined, an anomaly may be considered to be detected.”, [0240] “Process 2300 continues at step 2308 with outputting an indicator of a particular anomaly if the anomaly score satisfies a specified criterion (e.g., exceeds a threshold) … In some embodiments, the criterion (e.g., threshold) is dynamic and changes based on situational factors. The situational factors may include volume of events, presence or absence of pre-conditional events, user configurations, and volume of detected anomalies.”).
Regarding claim 18, PRATT-JURZAK teaches:
The apparatus of claim 15, wherein: the at least one security protocol defines at least one data management process to reduce vulnerability of the at least one computing environment to unauthorized data manipulation; and the at least one data management process comprises at least one of data backup, data modification monitoring, or data encryption (PRATT [0070] “In some embodiments, anomalies and threats detected using a real-time processing path may be employed to automatically trigger an action, such as stopping the intrusion, shutting down network access, locking out users, preventing information theft or information transfer, shutting down software or hardware processes, and the like.” One of ordinary skill in the art would appreciate that actions preventing information theft teach or at least suggest data encryption.).
Regarding claim 19, PRATT-JURZAK teaches:
The apparatus of claim 15, wherein: the at least one security protocol defines at least one communication control process to reduce vulnerability of the at least one computing environment to network intrusion; and the at least one communication control process comprises at least one of signature verification, communication content filtering, or network traffic flow monitoring (PRATT [0063] “The network security systems described herein can be deployed at any of various locations in a network environment … that can monitor or control the network traffic within the private intranet.”, [0089] “In an embodiment, the monitoring component 112 may monitor one or more aspects of network traffic sent or received by a client application 110.”, [0134] “The output of the analysis module 830 may also automatically trigger actions such as terminating access by a user, terminating file transfer, or any other action that may neutralize the detected threats.”).
Regarding claim 20, claim 20 recites similar limitations as claim 1, but for the recitation in the form of a computer program product. Accordingly, claim 20 is rejected for similar reasoning and rationale as claim 1. PRATT-JURZAK teaches:
A computer program product comprising at least one non-transitory computer-readable storage medium having computer program code stored thereon (PRATT [0295] “Embodiments of the techniques introduced here may be implemented, at least in part, by a computer program product which may include a non-transitory machine-readable medium having stored thereon instructions that may be used to program/configure a computer or other electronic device to perform some or all of the operations described above.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kamryn Gillespie whose telephone number is 703-756-5498. The examiner can normally be reached on Monday through Thursday from 9am to 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Linglan Edwards can be reached on (571) 270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pairdirect.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.J.G./Examiner, Art Unit 2408
/LINGLAN EDWARDS/Supervisory Patent Examiner, Art Unit 2408