DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the application filed on 10/19/2024.
Claims 1-20 are currently pending and have been examined.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: 104. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 1921d, 1932, 1602, 1603, and 1607. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
Paragraph [0073] recites “the pro-processing subsystem” when it appears it should recite “the pre-processing subsystem”.
Appropriate correction is required.
Claim Objections
Claims 1-3, 8-10, and 15-17 are objected to because of the following informalities:
Claim 1 line 9 recites “a current event” when it appears it should recite “a current audit event” in order to provide antecedent basis for “said current audit event” in lines 10-11 of claim 1.
Additionally regarding claim 1, line 13 recites “said current event” when it appears it should recite “said current audit event”.
Claim 2 line 5 recites “said current event” when it appears it should recite “said current audit event”.
Claim 3 lines 1-2 recite “said one or more machine learning classifications” when it appears it should recite “said one or more machine learning classification[[s]] engines” to maintain proper antecedent basis.
Claim 8 is objected to for similar reasoning as discussed above regarding claim 1.
Claim 9 is objected to for similar reasoning as discussed above regarding claim 2.
Claim 10 is objected to for similar reasoning as discussed above regarding claim 3.
Claim 15 is objected to for similar reasoning as discussed above regarding claim 1.
Claim 16 is objected to for similar reasoning as discussed above regarding claim 2.
Claim 17 is objected to for similar reasoning as discussed above regarding claim 3.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite determining whether an event is anomalous.
As an initial matter, claims 1-7 fall into at least the process category of statutory subject matter. Claims 8-14 fall into at least the machine category of statutory subject matter. Finally, claims 15-20 fall into at least the manufacture category of statutory subject matter. Therefore, all claims fall into at least one of the statutory categories. Eligibility analysis proceeds to Step 2A.
In claim 1, the limitation of “A method of detecting anomalous behaviour in a network, the method comprising: in a pre-processing phase, classifying a population set to discover context-specific classes via a clustering analysis of each sub-population within said population set and storing said classification in a population data store”, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “a network,” nothing in the claim element precludes the step from practically being performed in the mind. Similarly, the limitations of “receiving a current event; processing, by said plurality of machine learning classification engines, said current audit event to obtain an anomaly score and a confidence score from each respective classification engine; determine whether said current event is an anomalous event based on said respective anomaly scores and confidence scores”, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Additionally, claim 1 recites the concept of identifying risky anomalous actions in a network, which is a certain method of organizing human activity, namely fundamental economic principles or practices including mitigating risk. A method of detecting anomalous behaviour, the method comprising: in a pre-processing phase, classifying a population set to discover context-specific classes via a clustering analysis of each sub-population within said population set and storing said classification in a population data store; receiving a current event; processing said current audit event to obtain an anomaly score and a confidence score from each respective classification engine; determine whether said current event is an anomalous event based on said respective anomaly scores and confidence scores all, as a whole, fall under the category of fundamental economic principles or practices including mitigating risk. The claim falls into the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Mere recitation of generic computer components does not remove the claim from this grouping. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a network, training a plurality of machine learning classification engines, and an application. The recited additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a network, training a plurality of machine learning classification engines, and an application amount to no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 2 further limits the abstract idea of claim 1 while introducing the additional element of the plurality of machine learning classification models comprising one or more ensemble models. The claim does not integrate the abstract idea into a practical application because the element of the plurality of machine learning classification models comprising one or more ensemble models is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 1 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
Claim 3 further limits the abstract idea of claim 1 without adding any new additional elements. Therefore, by the analysis of claim 1 above, this claim neither integrates the abstract idea into a practical application nor amounts to significantly more than the abstract idea. The claim is not patent eligible.
Claim 4 further limits the abstract idea of claim 1 while introducing the additional element of the modification of machine learning classification models in response to assessment accuracy. The claim does not integrate the abstract idea into a practical application because the element of the modification of machine learning classification models in response to assessment accuracy is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 1 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
Claim 5 further limits the abstract idea of claim 1 without adding any new additional elements. Therefore, by the analysis of claim 1 above, this claim neither integrates the abstract idea into a practical application nor amounts to significantly more than the abstract idea. The claim is not patent eligible.
Claim 6 further limits the abstract idea of claim 1 while introducing the additional element of the machine learning classification models operating in parallel and independently from each other. The claim does not integrate the abstract idea into a practical application because the element of the machine learning classification models operating in parallel and independently from each other is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 1 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
Claim 7 further limits the abstract idea of claim 4 while introducing the additional element of the parallel and separately executed feedback loops for each of the machine learning classification models. The claim does not integrate the abstract idea into a practical application because the element of the parallel and separately executed feedback loops for each of the machine learning classification models is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 4 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
In claim 8, the limitation of “A system for detecting anomalous behaviour in a network, the system comprising: one or more processors; a non-transitory computer-readable storage medium having stored thereon processor-executable instructions that, when executed by said one or more processors, cause the one or more processors to perform a method comprising: in a pre-processing phase, classifying a population set to discover context-specific classes via a clustering analysis of each sub-population within said population set and storing said classification in a population data store”, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “a system”, “a network”, “one or more processors”, and “a non-transitory computer-readable storage medium having stored thereon processor-executable instructions that, when executed by said one or more processors, cause the one or more processors to perform a method,” nothing in the claim element precludes the step from practically being performed in the mind. Similarly, the limitations of “receiving a current event; processing, by said plurality of machine learning classification engines, said current audit event to obtain an anomaly score and a confidence score from each respective classification engine; determine whether said current event is an anomalous event based on said respective anomaly scores and confidence scores”, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Additionally, claim 8 recites the concept of identifying risky anomalous actions in a network, which is a certain method of organizing human activity, namely fundamental economic principles or practices including mitigating risk. Detecting anomalous behaviour by performing a method comprising: in a pre-processing phase, classifying a population set to discover context-specific classes via a clustering analysis of each sub-population within said population set and storing said classification in a population data store; receiving a current event; processing said current audit event to obtain an anomaly score and a confidence score from each respective classification engine; determine whether said current event is an anomalous event based on said respective anomaly scores and confidence scores all, as a whole, fall under the category of fundamental economic principles or practices including mitigating risk. The claim falls into the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Mere recitation of generic computer components does not remove the claim from this grouping. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a system, a network, one or more processors, a non-transitory computer-readable storage medium having stored thereon processor-executable instructions that, when executed by said one or more processors, cause the one or more processors to perform a method, training a plurality of machine learning classification engines, and an application. The recited additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a system, a network, one or more processors, a non-transitory computer-readable storage medium having stored thereon processor-executable instructions that, when executed by said one or more processors, cause the one or more processors to perform a method, training a plurality of machine learning classification engines, and an application amount to no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 9 further limits the abstract idea of claim 8 while introducing the additional element of the plurality of machine learning classification models comprising one or more ensemble models. The claim does not integrate the abstract idea into a practical application because the element of the plurality of machine learning classification models comprising one or more ensemble models is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 8 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
Claim 10 further limits the abstract idea of claim 8 without adding any new additional elements. Therefore, by the analysis of claim 8 above, this claim neither integrates the abstract idea into a practical application nor amounts to significantly more than the abstract idea. The claim is not patent eligible.
Claim 11 further limits the abstract idea of claim 8 while introducing the additional element of the modification of machine learning classification models in response to assessment accuracy. The claim does not integrate the abstract idea into a practical application because the element of the modification of machine learning classification models in response to assessment accuracy is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 8 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
Claim 12 further limits the abstract idea of claim 8 without adding any new additional elements. Therefore, by the analysis of claim 8 above, this claim neither integrates the abstract idea into a practical application nor amounts to significantly more than the abstract idea. The claim is not patent eligible.
Claim 13 further limits the abstract idea of claim 8 while introducing the additional element of the machine learning classification models operating in parallel and independently from each other. The claim does not integrate the abstract idea into a practical application because the element of the machine learning classification models operating in parallel and independently from each other is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 8 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
Claim 14 further limits the abstract idea of claim 11 while introducing the additional element of the parallel and separately executed feedback loops for each of the machine learning classification models. The claim does not integrate the abstract idea into a practical application because the element of the parallel and separately executed feedback loops for each of the machine learning classification models is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 11 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
In claim 15, the limitation of “A non-transitory computer-readable storage medium having stored thereon processor-executable instructions that, when executed by one or more processors, cause the one or more processors to perform a method comprising: in a pre-processing phase, classifying a population set to discover context-specific classes via a clustering analysis of each sub-population within said population set and storing said classification in a population data store”, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “a non-transitory computer-readable storage medium having stored thereon processor-executable instructions that, when executed by one or more processors, cause the one or more processors to perform a method” and “one or more processors,” nothing in the claim element precludes the step from practically being performed in the mind. Similarly, the limitations of “receiving a current event; processing, by said plurality of machine learning classification engines, said current audit event to obtain an anomaly score and a confidence score from each respective classification engine; determine whether said current event is an anomalous event based on said respective anomaly scores and confidence scores”, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Additionally, claim 15 recites the concept of identifying risky anomalous actions in a network, which is a certain method of organizing human activity, namely fundamental economic principles or practices including mitigating risk. Performing a method comprising: in a pre-processing phase, classifying a population set to discover context-specific classes via a clustering analysis of each sub-population within said population set and storing said classification in a population data store; receiving a current event; processing said current audit event to obtain an anomaly score and a confidence score from each respective classification engine; determine whether said current event is an anomalous event based on said respective anomaly scores and confidence scores all, as a whole, fall under the category of fundamental economic principles or practices including mitigating risk. The claim falls into the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Mere recitation of generic computer components does not remove the claim from this grouping. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of one or more processors, a non-transitory computer-readable storage medium having stored thereon processor-executable instructions that, when executed by said one or more processors, cause the one or more processors to perform a method, training a plurality of machine learning classification engines, and an application. The recited additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of one or more processors, a non-transitory computer-readable storage medium having stored thereon processor-executable instructions that, when executed by said one or more processors, cause the one or more processors to perform a method, training a plurality of machine learning classification engines, and an application amount to no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements is also no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Claim 16 further limits the abstract idea of claim 15 while introducing the additional element of the plurality of machine learning classification models comprising one or more ensemble models. The claim does not integrate the abstract idea into a practical application because the element of the plurality of machine learning classification models comprising one or more ensemble models is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 15 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
Claim 17 further limits the abstract idea of claim 15 without adding any new additional elements. Therefore, by the analysis of claim 15 above, this claim neither integrates the abstract idea into a practical application nor amounts to significantly more than the abstract idea. The claim is not patent eligible.
Claim 18 further limits the abstract idea of claim 15 while introducing the additional element of the modification of machine learning classification models in response to assessment accuracy. The claim does not integrate the abstract idea into a practical application because the element of the modification of machine learning classification models in response to assessment accuracy is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 15 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
Claim 19 further limits the abstract idea of claim 15 while introducing the additional element of the machine learning classification models operating in parallel and independently from each other. The claim does not integrate the abstract idea into a practical application because the element of the machine learning classification models operating in parallel and independently from each other is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 15 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
Claim 20 further limits the abstract idea of claim 18 while introducing the additional element of the parallel and separately executed feedback loops for each of the machine learning classification models. The claim does not integrate the abstract idea into a practical application because the element of the parallel and separately executed feedback loops for each of the machine learning classification models is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. Adding this new additional element to the additional elements from claim 18 still amounts to no more than mere instructions to apply the exception using generic computer components and generic machine learning. The claim also does not amount to significantly more than the abstract idea because mere instructions to apply an exception using generic computer components and generic machine learning cannot provide an inventive concept. The claim is not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5-6, 8, 10, 12-13, 15, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Torres Dho et al. (U.S. Pre-Grant Publication No. 2023/0319083, hereafter known as Torres Dho) in view of Salunke et al. (U.S. Pre-Grant Publication No. 2020/0351283, hereafter known as Salunke) and Adamson et al. (U.S. Patent No. 12,341,797; hereafter known as Adamson).
Regarding claim 1, Torres Dho teaches:
A method of detecting anomalous behaviour in a network, the method comprising (see Fig. 2 and [0039]-[0051] for the overall method. See Fig. 1 and [0021]-[0022] for the host devices being monitored for anomalies being a part of a computer network)
receiving a current event (see steps 202 and 206 and [0040] "at operation 202, the process identifies current values for a set of metrics from a set of hosts. For example, the process may scan log files received from hosts 102a-n for entries with the most recent timestamp. As another example, the process may detect and record the most recent samples streamed from hosts 102a-n" and [0042] "At operation 206, the process generates a set of one or more current point-in-time value data frames. A data frame is a data structure, which may be implemented as a table or multidimensional array for storing the point-in-time values. With current point-in-time value data frames, a column in the data structure may correspond to different metrics, such as CPU utilization rates, memory throughput, active user sessions, etc.")
processing, by said plurality of machine learning classification engines, said current (see step 212 and [0045] "At operation 212, the process applies a set of outlier detection ML models to each data frame...Example ML algorithms/models include angle-based outlier detection (ABOD), clustering models (e.g., k-means clustering, k-mode clustering), k-nearest neighbors (KNN), principal component analysis (PCA), and support vector machines (SVM)...The ML model may classify the behavior of a host as an outlier or non-outlier based on the set of point-in-time values for the host relative to patterns in point-in-time values for other hosts" and [0046] "each of the applied ML models outputs a per-host value or score for each data frame that indicates a classification and/or probability that the host's behavior is an outlier...Probabilistic models may assign values between 0-1 based on the probability that the value is an outlier with 1 indicating a 100% probability, 0 indicating a 0% percent probability" for processing a current event state with a plurality of machine learning models and obtaining a probability (anomaly score) that the current event is anomalous. See [0048] "the score may be computed as a weighted sum, where the output of different models are weighted differently and summed together. The weight may correspond to the contribution of the model to the score, where ML model outputs weighted more highly contribute more to than ML model outputs with lower weights. Weighting may be set based on model reliability" for each model also obtaining a weight based on reliability (confidence score) that will affect how much the results of an individual model impact the overall score for the current event)
determine whether said current event is an anomalous event based on said respective anomaly scores and confidence scores (see steps 214 and 216 and [0047] "At operation 214, the process generates a set of anomaly scores for each host based on the output of the machine learning algorithms", [0048] for the weighting of contributions of the anomaly probabilities of each model based on the reliability of each particular model to obtain the anomaly score for a host, and [0050] "At operation 216, the process determines whether any hosts have an anomaly score satisfying a threshold value" for determining whether a current state is anomalous based on the overall score for the current events surpassing a threshold)
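The weighted scoring and thresholding cited above (Torres Dho [0046]-[0050]) can be sketched as a weighted sum of per-model outlier probabilities compared against a threshold; the function name `anomaly_score` and the example probabilities, weights, and threshold below are hypothetical values chosen for illustration only, not taken from the reference:

```python
# Illustrative sketch of the weighted anomaly scoring described in
# Torres Dho [0046]-[0050]: each ML model outputs a probability (0-1)
# that the current event is an outlier, each model carries a
# reliability weight, and the weighted sum is compared to a threshold.
# All concrete numbers here are hypothetical.

def anomaly_score(probabilities, weights):
    """Weighted sum of per-model outlier probabilities, normalized
    by the total weight."""
    total = sum(weights)
    return sum(p * w for p, w in zip(probabilities, weights)) / total

# Hypothetical outputs from models such as ABOD, KNN, PCA, SVM.
probs = [0.9, 0.7, 0.2]
weights = [0.5, 0.3, 0.2]   # set based on model reliability

score = anomaly_score(probs, weights)
is_anomalous = score >= 0.6  # threshold check, per operation 216

print(round(score, 2), is_anomalous)  # → 0.7 True
```

The normalization by `sum(weights)` is what lets more reliable models contribute proportionally more to the final score.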
While Torres Dho teaches a plurality of machine learning models processing a current event in order to determine whether the event was anomalous, Torres Dho does not explicitly teach the pre-processing phase of classifying a population set to discover context-specific classes via clustering analysis, storing the classification in a population data store, and using the population data store and compliance and audit events to train the machine learning models. Torres Dho also does not explicitly teach the current event being processed as an audit event. Salunke teaches:
in a pre-processing phase, classifying a population set to discover context-specific classes via a clustering analysis of each sub-population within said population set and storing said classification in a population data store (see [0079] "If there are no user-set labels in the data, then the training process automatically labels the training data using an unsupervised approach (operation 206). Techniques for automatically labeling data are described further below in Section 3.2, titled “Biased Sampling and Automatic Labeling.” as well as Fig. 3 and [0093] "The training process next determines the distribution of data relative to the grid cells (operation 304). In some embodiments, the data points are stored in a balanced tree, such as a k-dimensional tree. The training process may query the balanced tree for which data points belong to each grid cell. Each grid cell may be labeled with integers that are used to identify the membership of each data point. The result is a set of clusters, where each cluster is being represented by a grid cell of zero or more data points from the training data" and [0095] "the training process labels the biased samples as unanomalous while operating in the unsupervised mode (operation 308). The unanomalous label may be represented as “+1” corresponding to a positive class within the training data" and [0097] "the training process labels the random data as anomalous (operation 312). The anomalous label may be represented as “−1” corresponding to a negative class within the training data" for clustering a population of training data into anomalous and non-anomalous classes. See Fig. 4 for the clustering being performed in the context of Hits/Minute and Processing Time. See [0066] for the data repository holding the data within the system)
training a plurality of machine learning classification engines based on said population data store and on received (see [0098] "The training process then trains the anomaly detection model as a function of the labeled samples from the training data and the random data (operation 314)" and [0070] for the training data comprising events in the computing network)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the clustering and training processes of Salunke into the system of Torres Dho. As Salunke states in [0090] regarding the clustering and training process cited above, “the unsupervised training process involves biased sampling of the normalized data. Biased sampling allows the anomaly detection model to learn over all of the data and not just the areas where populations are highly concentrated. As a result, false flagging of behavior that is less frequent but unanomalous may be mitigated”. In other words, by performing unsupervised training of the models in the manner taught by Salunke, the machine learning models of the combined system would be less likely to incorrectly flag appropriate but infrequent events as anomalous. Torres Dho explicitly considers the models using unsupervised learning to determine anomalies in [0045], so the unsupervised methods of Salunke could be integrated to minimize the likelihood of false positives when determining anomalous behavior in a network.
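Salunke's grid-based clustering and automatic labeling ([0093]-[0097]) can be sketched, in heavily simplified form, as follows; `grid_cell`, `auto_label`, the cell size, and the sample data are all illustrative inventions of this sketch, not names from the reference (which uses a k-dimensional tree and biased sampling over normalized data):

```python
# Simplified sketch of the automatic labeling Salunke [0093]-[0097]
# describes: group normalized training points into grid cells
# (clusters), label sampled training points as unanomalous (+1), and
# label uniformly random points as anomalous (-1). All names, the
# cell size, and the data points are hypothetical.

import random

def grid_cell(point, cell_size=0.25):
    """Map a normalized 2-D point (e.g. hits/min vs. processing
    time, per Salunke Fig. 4) to its containing grid cell."""
    x, y = point
    return (int(x / cell_size), int(y / cell_size))

def auto_label(training_points, n_random=4, seed=0):
    rng = random.Random(seed)
    # Cluster: group the observed training points by grid cell.
    clusters = {}
    for p in training_points:
        clusters.setdefault(grid_cell(p), []).append(p)
    # Observed (biased-sampled) points get the positive class (+1).
    labeled = [(p, +1) for p in training_points]
    # Uniformly random points get the negative class (-1).
    labeled += [((rng.random(), rng.random()), -1) for _ in range(n_random)]
    return clusters, labeled

clusters, labeled = auto_label([(0.1, 0.1), (0.12, 0.14), (0.8, 0.9)])
print(len(clusters), sum(1 for _, y in labeled if y == -1))  # → 2 4
```

An anomaly detection model would then be trained on the labeled points, as in Salunke operation 314.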
The combination of Torres Dho and Salunke still does not explicitly teach the events being used to train the machine learning models as “compliance and audit” events and the event being processed by the machine learning models as an “audit” event. Adamson teaches the events being tracked by the system as compliance and audit events (see Col. 5 line 63 thru Col. 6 line 3 “Data processing resources 20 may be configured to perform various data processing operations with respect to data ingested by data ingestion resources 18, including data ingested and stored in data store 30. For example, data processing resources 20 may be configured to perform one or more data security monitoring and/or remediation operations, compliance monitoring operations, anomaly detection operations” and Col. 29 lines 20-27 “Two example kinds of anomalies that can be detected by data platform 12 include security anomalies (e.g., a user or process behaving in an unexpected manner) and devops/root cause anomalies (e.g., network congestion, application failure, etc.). Detected anomalies can be recorded and surfaced (e.g., to administrators, auditors, etc.), such as through alerts which are generated at 304 based on anomaly detection” for data being processed in the system being compliance and audit data. In combination with Torres Dho and Salunke, the compliance and audit data are processed by the machine learning models and used to train the machine learning models).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the data being used to train machine learning models and the data being processed by machine learning models to look for anomalies being compliance and audit data as taught by Adamson in the combination of Torres Dho and Salunke, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As discussed in Adamson, anomaly detection and compliance monitoring/auditing can be performed at the same time, with anomalies in a network being used as part of the compliance monitoring and auditing of network operations. Therefore, it would have been obvious to incorporate the compliance monitoring and auditing of Adamson into the combination of Torres Dho and Salunke.
Regarding claim 3, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 1 above. Torres Dho further teaches:
wherein said one or more machine learning classifications is configured to detect anomalies in one or more of a user, an application in use, a location, a job role, and/or a demographic (see [0024] "Example metric readings may include...number of active user sessions" and [0023] "Example computing resources may include...application instances, and virtual machine instances" for metrics being analyzed for anomalies including users and applications in use)
Regarding claim 5, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 1 above. Torres Dho further teaches:
wherein each of said respective confidence scores and anomaly scores is a value between 0 and 1 (see [0046] "Probabilistic models may assign values between 0-1 based on the probability that the value is an outlier with 1 indicating a 100% probability, 0 indicating a 0% percent probability, and values in between represent varying levels of probability increasing the closer the value is to 1" for anomaly scores between 0-1. See [0048] for the weights of the models being the confidence scores. The weights being values within 0-1 would have been obvious as part of routine optimization of Torres Dho (see MPEP 2144.05 II.). As all the models are weighted to contribute to an overall value, the absolute value of each weight does not matter to the functioning of the invention as much as the value of the weight relative to the sum of all weight values. In other words, a model with a weight of 30 out of an aggregate value of 100 has the same effect on the final result as a model with a weight of 0.3 out of an aggregate value of 1. Accordingly, the exact weight values, or the scale (out of 1) at which the weight values are issued, would be arrived at via routine optimization of the teachings of Torres Dho)
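The routine-optimization rationale above turns on the fact that a weighted average is unchanged when all weights are uniformly rescaled; a minimal demonstration with made-up scores and weights:

```python
# Demonstrates the equivalence argued above: a model weighted 30 out
# of an aggregate 100 contributes exactly as much as one weighted 0.3
# out of an aggregate 1. Scores and weights are made-up examples.

def weighted_score(scores, weights):
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

scores = [0.9, 0.4, 0.1]
raw_weights = [30, 20, 50]                                   # arbitrary scale
unit_weights = [w / sum(raw_weights) for w in raw_weights]   # 0-1 scale

a = weighted_score(scores, raw_weights)
b = weighted_score(scores, unit_weights)
assert abs(a - b) < 1e-12  # identical result on either scale
print(round(a, 2))  # → 0.4
```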
Regarding claim 6, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 1 above. Torres Dho further teaches:
wherein each of said plurality of machine learning classification engines executes in parallel and independently from other machine learning classification engines of said plurality of machine learning classification engines (see Fig. 3 element 306 and [0054] "For each data frame, all of ML algorithms 306 are run to estimate a classification or probabilistic value. With ABOD, for instance, the process may identify outliers based on the variances of the angles and distances between data points within a data frame...With clustering, outliers may be detected based on the distance between the host (represented by the host's point-in-time values) at a point in time and the nearest cluster centroid. With KNN, outliers may be detected based on the distance between the host and the k nearest neighbors. With PCA, outliers may be detected based on a decomposition (e.g., an eigendecomposition) of the values into principal components and variance between the host's principal components from the principal components of other hosts. With SVM, outliers may be detected based on the position of a host relative to a hyperplane or boundary" for the different machine learning models running in parallel at step 306 and independently identifying anomalies using their own techniques)
Regarding claim 8, Torres Dho teaches:
A system for detecting anomalous behaviour in a network, the system comprising: one or more processors (see Fig. 5 and [0101] “FIG. 5 illustrates a computer system upon which some embodiments may be implemented. Computer system 500 includes a bus 502 or other communication mechanism for communicating information, and a hardware processor 504 coupled with bus 502 for processing information. Hardware processor 504 may be, for example, a general-purpose microprocessor”)
a non-transitory computer-readable storage medium having stored thereon processor-executable instructions that, when executed by said one or more processors, cause the one or more processors to perform a method comprising (see [0102] “Computer system 500 also includes a main memory 506, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus 502 for storing information and instructions to be executed by processor 504. Main memory 506 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504. Such instructions, when stored in non-transitory storage media accessible to processor 504, render computer system 500 into a special-purpose machine that is customized to perform the operations specified in the instructions” and [0106])
Regarding the remaining limitations of claim 8, see the rejection of claim 1 above.
Regarding claim 10, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 8 above. Regarding the limitations introduced in claim 10, see the rejection of claim 3 above.
Regarding claim 12, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 8 above. Regarding the limitations introduced in claim 12, see the rejection of claim 5 above.
Regarding claim 13, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 8 above. Regarding the limitations introduced in claim 13, see the rejection of claim 6 above.
Regarding claim 15, Torres Dho teaches:
A non-transitory computer-readable storage medium having stored thereon processor-executable instructions that, when executed by one or more processors, cause the one or more processors to perform a method comprising (see [0115] “a non-transitory computer readable storage medium comprises instructions which, when executed by one or more hardware processors, causes performance of any of the operations described herein and/or recited in any of the claims” and [0106])
Regarding the remaining limitations of claim 15, see the rejection of claim 1 above.
Regarding claim 17, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 15 above. Regarding the limitations introduced in claim 17, see the rejection of claim 3 above.
Regarding claim 19, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 15 above. Regarding the limitations introduced in claim 19, see the rejection of claim 6 above.
Claims 2, 5, 9, 12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Torres Dho in view of Salunke, Adamson, and Muddu (U.S. Patent No. 9,516,053; hereafter known as Muddu).
Regarding claim 2, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 1 above. Torres Dho further teaches:
and determine whether said current event is anomalous based on the output of said one or more ensemble models (see steps 214 and 216 and [0047] "At operation 214, the process generates a set of anomaly scores for each host based on the output of the machine learning algorithms", [0048] for the weighting of contributions of the anomaly probabilities of each model based on the reliability of each particular model to obtain the anomaly score for a host, and [0050] "At operation 216, the process determines whether any hosts have an anomaly score satisfying a threshold value" for determining whether a current state is anomalous based on the overall score for the current events surpassing a threshold)
As discussed above and in Torres Dho [0047]-[0049], Torres Dho teaches calculating a weighted average of the individual anomaly scores based on confidence scores from the machine learning models to arrive at an overall anomaly score. The combination of Torres Dho, Salunke, and Adamson thus does not explicitly teach an ensemble machine learning model that obtains an aggregate score based on the anomaly and confidence scores of the individual machine learning classification models. Muddu teaches:
wherein said plurality of machine learning classification engines comprises one or more ensemble models, each of said one or more ensemble models configured to obtain an aggregate score based on respective anomaly and confidence scores from a subset of the plurality of classification engines (see Col. 105 line 66 thru Col. 106 line 14 "In some embodiments ensemble learning techniques can be applied to process the plurality of feature scores according to a plurality of models (including machine-learning models) to achieve better predictive performance in the anomaly scoring and reduce false positives. An example model suitable for ensemble learning is Random Forest. In such an embodiment, the process may involve, processing an entity profile according to a plurality of machine-learning models, assigning a plurality of intermediate anomaly scores, each of the plurality of intermediate anomaly scores based on processing of the entity profile according to one of the plurality of machine-learning models, processing the plurality of intermediate anomaly scores according to an ensemble-learning model, and assigning the anomaly score based on processing the plurality of intermediate anomaly scores")
Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself. That is, the difference lies in the substitution of the ensemble machine learning model of Muddu, which combines intermediate anomaly scores to obtain a final anomaly score, for the weighted averaging of the intermediate anomaly scores of the combination of Torres Dho, Salunke, and Adamson.
Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Furthermore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the ensemble model of Muddu into the combination of Torres Dho, Salunke, and Adamson because, as Muddu states in Col. 105 line 66 thru Col. 106 line 14 above, “ensemble learning techniques can be applied to process the plurality of feature scores according to a plurality of models (including machine-learning models) to achieve better predictive performance in the anomaly scoring and reduce false positives”. Therefore, the incorporation of the ensemble model would improve the combination by reducing the chance for false positives.
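The substitution argued above can be illustrated by swapping the aggregation step while leaving the individual classifiers untouched; the "ensemble" below is a deliberately tiny stand-in (a threshold vote), not Muddu's Random Forest, and all scores are hypothetical:

```python
# Sketch of the substitution discussed above: the per-model anomaly
# scores stay the same; only the aggregation step changes from a
# weighted average (Torres Dho) to a second-stage ensemble model that
# consumes the intermediate scores (Muddu, which contemplates e.g.
# Random Forest). The "ensemble" here is a toy majority vote, used
# only to show where the substitution occurs.

def weighted_average(scores, weights):
    """Torres Dho-style aggregation: weighted average of scores."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def toy_ensemble(scores, cutoff=0.5):
    """Stand-in second-stage model: majority vote over binarized
    per-model scores. A real system would train a model (e.g. a
    Random Forest) on the intermediate scores instead."""
    votes = sum(1 for s in scores if s >= cutoff)
    return votes / len(scores)

intermediate = [0.9, 0.8, 0.3]   # hypothetical per-model scores
weights = [1, 1, 1]

print(round(weighted_average(intermediate, weights), 2),
      round(toy_ensemble(intermediate), 2))
```

Either aggregator maps the same intermediate scores to a single final anomaly score, which is the sense in which the substitution produces a predictable result.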
Regarding claim 5, an alternative rejection to the rejection of claim 5 provided above is presented here. This alternate rejection is presented as if the routine optimization analysis above were not appropriate, which the Examiner does not concede. Regardless, this alternate rejection is provided in the interest of compact prosecution. The combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 1 above. Torres Dho further teaches:
wherein each of said respective (see [0046] "Probabilistic models may assign values between 0-1 based on the probability that the value is an outlier with 1 indicating a 100% probability, 0 indicating a 0% percent probability, and values in between represent varying levels of probability increasing the closer the value is to 1")
While the combination of Torres Dho, Salunke, and Adamson does not explicitly teach the weights/confidence scores being values from 0 to 1, Muddu teaches the weights ranging from 0 to 1 (see Col. 97 line 64 thru Col. 98 line 3 "the machine learning model 6300 keeps a weight value of 0.15 (=1*15%) at the device node D4. The machine learning model 6300 equally distributes a remainder of the initial weight value (0.85=1*0.85%) to user nodes U2, U3 and U6. Each node of user nodes U2, U3 and U6 receives a weight value of 0.283 (=0.85/3)" for the weights (confidence values in Torres Dho) being values between 0 and 1)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include weight values ranging from 0 to 1 as taught by Muddu in the combination of Torres Dho, Salunke, and Adamson, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Specifically, one of ordinary skill in the art would have recognized that, as long as the weights of the machine learning models maintained their ratio to one another, the values could be scaled down to values between 0 and 1 without changing the final outputs of the combined system.
Regarding claim 9, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 8 above. Regarding the limitations introduced in claim 9, see the rejection of claim 2 above.
Regarding claim 12, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 8 above. Regarding the limitations introduced in claim 12, see the alternate rejection of claim 5 above.
Regarding claim 16, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 15 above. Regarding the limitations introduced in claim 16, see the rejection of claim 2 above.
Claims 4, 7, 11, 14, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Torres Dho in view of Salunke, Adamson, and Kumar et al. (U.S. Pre-Grant Publication No. 2024/0356945, hereafter known as Kumar).
Regarding claim 4, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 1 above. Torres Dho further teaches feedback loops to refine downstream models that take in anomaly scores as inputs in [0060], but the combination of Torres Dho, Salunke, and Adamson does not explicitly teach a feedback loop for assessing the accuracy of the anomaly determination and modifying the machine learning models in response to that assessment. Kumar teaches:
further comprising a feedback loop for assessing accuracy of said determination and modifying one or more of said machine learning classification engines in response to said assessment accuracy (see Fig. 5 and [0088] "Once received by the SOC, the security alert may be investigated by, for example, security analysts and/or other resources associated with the SOC. Based on this review, the security analysts and/or other resources associated with the SOC (generally, the SOC) may create feedback related to the detected anomaly and other detected anomalies", [0089] "in operation 504, one or more servers may receive the created feedback from the SOC. For example, with reference to FIG. 1, the data transform server 108, the ML server 116, the rule server 118, and/or the alert generation server 120 may receive such feedback. Then, in response to the feedback, upstream processes may be tweaked and improved for better model predictions in a recurrent manner", and [0090] "in operation 506, the models 110, 112, 114 of FIG. 1 may be optionally tuned, trained, etc. based on the received feedback. For instance, hyperparameters of the models 110, 112, 114, model families (e.g., in an ensemble), individual model weights in the ensemble, etc. may be modified based on the feedback to improve performance metrics")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feedback loops for the machine learning models of Kumar into the combination of Torres Dho, Salunke, and Adamson. Kumar states in [0089] that “in response to the feedback, upstream processes may be tweaked and improved for better model predictions in a recurrent manner” and in [0090] that “hyperparameters of the models 110, 112, 114, model families (e.g., in an ensemble), individual model weights in the ensemble, etc. may be modified based on the feedback to improve performance metrics”. Therefore, one of ordinary skill in the art would have recognized that by incorporating the feedback loops of Kumar, the combined system would be able to improve the machine learning models to more accurately identify anomalies in a network.
Regarding claim 7, the combination of Torres Dho, Salunke, Adamson, and Kumar teaches all of the limitations of claim 4 above. Torres Dho further teaches a plurality of the machine learning models operating independently and in parallel as discussed regarding claim 6 above. As discussed above regarding claim 4, the combination of Torres Dho, Salunke, and Adamson does not explicitly teach a feedback loop for assessing the accuracy of the determination and modifying the machine learning models in response to that assessment. Accordingly, the combination of Torres Dho, Salunke, and Adamson also does not explicitly teach a plurality of such feedback loops, one for each machine learning model, operating in parallel and separately from the other feedback loops. Kumar further teaches:
wherein said feedback loop comprises a plurality of feedback loops for each respective machine learning classification engine of said plurality of machine learning classification engines, and wherein each of said plurality of feedback loops is executed in parallel and separately from others of said plurality of feedback loops (see [0071] "the models 110, 112, 114 (and/or other models herein) may be retrained. For example, the ML server 116 and/or the SOC 122 may detect whether performance of the ML models 110, 112, 114 falls below a defined threshold. Then, in response to the performance falling below the defined threshold, the models 110, 112, 114 may be retrained based on, for example, new normal patterns relating to remote and/or physical access" for a plurality of feedback loops for tuning the plurality of models running in parallel. See [0058] "any one of the unsupervised models 110, 112, 114 may be sufficiently trained when one or more of its performance metrics (e.g., accuracy, precision, recall, f1-score, etc.) on test data is about 10% or less, 5% or less, etc." for each of the models being retrained satisfactorily based on their own independent performance thresholds)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the parallel and separately operating feedback loops for the machine learning models of Kumar into the combination of Torres Dho, Salunke, and Adamson. Kumar states in [0089] that “in response to the feedback, upstream processes may be tweaked and improved for better model predictions in a recurrent manner” and in [0090] that “hyperparameters of the models 110, 112, 114, model families (e.g., in an ensemble), individual model weights in the ensemble, etc. may be modified based on the feedback to improve performance metrics”. Therefore, one of ordinary skill in the art would have recognized that by incorporating the feedback loops of Kumar, the combined system would be able to improve the machine learning models to more accurately identify anomalies in a network. Furthermore, by refining a model when it falls below a particular threshold, and continuing until that individual model again meets its performance threshold, the combined system maintains performance without wasting time and effort refining models that already meet their thresholds.
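The per-model feedback loops described in Kumar [0071] and [0090] can be sketched as independent retraining checks, one per model, each judged against its own threshold; the model names, accuracies, and thresholds below are hypothetical, and the loops are run sequentially here only for clarity (being independent, they could equally run in parallel):

```python
# Sketch of Kumar's per-model feedback loops ([0071], [0090]): each
# model is evaluated against its own performance threshold and only
# under-performing models are retrained. All names and numbers are
# hypothetical; independence of the loops is what permits parallel
# execution (e.g. one worker per model).

def feedback_loop(name, accuracy, threshold):
    """One independent feedback loop: flag the model for retraining
    only if its accuracy falls below its own threshold."""
    if accuracy < threshold:
        return f"{name}: retrained"
    return f"{name}: ok"

# (model, measured accuracy, that model's own threshold)
models = [("abod", 0.72, 0.80), ("knn", 0.91, 0.80), ("pca", 0.85, 0.90)]

results = [feedback_loop(n, acc, thr) for n, acc, thr in models]
print(results)  # → ['abod: retrained', 'knn: ok', 'pca: retrained']
```

Because each loop reads and updates only its own model, no loop depends on the outcome of any other, matching the "parallel and separately" execution recited in the claim.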
Regarding claim 11, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 8 above. Regarding the limitations introduced in claim 11, see the rejection of claim 4 above.
Regarding claim 14, the combination of Torres Dho, Salunke, Adamson, and Kumar teaches all of the limitations of claim 11 above. Regarding the limitations introduced in claim 14, see the rejection of claim 7 above.
Regarding claim 18, the combination of Torres Dho, Salunke, and Adamson teaches all of the limitations of claim 15 above. Regarding the limitations introduced in claim 18, see the rejection of claim 4 above.
Regarding claim 20, the combination of Torres Dho, Salunke, Adamson, and Kumar teaches all of the limitations of claim 18 above. Regarding the limitations introduced in claim 20, see the alternate rejection of claim 7 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Tholar et al. (U.S. Pre-Grant Publication No. 2025/0106231) teaches the detection of anomalous data using machine learning.
Roy et al. (U.S. Pre-Grant Publication No. 2024/0394587) teaches creating an outlier detection model based on clustered feature data.
Palani et al. (U.S. Pre-Grant Publication No. 2021/0344695) teaches detecting anomalies in aggregated time series data using an ensemble of deep learning models.
Andrabi et al. (U.S. Pre-Grant Publication No. 2023/0007023) teaches performing a remedial action to neutralize anomalous actions that have been detected using an anomalous action-detecting model.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C MORONEY whose telephone number is (571)272-4403. The examiner can normally be reached Mon-Fri 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux can be reached at (571) 270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.C.M./Examiner, Art Unit 3628
/EMMETT K. WALSH/Primary Examiner, Art Unit 3628