DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/20/2026 has been entered.
Applicant has amended claims 1, 2, 4, 6, 9-12 and 17-19. Claims 1-20 have been examined on the merits.
Response to Amendment/Arguments
Claims 1-10 and 19-20 are no longer interpreted under 35 U.S.C. 112(f).
Rejections under 35 USC § 101 for claims 1 and 11 are maintained. Examiner recognizes applicant’s arguments regarding the express recitation of network traffic received from a client and destined for a server. However, the courts have recognized receiving or transmitting data over a network as a well-understood, routine, and conventional function when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (MPEP 2106.05(d)(II)). Merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection (MPEP 2106.07(b)). Examiner suggests incorporating limitations (e.g., from dependent claims not rejected under 35 USC § 101) to show that the generic computer components perform functions that are not generic computer functions and therefore integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Applicant’s arguments, see pages 8-9, filed 01/20/2026, with respect to the rejection of claims 1, 10, 11, and 17 under 35 USC § 102, and of claims 2-9, 12-16 and 18-20 under 35 USC § 103, specifically:
Applicants respectfully submit that while Doron '607 discloses plotting attributes, the reference does not plot percentages of network traffic features. Moreover, the reference does not disclose determining whether network traffic is anomalous based on a difference between the two percentages (percentage of current feature compared to percentage of baseline feature), as presently claimed.
have been fully considered and are persuasive. Doron’607 fails to disclose plotting percentages of the baseline and current feature and determining a potential anomaly based on the percentages. Instead, the plot/histogram in Doron’607 is a statistical distribution using a distribution density function for a query-arguments type of attribute (feature). It does not plot a percentage of a baseline or current feature, but rather a probability measurement of the baseline and current feature. However, upon further consideration, a new ground(s) of rejection is made in view of Doron’607 and Chao et al. (US 20070280114 A1) as detailed below under “Claim Rejections - 35 USC § 103”.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. As drafted, the claims recite limitations that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of various “devices” (e.g., detection device) and generic computer components, such as “processor” and “memory”. That is, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, “receiving network traffic”, “monitoring at least one histogram”, and “determining if a feature exceeds a predetermined threshold” in this context encompass a user manually collecting the data, observing a graphical representation of historic network traffic data, and comparing the data to predetermined acceptable thresholds.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application. The remaining claim limitations amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Mere instructions to collect data and perform nondescript processing of the data cannot provide an inventive concept. Therefore, claims 1 and 11 are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8-11 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Doron et al. (US 20240171607 A1), hereinafter Doron’607, in view of Chao et al. (US 20070280114 A1), hereinafter Chao.
Regarding claim 1, Doron’607 discloses a system for detecting anomalous network traffic (a method for the detection of HTTP flood DDoS attacks; distinguishes between malicious traffic and legitimate (flash crowd) traffic to allow efficient detection of HTTP Flood attacks - see [0025], system is illustrated in FIG. 1), the system comprising:
a detection device (“detection system 110”) comprising at least one processor and memory that:
receives network traffic from a client destined for a server (process application-layer transactions received during a current time window to detect a rate-based anomaly in a traffic directed to a protected entity – see [0012]; a detection system 110 is deployed between client device 120, attack tool 125, and victim server 130; the deployment can be an always-on deployment, or any other deployments that enable the detection system 110 to observe incoming HTTP requests and their corresponding responses during peace time and during active attacks time; the detection system 110 is configured to detect changes, or fast increasements, in the transmission rate, or requests per second (RPS), of traffic directed to the server 130 – see [0040-44]; detection system 110 is configured to analyze the received transactions and determine if rate-invariant parameters in the transactions demonstrate normal or abnormal behavior of the application – see [0047]);
monitors at least one histogram for the network traffic (detection is based on comparing a baseline distribution of application attributes (AppAttributes) [[i.e., feature]] learned during peacetime to distributions of AppAttributes measured during an attack time – see [0025]; rate-invariant parameters are AppAttributes that have their baselines developed during peacetime and are monitored during a potential attack time – see [0048]; the detection system 110 is configured to alert of a potential HTTP flood DDoS attack based in part on a comparison between AppAttributes buffers (hereinafter “window buffers”) generated for a current time window, and baseline AppAttributes buffers (hereinafter “baseline buffers”) calculated over past time window – see [0049]), the at least one histogram plotting a (specifically, the determination of AppAttributes anomalies is based on an attack proximity indicating how statistically close the WinAppAttBuf[n+1] window buffers are to the BLAttBuf[n] baseline buffers; each AppAttributes buffer can be presented as a single bar, and the entire buffers from a specific type can be presented as a histogram where the AppAttribute key value is the X axis of the histogram and the occurrences, or the weight represents the Y axis of the histogram; from these histograms, form both window and baseline buffers, the AppAttributes probability density function, or the distribution, can be computed to represent the probability of the appearance of each AppAttribute – see [0076] and FIG. 4 where bars 410 are baseline buffers (i.e., baseline feature) and bars 420 are window buffers (i.e., current feature)); and
determines whether the network traffic is potentially anomalous if a difference between the (the attack proximity represents the statistical distance between the AppAttributes window distribution, and the AppAttributes baseline distribution, for each AppAttributes type – see [0078]; attack proximity is calculated based, at least in part, on the metric distance DAppAttri - see [0079-0083]; the computed attack proximity is compared to a proximity threshold; when the attack proximity exceeds proximity threshold, an AppAttributes anomaly, or rate invariant anomaly, for the current window (n+1) is set; a HTTP Flood DDoS attack is declared when both an AppAttributes anomaly and RPS anomaly are set – see [0087]; examiner’s note: determining attack proximity (i.e, anomaly) is calculated based, at least in part, on the statistical distance (i.e., metric distance DAppAtt#i ) between the AppAttributes window distribution, and the AppAttributes baseline distribution; this metric distance DAppAtt#i is computed as the difference between baseline PAppAtt#i and window PAppAtt#i which are the baseline and window probabilities of AppAttributes i, respectively, as shown in FIG. 4 and [0080-82]).
Doron’607 fails to disclose plotting percentages of the baseline and current feature and determining a potential anomaly based on the percentages. Instead, the plot/histogram in Doron’607 is a statistical distribution using a distribution density function for a query-arguments type of attribute (feature). It does not plot a percentage of a baseline or current feature, but rather a probability measurement of the baseline and current feature.
However, Chao discloses a system and method for providing a high-speed defense against DDoS attacks using a packet scoring scheme on a network for providing control of communications traffic (see abstract, [0022]) including measuring attributes (e.g., IP protocol-type values, packet sizes, Time-to-Live (TTL) values, Server port number, 16-bit source/destination IP address prefixes (as an approximation to the IP subnet calculation), TCP/IP header length, and TCP flag patterns) (see [0013]) to generate histograms plotting a percentage of a nominal (baseline) traffic attribute (feature) and a percentage of a current measured traffic attribute (FIG. 8 illustrates nominal profile measurements in comparison with current measured traffic – see [0078-80]).
Thus, Doron’607 and Chao each disclose plotting a baseline or nominal attribute (feature) and a current-measured attribute (feature) of network traffic. A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the percentage of the nominal and current attributes of Chao could have been substituted for the probability of the nominal and current attributes of Doron’607 because both the percentage and the probability serve the purpose of providing a value of the attribute in terms of appearance (see Doron’607, [0076]; Chao, [0078-80]) in order to determine if the packets in the network traffic are anomalous (i.e., part of an attack). Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the percentage of the nominal and current attributes of Chao for the probability of the nominal and current attributes of Doron’607 according to known methods to yield the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
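For illustration only (not part of the claim mapping or the record), the percentage-for-probability substitution reasoning above can be sketched as follows, with per-bin percentages (Chao-style) standing in for Doron’607’s probabilities; all feature values, names, and the threshold here are hypothetical:

```python
from collections import Counter

def percentage_histogram(samples):
    """Per-key percentage of occurrences (Chao-style); values sum to 100."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

def metric_distance(baseline, window):
    """Sum of per-key absolute differences between two percentage histograms,
    analogous to the statistical distance between the baseline and window
    distributions described for Doron'607."""
    keys = set(baseline) | set(window)
    return sum(abs(baseline.get(k, 0.0) - window.get(k, 0.0)) for k in keys)

# Hypothetical feature observations (e.g., TCP flag patterns per packet).
peacetime = ["SYN"] * 10 + ["ACK"] * 80 + ["FIN"] * 10   # baseline window
current   = ["SYN"] * 70 + ["ACK"] * 25 + ["FIN"] * 5    # current window

baseline_hist = percentage_histogram(peacetime)
window_hist = percentage_histogram(current)

PROXIMITY_THRESHOLD = 50.0  # hypothetical tuning value
anomalous = metric_distance(baseline_hist, window_hist) > PROXIMITY_THRESHOLD
print(anomalous)  # → True (distance 120.0 exceeds the threshold)
```

The sketch only shows that a percentage and a probability are interchangeable inputs to the distance comparison; it is not a representation of either reference’s actual implementation.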
Regarding claim 8, Doron’607 and Chao disclose all the claimed subject matter recited in claim 1 above. Furthermore, Doron’607 discloses a detection system 110 and mitigation system 170 deployed in-line of traffic between the client 120, attack tool 125 and victim server 130 (see [0026-28], FIG. 1) for detecting, characterizing and mitigating HTTP flood attacks (directed at a “victim server 130”) (see [0028-30] and [0040]). The underlying communication between client 120/attack tool 125 and victim server 130 over network 140 relies on IP addresses (see [0031], [0064-66]); thus, the victim server’s IP address is implicitly identified.
Doron’607 does not explicitly teach the system, wherein the detection device is further configured to identify a victim IP address of the potentially anomalous network traffic, based at least in part on the at least one histogram.
However, Chao discloses the system, wherein the detection device identifies a victim IP address of the potentially anomalous network traffic, based at least in part on the at least one histogram (a nominal profile is a set of baselines collected during a period in which the protected network was allegedly free of attacks; it characterizes the traffic within a certain period of time by measuring the average throughput in packets or bytes per second (used to rule an acceptable output packet rate), and by creating packet attributes normalized histograms – see [0011]; the following attributes are currently measured on both profiles to generate the histograms: IP protocol-type values, packet sizes, Time-to-Live (TTL) values, Server port number, 16-bit source/destination IP address prefixes (as an approximation to the IP subnet calculation), TCP/IP header length, and TCP flag patterns – see [0013]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to identify a victim IP address of the potentially anomalous network traffic, based at least in part on the at least one histogram, as taught by Chao. One would have been motivated to make such a combination to improve the accuracy of packet discarding and packet scoring schemes for use in determining malicious network activity, as recognized by Chao (see [0052] and [0109]).
Regarding claim 9, Doron’607 and Chao disclose all the claimed subject matter recited in claim 1 above. Furthermore, Doron’607 discloses a detection system 110 and mitigation system 170 deployed in-line of traffic between the client 120, attack tool 125 and victim server 130 (see [0026-28], FIG. 1) for detecting, characterizing and mitigating HTTP flood attacks (directed at a “victim server 130”) (see [0028-30] and [0040]). The underlying communication between client 120/attack tool 125 and victim server 130 over network 140 relies on IP addresses (see [0031], [0064-66]); thus, the victim server’s IP address is implicitly identified.
Doron’607 does not teach the system, wherein the detection device is further configured to identify a victim IP subnet of the potentially anomalous network traffic, based at least in part on the at least one histogram.
However, Chao discloses the system, wherein the detection device further identifies a victim IP subnet of the potentially anomalous network traffic, based at least in part on the at least one histogram (a nominal profile is a set of baselines collected during a period in which the protected network was allegedly free of attacks; it characterizes the traffic within a certain period of time by measuring the average throughput in packets or bytes per second (used to rule an acceptable output packet rate), and by creating packet attributes normalized histograms – see [0011]; the following attributes are currently measured on both profiles to generate the histograms: IP protocol-type values, packet sizes, Time-to-Live (TTL) values, Server port number, 16-bit source/destination IP address prefixes (as an approximation to the IP subnet calculation), TCP/IP header length, and TCP flag patterns – see [0013]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to identify a victim IP subnet of the potentially anomalous network traffic, based at least in part on the at least one histogram, as taught by Chao. One would have been motivated to make such a combination to improve the accuracy of packet discarding and packet scoring schemes for use in determining malicious network activity, as recognized by Chao (see [0052] and [0109]).
Regarding claim 10, Doron’607 and Chao disclose all the claimed subject matter recited in claim 1 above. Furthermore, Doron’607 discloses the system, wherein the detection device further generates the (the rate-invariant parameters are AppAttributes that have their baselines developed during peacetime and are monitored during a potential attack time – see [0048]; the detection system 110 is configured to alert of a potential HTTP flood DDoS attack based in part on a comparison between AppAttributes buffers (hereinafter “window buffers”) generated for a current time window, and baseline AppAttributes buffers (hereinafter “baseline buffers”) calculated over past time window – see [0049]; the detection of HTTP flood attacks is performed during predefined time windows, where an indication can be provided at every window; the baseline is developed over time based on transactions received during peacetime; there are two paths of detection: rate-based (labeled 210) anomalies detection and rate-invariant (labeled 220) anomalies detection – see [0054], FIG. 2; the baseline represents the behavior of AppAttributes as it appeared at the protected application during peace time; the peace time baseline is compared to the AppAttributes appearance behavior during anomaly window, and by that enables the detection of rate invariant applicative anomalous behavior – see [0067-68]).
Doron’607 fails to disclose plotting percentages of the baseline and current feature and determining a potential anomaly based on the percentages. Instead, the plot/histogram in Doron’607 is a statistical distribution using a distribution density function for a query-arguments type of attribute (feature). It does not plot a percentage of a baseline or current feature, but rather a probability measurement of the baseline and current feature.
However, Chao discloses generating histograms plotting a percentage of a nominal (baseline) traffic attribute (feature) and a percentage of a current measured traffic attribute (FIG. 8 illustrates nominal profile measurements in comparison with current measured traffic – see [0078-80]).
Thus, Doron’607 and Chao each disclose plotting a baseline or nominal attribute (feature) and a current-measured attribute (feature) of network traffic. A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the percentage of the nominal and current attributes of Chao could have been substituted for the probability of the nominal and current attributes of Doron’607 because both the percentage and the probability serve the purpose of providing a value of the attribute in terms of appearance (see Doron’607, [0076]; Chao, [0078-80]) in order to determine if the packets in the network traffic are anomalous (i.e., part of an attack). Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the percentage of the nominal and current attributes of Chao for the probability of the nominal and current attributes of Doron’607 according to known methods to yield the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Regarding claim 11, Doron’607 discloses a method for detecting anomalous network traffic (see abstract, FIGS. 1 and 2) by a detection device (“detection system 110”), the method comprising:
receiving network traffic on a hardware device from a client destined for a server (process application-layer transactions received during a current time window to detect a rate-based anomaly in a traffic directed to a protected entity – see [0012]; a detection system 110 is deployed between client device 120, attack tool 125, and victim server 130; the deployment can be an always-on deployment, or any other deployments that enable the detection system 110 to observe incoming HTTP requests and their corresponding responses during peace time and during active attacks time; the detection system 110 is configured to detect changes, or fast increasements, in the transmission rate, or requests per second (RPS), of traffic directed to the server 130 – see [0040-44]; detection system 110 is configured to analyze the received transactions and determine if rate-invariant parameters in the transactions demonstrate normal or abnormal behavior of the application – see [0047]); and
monitoring at least one histogram for the network traffic, the at least one histogram plotting a (detection is based on comparing a baseline distribution of application attributes (AppAttributes) [[i.e., feature]] learned during peacetime to distributions of AppAttributes measured during an attack time – see [0025]; rate-invariant parameters are AppAttributes that have their baselines developed during peacetime and are monitored during a potential attack time – see [0048]; the detection system 110 is configured to alert of a potential HTTP flood DDoS attack based in part on a comparison between AppAttributes buffers (hereinafter “window buffers”) generated for a current time window, and baseline AppAttributes buffers (hereinafter “baseline buffers”) calculated over past time window – see [0049]; the detection of HTTP flood attacks is performed during predefined time windows, where an indication can be provided at every window; the baseline is developed over time based on transactions received during peacetime; there are two paths of detection: rate-based (labeled 210) anomalies detection and rate-invariant (labeled 220) anomalies detection – see [0054], FIG. 2).
Doron’607 fails to disclose plotting percentages of the baseline and current feature and determining a potential anomaly based on the percentages. Instead, the plot/histogram in Doron’607 is a statistical distribution using a distribution density function for a query-arguments type of attribute (feature). It does not plot a percentage of a baseline or current feature, but rather a probability measurement of the baseline and current feature.
However, Chao discloses a system and method for providing a high-speed defense against DDoS attacks using a packet scoring scheme on a network for providing control of communications traffic (see abstract, [0022]) including measuring attributes (e.g., IP protocol-type values, packet sizes, Time-to-Live (TTL) values, Server port number, 16-bit source/destination IP address prefixes (as an approximation to the IP subnet calculation), TCP/IP header length, and TCP flag patterns) (see [0013]) to generate histograms plotting a percentage of a nominal (baseline) traffic attribute (feature) and a percentage of a current measured traffic attribute (FIG. 8 illustrates nominal profile measurements in comparison with current measured traffic – see [0078-80]).
Thus, Doron’607 and Chao each disclose plotting a baseline or nominal attribute (feature) and a current-measured attribute (feature) of network traffic. A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the percentage of the nominal and current attributes of Chao could have been substituted for the probability of the nominal and current attributes of Doron’607 because both the percentage and the probability serve the purpose of providing a value of the attribute in terms of appearance (see Doron’607, [0076]; Chao, [0078-80]) in order to determine if the packets in the network traffic are anomalous (i.e., part of an attack). Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the percentage of the nominal and current attributes of Chao for the probability of the nominal and current attributes of Doron’607 according to known methods to yield the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Regarding claim 15, all limitations correspond to the method performed by the system of claims 1 and 8 above. Therefore, claim 15 is being rejected on the same basis as claim 8.
Regarding claim 16, all limitations correspond to the method performed by the system of claims 1 and 9 above. Therefore, claim 16 is being rejected on the same basis as claim 9.
Regarding claim 17, all limitations correspond to the method performed by the system of claims 1 and 10 above. Therefore, claim 17 is being rejected on the same basis as claim 10.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Doron et al. (US 20240171607 A1), and Chao et al. (US 20070280114 A1), as applied to claim 1 above, and further in view of Doron et al. (US 20200412750 A1), hereinafter Doron’750.
Regarding claim 2, Doron’607 and Chao disclose all the claimed subject matter recited in claim 1 above. Furthermore, Doron’607 discloses the system, wherein the system further includes a mitigation device (“mitigation system 170”) comprising at least one processor and memory, that is configured to characterize and mitigate attack traffic – see [0040] – by performing a variety of mitigation actions including blocking requests, responding with a blocking page response, reporting and passing the request to the protected entity or blocking an attack tool at the source – see [0114-0115].
Doron’607 does not explicitly disclose the mitigation device that filters the potentially anomalous network traffic; and transmits clean network traffic to the server, wherein the clean network traffic comprises network traffic that is not potentially anomalous; and when an anomalous packet is detected, the mitigation device takes actions to mitigate an attack indicated by the anomalous packet, to launch a counter attack, to publish the identity of the originator of the anomalous packet, or to take no action.
However, in the same field of endeavor, Doron’750 discloses a method for detecting hypertext transfer protocol secure (HTTPS) flood distributed denial-of-service (DDoS) attacks by evaluating features with respect to baselines to determine whether the behavior of at least the HTTPS traffic indicates a potential HTTPS flood DDoS attack (see [0011-13]), wherein the mitigation resource 112 in defense system 110:
filters the potentially anomalous network traffic (mitigation resource 112 is configured to perform one or more mitigation actions, triggered by the detector 111, in order to mitigate a detected attack – see [0025]; the mitigation action may include limiting the traffic or blocking the traffic completely – see [0040]); and
transmits clean network traffic to the server, wherein the clean network traffic comprises network traffic that is not potentially anomalous (any detected attack is mitigated within the cloud defense platform 201; thus, only clean traffic is sent to the server 220 – see [0063-67]); and
when an anomalous packet is detected, the mitigation device takes actions to mitigate an attack indicated by the anomalous packet, to launch a counter attack, to publish the identity of the originator of the anomalous packet, or to take no action (upon detection of an HTTPS flood attack, one or more mitigation actions may be performed; the mitigation action may be executed by the mitigation resource 112 in the defense system 110; the mitigation action may be, for example, blocking, or rate-limiting, of traffic from the client 120 to the server, challenging the client causing any traffic anomaly (e.g., CAPTCHA), redirecting the traffic to a scrubbing center for cleaning malicious traffic, and so on – see [0055]; a suspect list is generated based on the HTTPS request size distribution and HTTPS response distribution; client IP sources whose HTTPS requests, or responses, are part of an anomalous bin in the histogram are therefore considered candidates for the “suspect list” – see [0101-0102]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to filter the potentially anomalous network traffic; transmit clean network traffic to the server, wherein the clean network traffic comprises network traffic that is not potentially anomalous; and, when an anomalous packet is detected, take actions to mitigate an attack indicated by the anomalous packet, launch a counter attack, publish the identity of the originator of the anomalous packet, or take no action, as taught by Doron’750. One would have been motivated to make such a combination to provide an efficient security solution for detecting and mitigating attacks with a variety of mitigation approaches, as recognized by Doron’750 (see [0010] and [0109]).
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Doron et al. (US 20240171607 A1), and Chao et al. (US 20070280114 A1), as applied to claim 1 above, and further in view of Aviv et al. (US 20240163309 A1), hereinafter Aviv.
Regarding claim 3, Doron’607 and Chao disclose all the claimed subject matter recited in claim 1 above.
Furthermore, Doron’607 discloses histograms representing occurrences of AppAttributes key values (e.g., rate-invariant parameters/indications [features]) that model the applicative behavior of a protected application or server – see [0037], [0048] and [0067] – used to identify rate-invariant anomalies. Doron’607 also discloses rate-based parameters such as requests per second, packets per second, TCP SYNs per second, and number of TCP connections per second – see [0045]. The rate-based anomaly threshold is also calculated for each time window and protected entity – see [0046] – as an HTTP Flood DDoS attack is declared when both an AppAttributes anomaly and an RPS anomaly are set – see [0087]. Although both rate-invariant and rate-based parameters (features) are monitored and calculated, and a histogram plotting both types of parameters is a possibility, Doron’607 explicitly discloses histograms only in terms of the rate-invariant (AppAttributes) parameters.
Doron’607 does not explicitly disclose the system, wherein the at least one feature of the network traffic is one or more of the following: an average packet size per sample, a fragment packet size, a fragment/non-fragment packet type, an IP protocol proportion, a flow duration, and a TCP flag type.
Chao discloses measuring attributes on both profiles [nominal and measured (current)] to generate the histograms including IP protocol-type values, packet sizes, Time-to-Live (TTL) values, Server port number, 16-bit source/destination IP address prefixes (as an approximation to the IP subnet calculation), TCP/IP header length, and TCP flag patterns (see [0013]).
However, Aviv discloses a system and method for detecting HTTPS flood cyber-attacks (see abstract) wherein traffic is analyzed to determine abnormal activity based on one or more traffic features of the inspected traffic and the traffic features may include rate-based traffic features and the rate-invariant traffic features that demonstrate behavior of HTTPS traffic directed to the victim server and using the traffic features to determine histograms (see [0032-35]) including the system, wherein the at least one feature of the network traffic is one or more of the following: an average packet size per sample, a fragment packet size, a fragment/non-fragment packet type, an IP protocol proportion, a flow duration, and a TCP flag type (the defense system 110 is configured to capture and/or receive traffic data including, without limitation, 5-tuple, packet size, arrival time, TCP flags, and the like, to determine various traffic features; the ingress traffic data includes, for example, but is not limited to, HTTPS requests, ACK packets, SYN packets, SYN/ACK packets, and more – see [0034]; traffic features include determining histograms reflecting distribution of packet arrival times and the interarrival times of consecutive data packets from traffic data [i.e., rate-based features]; each histogram is determined based on a predetermined number of packets; the packets are distributed into several bins according to their times (e.g., arrival time, interarrival time) so that each bin represents a normalized probability of requests for the respective time bins; each histogram is determined based on a plurality of packets that are received within a predetermined time period (e.g., 5 seconds) – see [0035]; see also claims 2, 3 and 16; a baseline histogram of average (e.g., exponential average) and normalized bin values is generated for each traffic feature – see [0078]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to include the system, wherein the at least one feature of the network traffic is one or more of the following: an average packet size per sample, a fragment packet size, a fragment/non-fragment packet type, an IP protocol proportion, a flow duration, and a TCP flag type, as taught by Aviv. One would have been motivated to make such a combination to be able to handle HTTPS floods on the ingress side (client to server), the egress side (server to client), or both as the rate-based features would allow for detecting abnormal (e.g., large, steady, concentrated) number of HTTPS requests (of data packets) and requests to URLs with large responses or even with relatively small responses and many other attack approaches, as recognized by Aviv (see [0050]).
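For illustration only, the rate-based traffic-feature histograms described in Aviv ([0035]) — distributing a set of packets into time bins so that each bin represents a normalized probability — can be sketched as below. The function name, bin edges, and sample arrival times are hypothetical choices for the sketch, not taken from the reference.

```python
def interarrival_histogram(arrival_times, bin_edges):
    """Distribute consecutive-packet interarrival times into bins and
    normalize so each bin reflects the fraction (probability) of packets
    falling in that time bin, per the scheme described in Aviv [0035]."""
    # Interarrival time = gap between consecutive packet arrivals.
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    counts = [0] * (len(bin_edges) - 1)
    for g in gaps:
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= g < bin_edges[i + 1]:
                counts[i] += 1
                break
    total = sum(counts) or 1  # avoid division by zero on empty input
    return [c / total for c in counts]

# Hypothetical packets captured over a ~5-second window.
times = [0.0, 0.1, 0.2, 0.9, 1.0, 3.0, 4.9]
hist = interarrival_histogram(times, [0.0, 0.5, 1.0, 2.0, 5.0])
```

Each returned bin value is a normalized probability, so a baseline histogram and a current-window histogram built this way are directly comparable bin by bin.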
Regarding claim 13, all limitations correspond to the method performed by the system of claims 1 and 3 above. Therefore, claim 13 is being rejected on the same basis as claim 3.
Claims 4, 6, 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Doron et al. (US 20240171607 A1), and Chao et al. (US 20070280114 A1), as applied to claim 1 above, and further in view of Doron et al. (US 20240297899 A1), hereinafter Doron’899.
Regarding claim 4, Doron’607 and Chao disclose all the claimed subject matter recited in claim 1 above. Furthermore, Doron’607 discloses the system, further comprising an orchestrator device [[mitigation system 170]] comprising at least one processor and memory, that receives a notification from the detection device that the current feature of the network traffic exceeds the predetermined threshold for the at least one histogram (the detection system 110 is configured to alert of a potential HTTP flood DDoS attack based in part on a comparison between AppAttributes buffers (hereinafter “window buffers”) generated for a current time window, and baseline AppAttributes buffers (hereinafter “baseline buffers”) calculated over past time window – see [0049]; the computed attack proximity is compared to a proximity threshold; when the attack proximity exceeds proximity threshold, an AppAttributes anomaly, or rate invariant anomaly, for the current window (n+1) is set; a HTTP Flood DDoS attack is declared when both an AppAttributes anomaly and RPS anomaly are set – see [0087]; when an attack alert is generated, a mitigation action can be taken – see [0114]), and instruct the mitigation device to filter the potentially anomalous network traffic [[perform mitigation actions]] (the detection system 110 may be connected to a mitigation system 170 configured to characterize and mitigate attack traffic – see [0040]; when an attack alert is generated, a mitigation action can be taken – see [0114-0115]).
Doron’607 does not disclose an orchestrator device receiving a notification from the detection device and instructing the mitigation device to filter the potentially anomalous network traffic.
However, in the same field of endeavor, Doron’899 discloses a system and method for learning attack-safe baselines for characterizing advanced application-layer flood attack tools (see abstract) wherein an orchestrator device (“characterization device 170”) receives a notification from the detection device (“detector 111”) that the current feature of the network traffic exceeds the predetermined threshold for the at least one histogram, and instruct the mitigation device (“mitigation resource 112”) to filter the potentially anomalous network traffic (defense system 110, including detector 111 and mitigation resource 112, is connected to a characterization device 170; the device 170 is configured to analyze requests received from the system 110 and learn the legitimate traffic applicative baselines; during an attack the device 170 uses the calculated applicative baselines to build a dynamic applicative signature, or signatures, characterizing the attack tool 125 (or the attacker) HTTP requests; the signature generated by device 170 may allow a mitigation action or policy selection; the mitigation action may be carried out by system 110 – see [0049]; an indication of an on-going attack is provided to the device 170 by the system 110 [[i.e., detector 111]] – see [0050] and [0053]; mitigation resource 112 is configured to perform one or more mitigation actions triggered by the detector 111, to mitigate a detected attack – see [0053]; the device 170 reports its decision on each of the received requests to the system 110; the decision can be to mitigate the request or to safely pass the requests to the victim server 130 – see [0057]; a mitigation action may be performed, by the mitigation resource 112, selectively on the attacker traffic only; mitigation action can be a simple blocking of the request, a response on behalf of the server 130 with a dedicated blocking page, or similar; may include limiting the rate of attacker traffic or merely reporting and logging the 
mitigation results without any actual blocking of the incoming request; mitigation action can issue various types of challenges, e.g., captcha, to better identify the client as coming from legitimate user or attack tool operated as a bot – see [0062]; examiner’s note: notice that characterization device 170 receives an indication of an attack from detector 111 and reports a decision regarding mitigation action to system 110, that is, to mitigation resource 112 which is the element configured to perform mitigation actions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to include an orchestrator device receiving a notification from the detection device and instructing the mitigation device to filter the potentially anomalous network traffic, as taught by Doron’899. One would have been motivated to make such a combination to provide an efficient security solution for mitigating attacks with a variety of mitigation approaches and to allow active attack mitigation to begin once all baselines accurately represent the legitimate normal application behavior, as recognized by Doron’899 (see [0013-14]).
Regarding claim 6, Doron’607 and Chao disclose all the claimed subject matter recited in claim 1 above. Furthermore, Doron’607 discloses the system, further comprising (when the rate-based anomaly indication is set and rate-invariant normal indication is output, an alert of flash crowd traffic is output; in all other combinations, no other alerts are set – see [0091]; during normal indication, the system allows for learning a baseline instead – see [0110]; when an attack is detected, the method may also determine an end-of-attack condition; such a condition is detected when a preconfigured number of consecutive time windows rate-invariant or rate-based normal indications are output – [0117]; examiner’s note: during normal indication, when thresholds are not exceeded and traffic is considered normal, the system does not perform any mitigation actions but enters a learning baseline phase instead).
Doron’607 does not disclose an orchestrator device receiving a notification from the detection device and instructing the mitigation device to cease filtering the potentially anomalous network traffic.
However, in the same field of endeavor, Doron’899 discloses a system and method for learning attack-safe baselines for characterizing advanced application-layer flood attack tools (see abstract) including an orchestrator device (“characterization device 170”) receiving a notification from the detection device and instructing the mitigation device to cease filtering the potentially anomalous network traffic (an indication of an end-of-attack may be received from the detector; such an indication would halt the generation of new signatures and any mitigation actions; after the end of the attack, a detection action is indicated, and an attack mitigation grace period may be initiated – see [0090]; during this time (i.e., peacetime), the characterization device 170 is configured to analyze requests received from the system 110 and learn the legitimate traffic applicative baselines – see [0049]; the decision characterization device 170 reports can be either to mitigate a request, when determined it is an attack, or to safely pass it to the victim server, when it is determined to be legitimate – see [0057-59]; a mitigation action may be performed, by the mitigation resource 112, selectively on the attacker traffic only; mitigation action can be a simple blocking of the request, a response on behalf of the server 130 with a dedicated blocking page, or similar; may include limiting the rate of attacker traffic or merely reporting and logging the mitigation results without any actual blocking of the incoming request; mitigation action can issue various types of challenges, e.g., captcha, to better identify the client as coming from legitimate user or attack tool operated as a bot – see [0062]; examiner’s note: during end-of-attack period (peacetime), the system enters a learning phase to learn legitimate traffic baselines and thus it safely passes traffic to the “victim server” as it is deemed legitimate (i.e., no filtering or any other mitigation action)).
Doron’607 fails to disclose plotting percentages of the baseline and current feature and determining potential anomaly based on the percentages. Instead, the plot/histogram in Doron’607 is a statistical distribution using a distribution density function for a query-arguments type of attributes (features). It does not plot a percentage of a baseline or current feature, but a probability measurement of the baseline and current feature.
However, Chao discloses generating histograms plotting a percentage of a nominal (baseline) traffic attribute (feature) and a percentage of a current measured traffic attribute (FIG. 8 illustrates nominal profile measurements in comparison with current measured traffic – see [0078-80]).
Thus, Doron’607 and Chao each disclose plotting a baseline or nominal attribute (feature) and a current-measured attribute (feature) of network traffic. A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the percentage of the nominal and current attributes of Chao could have been substituted for the probability of the nominal and current attributes of Doron’607 because both the percentage and the probability serve the purpose of providing a value of the attribute in terms of appearance (see Doron’607, [0076]; Chao, [0078-80]) in order to determine if the packets in the network traffic are anomalous (i.e., part of an attack). Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the percentage of the nominal and current attributes of Chao for the probability of the nominal and current attributes of Doron’607 according to known methods to yield the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
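For illustration only, the substitution rationale above — that a percentage-based histogram (Chao) and a probability-based histogram (Doron’607) yield the same anomaly determination, since a percentage is simply the probability scaled by 100 — can be sketched as below. The distributions, threshold values, and function name are hypothetical.

```python
def is_anomalous(baseline, current, threshold):
    """Flag traffic as potentially anomalous when the summed per-bin
    difference between the current and baseline feature distributions
    exceeds a threshold (a simplified distance for illustration)."""
    distance = sum(abs(c - b) for b, c in zip(baseline, current))
    return distance > threshold

baseline_prob = [0.70, 0.20, 0.10]                 # probability form (Doron'607)
current_prob = [0.30, 0.30, 0.40]
baseline_pct = [p * 100 for p in baseline_prob]    # percentage form (Chao)
current_pct = [p * 100 for p in current_prob]

# Same decision either way, with the threshold scaled by the same factor of 100.
same_decision = (is_anomalous(baseline_prob, current_prob, 0.5)
                 == is_anomalous(baseline_pct, current_pct, 50.0))
```

The scaling changes only the units of the attribute value, not which traffic windows exceed the threshold — the predictable result relied on in the rationale.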
Moreover, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to include an orchestrator device receiving a notification from the detection device and instructing the mitigation device to cease filtering the potentially anomalous network traffic, as taught by Doron’899. One would have been motivated to make such a combination to provide an efficient security solution for detecting and mitigating attacks with a variety of mitigation approaches and safeguarding the learning period needed for the purpose of accurate characterization of attacks, as recognized by Doron’899 (see [0014]).
Regarding claim 7, Doron’607 and Chao disclose all the claimed subject matter recited in claim 1 above. Furthermore, Doron’607 discloses the system, further comprising (when the rate-based anomaly indication is set and rate-invariant normal indication is output, an alert of flash crowd traffic is output; in all other combinations, no other alerts are set – see [0091]; during normal indication, the system allows for learning a baseline instead – see [0110]; when an attack is detected, the method may also determine an end-of-attack condition; such a condition is detected when a preconfigured number of consecutive time windows rate-invariant or rate-based normal indications are output – [0117]; examiner’s note: during normal indication, no anomaly is detected (meets anomaly clear threshold) and traffic is considered normal, the system does not perform any mitigation actions but enters a learning baseline phase instead).
Doron’607 does not disclose an orchestrator device receiving a notification from the detection device and instructing the mitigation device to cease filtering the potentially anomalous network traffic.
However, in the same field of endeavor, Doron’899 discloses a system and method for learning attack-safe baselines for characterizing advanced application-layer flood attack tools (see abstract) including an orchestrator device (“characterization device 170”) receiving a notification from the detection device and instructing the mitigation device to cease filtering the potentially anomalous network traffic (an indication of an end-of-attack may be received from the detector; such an indication would halt the generation of new signatures and any mitigation actions; after the end of the attack, a detection action is indicated, and an attack mitigation grace period may be initiated – see [0090]; during this time (i.e., peacetime), the characterization device 170 is configured to analyze requests received from the system 110 and learn the legitimate traffic applicative baselines – see [0049]; the decision characterization device 170 reports can be either to mitigate a request, when determined it is an attack, or to safely pass it to the victim server, when it is determined to be legitimate – see [0057-59]; a mitigation action may be performed, by the mitigation resource 112, selectively on the attacker traffic only; mitigation action can be a simple blocking of the request, a response on behalf of the server 130 with a dedicated blocking page, or similar; may include limiting the rate of attacker traffic or merely reporting and logging the mitigation results without any actual blocking of the incoming request; mitigation action can issue various types of challenges, e.g., captcha, to better identify the client as coming from legitimate user or attack tool operated as a bot – see [0062]; examiner’s note: during end-of-attack period (peacetime), the system enters a learning phase to learn legitimate traffic baselines and thus it safely passes traffic to the “victim server” as it is deemed legitimate (i.e., no filtering or any other mitigation action)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to include an orchestrator device receiving a notification from the detection device and instructing the mitigation device to cease filtering the potentially anomalous network traffic, as taught by Doron’899. One would have been motivated to make such a combination to provide an efficient security solution for detecting and mitigating attacks with a variety of mitigation approaches and safeguarding the learning period needed for the purpose of accurate characterization of attacks, as recognized by Doron’899 (see [0014]).
Regarding claim 14, Doron’607 and Chao disclose all the claimed subject matter recited in claim 11 above. Furthermore, Doron’607 discloses the method, further comprising notifying (when the rate-based anomaly indication is set and rate-invariant normal indication is output, an alert of flash crowd traffic is output; in all other combinations, no other alerts are set – see [0091]; during normal indication, the system allows for learning a baseline instead – see [0110]; when an attack is detected, the method may also determine an end-of-attack condition; such a condition is detected when a preconfigured number of consecutive time windows rate-invariant or rate-based normal indications are output – [0117]).
Doron’607 does not disclose the method notifying an orchestrator device (that the potentially anomalous network traffic has cleared).
However, in the same field of endeavor, Doron’899 discloses a system and method for learning attack-safe baselines for characterizing advanced application-layer flood attack tools (see abstract) including the method notifying an orchestrator device (“characterization device 170”) that the potentially anomalous network traffic has cleared (an indication of an end-of-attack may be received from the detector; such an indication would halt the generation of new signatures and any mitigation actions [at the characterization device 170]; after the end of the attack, a detection action is indicated, and an attack mitigation grace period may be initiated – see [0090]; during this time (i.e., peacetime), the characterization device 170 is configured to analyze requests received from the system 110 and learn the legitimate traffic applicative baselines – see [0049]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to include the method notifying an orchestrator device (that the potentially anomalous network traffic has cleared), as taught by Doron’899. One would have been motivated to make such a combination to provide an efficient security solution for detecting and mitigating attacks with a variety of mitigation approaches and safeguarding the learning period needed for the purpose of accurate characterization of attacks, as recognized by Doron’899 (see [0014]).
Claims 5, 12, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Doron et al. (US 20240171607 A1), Chao et al. (US 20070280114 A1), and Doron et al. (US 20240297899 A1), as applied to claim 4 above, and further in view of Doron et al. (US 20200412750 A1), hereinafter Doron’750.
Regarding claim 5, Doron’607, Chao and Doron’899 disclose all the claimed subject matter recited in claim 4 above.
Doron’607, Chao and Doron’899 do not disclose the system, wherein the instructing the mitigation device to filter the potentially anomalous network traffic further comprises instructing the mitigation device to update a routing for the network traffic away from the destined server.
However, in the same field of endeavor, Doron’750 discloses a method for detecting hypertext transfer protocol secure (HTTPS) flood distributed denial-of-service (DDoS) attacks by evaluating features with respect to baselines to determine whether the behavior of the at least HTTPS traffic indicates a potential HTTPS flood DDoS attack (see [0011-13]), including the system, wherein the instructing the mitigation device to filter the potentially anomalous network traffic further comprises instructing the mitigation device to update a routing for the network traffic away from the destined server (the mitigation action may be, for example, blocking, or rate-limiting, of traffic from the client 120 to the server, challenge the client causing any traffic anomaly (e.g., CAPTCHA), redirecting the traffic to a scrubbing center for cleaning malicious traffic, and so on – emphasis added, see [0055]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to include the system, wherein the instructing the mitigation device to filter the potentially anomalous network traffic further comprises instructing the mitigation device to update a routing for the network traffic away from the destined server, as taught by Doron’750. One would have been motivated to make such a combination to provide an efficient security solution for detecting and mitigating attacks with a variety of mitigation approaches, as recognized by Doron’750 (see [0010] and [0109]).
Regarding claim 12, Doron’607 and Chao disclose all the claimed subject matter recited in claim 11 above. Furthermore, Doron’607 discloses the method, wherein the method further includes:
[[perform mitigation actions]] (the detection system 110 may be connected to a mitigation system 170 configured to characterize and mitigate attack traffic – see [0040]; when an attack alert is generated, a mitigation action can be taken – see [0114-0115]); and
if the method determines the network traffic is potentially anomalous if a difference between the (the attack proximity represents the statistical distance between the AppAttributes window distribution, and the AppAttributes baseline distribution, for each AppAttributes type – see [0078]; attack proximity is calculated based, at least in part, on the metric distance DAppAtt#i – see [0079-0083]; the computed attack proximity is compared to a proximity threshold; when the attack proximity exceeds the proximity threshold, an AppAttributes anomaly, or rate invariant anomaly, for the current window (n+1) is set; a HTTP Flood DDoS attack is declared when both an AppAttributes anomaly and RPS anomaly are set – see [0087]; examiner’s note: the attack proximity (i.e., anomaly) is calculated based, at least in part, on the statistical distance (i.e., metric distance DAppAtt#i) between the AppAttributes window distribution and the AppAttributes baseline distribution; this metric distance DAppAtt#i is computed as the difference between baseline PAppAtt#i and window PAppAtt#i, which are the baseline and window probabilities of AppAttributes i, respectively, as shown in FIG.
4 and [0080-82]), the method notifies (the detection system 110 is configured to alert of a potential HTTP flood DDoS attack based in part on a comparison between AppAttributes buffers (hereinafter “window buffers”) generated for a current time window, and baseline AppAttributes buffers (hereinafter “baseline buffers”) calculated over past time window – see [0049]; the computed attack proximity is compared to a proximity threshold; when the attack proximity exceeds proximity threshold, an AppAttributes anomaly, or rate invariant anomaly, for the current window (n+1) is set; a HTTP Flood DDoS attack is declared when both an AppAttributes anomaly and RPS anomaly are set – see [0087]; when an attack alert is generated, a mitigation action can be taken – see [0114]), the method takes actions to mitigate the potentially anomalous network traffic, to publish the identity of the originator of the anomalous network traffic, (the detection system 110 may be connected to a mitigation system 170 configured to characterize and mitigate attack traffic – see [0040]; when an attack alert is generated, a mitigation action can be taken; a mitigation action may include blocking requests, responding with a blocking page response, reporting and passing the request to the protected entity, and so on; a mitigation resource may be provided with the characteristics of the attacker as represented by the dynamic applicative signature; that is, the general structure of HTTP requests generated by the attacker is provided to the mitigation resource; this would allow for defining and enforcing new mitigation policies and actions against the attacker – see [0114-0115]).
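For illustration only, the two-part detection logic cited above from Doron’607 — a rate-invariant AppAttributes anomaly set when the statistical distance between the window and baseline distributions exceeds a proximity threshold, with an attack declared only when the rate-based (RPS) anomaly is also set — can be sketched as below. The simplified L1 distance, threshold values, and RPS check are assumptions for the sketch, not the reference’s exact formulas.

```python
def attack_proximity(baseline_dist, window_dist):
    """Statistical distance between the baseline and current-window
    probability distributions (simplified L1 metric for illustration)."""
    return sum(abs(w - b) for b, w in zip(baseline_dist, window_dist))

def http_flood_declared(baseline_dist, window_dist, rps, rps_baseline,
                        proximity_threshold=0.4, rps_factor=3.0):
    # Rate-invariant (AppAttributes) anomaly: proximity exceeds threshold.
    app_anomaly = attack_proximity(baseline_dist, window_dist) > proximity_threshold
    # Rate-based (RPS) anomaly: request rate far above baseline (assumed check).
    rps_anomaly = rps > rps_factor * rps_baseline
    # Attack declared only when both anomalies are set.
    return app_anomaly and rps_anomaly

# Hypothetical window: skewed attribute distribution plus an RPS surge.
declared = http_flood_declared([0.6, 0.3, 0.1], [0.1, 0.2, 0.7],
                               rps=5000, rps_baseline=1000)
```

When either anomaly is absent (e.g., a flash crowd raising RPS with a normal attribute distribution), the conjunction returns False, mirroring the reference’s separate flash-crowd handling.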
Doron’607 does not explicitly disclose transmitting clean network traffic to the server, wherein the clean network traffic comprises network traffic that is not potentially anomalous; and does not disclose the method notifying an orchestrator device (of the potentially anomalous network traffic).
However, in the same field of endeavor, Doron’750 discloses a method for detecting hypertext transfer protocol secure (HTTPS) flood distributed denial-of-service (DDoS) attacks by evaluating features with respect to baselines to determine whether the behavior of the at least HTTPS traffic indicates a potential HTTPS flood DDoS attack, wherein the method includes:
filtering the potentially anomalous network traffic and transmitting clean network traffic to the server, wherein the clean network traffic comprises network traffic that is not potentially anomalous (mitigation resource 112 is configured to perform one or more mitigation actions, triggered by the detector 111, in order to mitigate a detected attack – see [0025]; the mitigation action may include limiting the traffic or blocking the traffic completely – see [0040]; any detected attack is mitigated within the cloud defense platform 201; thus, only clean traffic is sent to the server 220 – see [0063-67]; the mitigation action may be, for example, blocking, or rate-limiting, of traffic from the client 120 to the server, challenge the client causing any traffic anomaly (e.g., CAPTCHA), redirecting the traffic to a scrubbing center for cleaning malicious traffic, and so on – see [0055]; a suspect list is generated based on HTTPS requests size distribution and HTTPS response distribution; client IP sources that their HTTPS requests, or responses, are part of anomalous bin in the histogram and therefore considered as candidate to the “suspect list” – see [0101-0102]; examiner’s note: the method essentially filters anomalous traffic and performs mitigation actions when needed in order to forward clean/legitimate traffic only).
Furthermore, in the same field of endeavor, Doron’899 discloses a system and method for learning attack-safe baselines for characterizing advanced application-layer flood attack tools (see abstract) wherein the method includes filtering the potentially anomalous network traffic and wherein the method notifies an orchestrator device (“characterization device 170”) of the potentially anomalous network traffic (defense system 110, including detector 111 and mitigation resource 112, is connected to a characterization device 170; the device 170 is configured to analyze requests received from the system 110 and learn the legitimate traffic applicative baselines; during an attack the device 170 uses the calculated applicative baselines to build a dynamic applicative signature, or signatures, characterizing the attack tool 125 (or the attacker) HTTP requests; the signature generated by device 170 may allow a mitigation action or policy selection; the mitigation action may be carried out by system 110 – see [0049]; an indication of an on-going attack is provided to the device 170 by the system 110 by detector 111 – see [0050] and [0053]; mitigation resource 112 is configured to perform one or more mitigation actions triggered by the detector 111, to mitigate a detected attack – see [0053]; the device 170 reports its decision on each of the received requests to the system 110; the decision can be to mitigate the request or to safely pass the requests to the victim server 130 – see [0057]; a mitigation action may be performed, by the mitigation resource 112, selectively on the attacker traffic only; mitigation action can be a simple blocking of the request, a response on behalf of the server 130 with a dedicated blocking page, or similar; may include limiting the rate of attacker traffic or merely reporting and logging the mitigation results without any actual blocking of the incoming request; mitigation action can issue various types of challenges, e.g., captcha, to better 
identify the client as coming from legitimate user or attack tool operated as a bot – see [0062]; examiner’s note: notice that characterization device 170 receives an indication of an attack from detector 111 and reports a decision regarding mitigation action to system 110, that is, to mitigation resource 112 which is the element configured to perform mitigation actions).
Doron’607 fails to disclose plotting percentages of the baseline and current feature and determining potential anomaly based on the percentages. Instead, the plot/histogram in Doron’607 is a statistical distribution using a distribution density function for a query-arguments type of attributes (features). It does not plot a percentage of a baseline or current feature, but a probability measurement of the baseline and current feature.
However, Chao discloses generating histograms plotting a percentage of a nominal (baseline) traffic attribute (feature) and a percentage of a current measured traffic attribute (FIG. 8 illustrates nominal profile measurements in comparison with current measured traffic – see [0078-80]).
Thus, Doron’607 and Chao each disclose plotting a baseline or nominal attribute (feature) and a current-measured attribute (feature) of network traffic. A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the percentage of the nominal and current attributes of Chao could have been substituted for the probability of the nominal and current attributes of Doron’607 because both the percentage and the probability serve the purpose of providing a value of the attribute in terms of appearance (see Doron’607, [0076]; Chao, [0078-80]) in order to determine if the packets in the network traffic are anomalous (i.e., part of an attack). Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the percentage of the nominal and current attributes of Chao for the probability of the nominal and current attributes of Doron’607 according to known methods to yield the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Moreover, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to filter the potentially anomalous network traffic and transmit clean network traffic to the server, wherein the clean network traffic comprises network traffic that is not potentially anomalous, as taught by Doron’750; and to filter the potentially anomalous network traffic and notify an orchestrator device (of the potentially anomalous network traffic), as taught by Doron’899. One would have been motivated to make such a combination to provide an efficient security solution for detecting and mitigating attacks with a variety of mitigation approaches, as recognized by Doron’750 (see [0010] and [0109]); and to allow active attack mitigation to begin once all baselines accurately represent the legitimate normal application behavior, as recognized by Doron’899 (see [0013]).
Regarding claim 19, Doron’607 discloses a system for detecting anomalous network traffic (a method for the detection of HTTP flood DDoS attacks; distinguishes between malicious traffic and legitimate (flash crowd) traffic to allow efficient detection of HTTP Flood attacks - see [0025], system is illustrated in FIG. 1), the system comprising:
a hardware detection device (“detection system 110”, detection system 110 may be realized in software, hardware, or any combination thereof - see [0053]) comprising at least one processor and memory, that:
receives network traffic from a client destined for a server (process application-layer transactions received during a current time window to detect a rate-based anomaly in a traffic directed to a protected entity – see [0012]; a detection system 110 is deployed between client device 120, attack tool 125, and victim server 130; the deployment can be an always-on deployment, or any other deployments that enable the detection system 110 to observe incoming HTTP requests and their corresponding responses during peace time and during active attacks time; the detection system 110 is configured to detect changes, or fast increasements, in the transmission rate, or requests per second (RPS), of traffic directed to the server 130 – see [0040-44]; detection system 110 is configured to analyze the received transactions and determine if rate-invariant parameters in the transactions demonstrate normal or abnormal behavior of the application – see [0047]);
monitors at least one histogram for the network traffic (detection is based on comparing a baseline distribution of application attributes (AppAttributes) [[i.e., feature]] learned during peacetime to distributions of AppAttributes measured during an attack time – see [0025]; rate-invariant parameters are AppAttributes that have their baselines developed during peacetime and are monitored during a potential attack time – see [0048]; the detection system 110 is configured to alert of a potential HTTP flood DDoS attack based in part on a comparison between AppAttributes buffers (hereinafter “window buffers”) generated for a current time window, and baseline AppAttributes buffers (hereinafter “baseline buffers”) calculated over a past time window – see [0049]), the at least one histogram plotting a (specifically, the determination of AppAttributes anomalies is based on an attack proximity indicating how statistically close the WinAppAttBuf[n+1] window buffers are to the BLAttBuf[n] baseline buffers; each AppAttributes buffer can be presented as a single bar, and the entire buffers from a specific type can be presented as a histogram where the AppAttribute key value is the X axis of the histogram and the occurrences, or the weight, represent the Y axis of the histogram; from these histograms, from both window and baseline buffers, the AppAttributes probability density function, or the distribution, can be computed to represent the probability of the appearance of each AppAttribute – see [0076] and FIG. 4, where bars 410 are baseline buffers (i.e., baseline feature) and bars 420 are window buffers (i.e., current feature));
determines the network traffic is potentially anomalous if a difference between the (the attack proximity represents the statistical distance between the AppAttributes window distribution and the AppAttributes baseline distribution, for each AppAttributes type – see [0078]; attack proximity is calculated based, at least in part, on the metric distance DAppAtt#i – see [0079-0083]; the computed attack proximity is compared to a proximity threshold; when the attack proximity exceeds the proximity threshold, an AppAttributes anomaly, or rate-invariant anomaly, for the current window (n+1) is set; an HTTP Flood DDoS attack is declared when both an AppAttributes anomaly and an RPS anomaly are set – see [0087]; examiner’s note: the attack proximity (i.e., anomaly) is calculated based, at least in part, on the statistical distance (i.e., metric distance DAppAtt#i) between the AppAttributes window distribution and the AppAttributes baseline distribution; this metric distance DAppAtt#i is computed as the difference between baseline PAppAtt#i and window PAppAtt#i, which are the baseline and window probabilities of AppAttributes i, respectively, as shown in FIG. 4 and [0080-82]).
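For illustration only, and not as a characterization of any claim or cited reference, the general comparison described above (plotting a per-attribute value for a baseline window and a current window, computing a distance between the two plots, and flagging an anomaly when the distance exceeds a threshold) can be sketched as follows; the metric, names, values, and threshold here are all hypothetical and do not reproduce the specific computation of Doron’607:

```python
# Illustrative sketch of a generic histogram-distance anomaly check.
# All names, the L1-style metric, and the threshold are hypothetical.

def to_percentages(counts):
    """Convert raw occurrence counts per attribute key into percentages."""
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

def attack_proximity(baseline_counts, window_counts):
    """Sum of per-key absolute differences between the two percentage plots."""
    base = to_percentages(baseline_counts)
    win = to_percentages(window_counts)
    keys = set(base) | set(win)
    return sum(abs(base.get(k, 0.0) - win.get(k, 0.0)) for k in keys)

def is_potentially_anomalous(baseline_counts, window_counts, threshold):
    """Flag the current window when the proximity exceeds the threshold."""
    return attack_proximity(baseline_counts, window_counts) > threshold

# Example: a baseline window versus a skewed current window of occurrences.
baseline = {"GET /": 80, "GET /login": 15, "POST /login": 5}
window = {"GET /": 20, "GET /login": 10, "POST /login": 70}
print(is_potentially_anomalous(baseline, window, threshold=50.0))  # prints True
```

The same shape of computation works whether the per-key value is a percentage or a probability, which is the interchangeability the substitution rationale above relies on.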
Doron’607 fails to disclose plotting percentages of the baseline and current features and determining a potential anomaly based on those percentages. Instead, the plot/histogram in Doron'607 is a statistical distribution using a distribution density function for a query-arguments type of attribute (feature). It does not plot a percentage of a baseline or current feature, but rather a probability measurement of the baseline and current features.
However, Chao discloses a system and method for providing a high-speed defense against DDoS attacks using a packet scoring scheme on a network for providing control of communications traffic (see abstract, [0022]) including measuring attributes (e.g., IP protocol-type values, packet sizes, Time-to-Live (TTL) values, Server port number, 16-bit source/destination IP address prefixes (as an approximation to the IP subnet calculation), TCP/IP header length, and TCP flag patterns) (see [0013]) to generate histograms plotting a percentage of a nominal (baseline) traffic attribute (feature) and a percentage of a current measured traffic attribute (FIG. 8 illustrates nominal profile measurements in comparison with current measured traffic – see [0078-80]).
Thus, Doron’607 and Chao each disclose plotting a baseline or nominal attribute (feature) and a current-measured attribute (feature) of network traffic. A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the percentage of the nominal and current attributes of Chao could have been substituted for the probability of the nominal and current attributes of Doron’607 because both the percentage and the probability serve the purpose of providing a value of the attribute in terms of appearance (see Doron’607, [0076]; Chao, [0078-80]) in order to determine if the packets in the network traffic are anomalous (i.e., part of an attack). Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Doron’607 does not disclose the detection device notifying an orchestrator device (of the potentially anomalous network traffic); and the mitigation device receiving an instruction from the orchestrator device to redirect the potentially anomalous network traffic away from the destined server and updating a routing of the potentially anomalous network traffic away from the destined server.
However, in the same field of endeavor, Doron’899 discloses a system and method for learning attack-safe baselines for characterizing advanced application-layer flood attack tools (see abstract) wherein the detection device (“detector 111”) notifies an orchestrator device (“characterization device 170”) of the potentially anomalous network traffic (defense system 110, including detector 111 and mitigation resource 112, is connected to a characterization device 170; the device 170 is configured to analyze requests received from the system 110 and learn the legitimate traffic applicative baselines; during an attack the device 170 uses the calculated applicative baselines to build a dynamic applicative signature, or signatures, characterizing the attack tool 125 (or the attacker) HTTP requests; the signature generated by device 170 may allow a mitigation action or policy selection; the mitigation action may be carried out by system 110 – see [0049]; an indication of an on-going attack is provided to the device 170 by the system 110 [[i.e., detector 111]] – see [0050] and [0053]); and
a mitigation device that: receives an instruction from the orchestrator device to perform mitigation actions (mitigation resource 112 is configured to perform one or more mitigation actions triggered by the detector 111, to mitigate a detected attack – see [0053]; the characterization device 170 reports its decision on each of the received requests to the system 110; the decision can be to mitigate the request or to safely pass the requests to the victim server 130 – see [0057]; a mitigation action may be performed, by the mitigation resource 112, selectively on the attacker traffic only; a mitigation action can be a simple blocking of the request, a response on behalf of the server 130 with a dedicated blocking page, or similar; it may include limiting the rate of attacker traffic or merely reporting and logging the mitigation results without any actual blocking of the incoming request; a mitigation action can issue various types of challenges, e.g., captcha, to better identify the client as coming from a legitimate user or an attack tool operated as a bot – see [0062]; examiner’s note: notice that characterization device 170 receives an indication of an attack from detector 111 and reports a decision regarding a mitigation action to system 110, that is, to mitigation resource 112, which is the element configured to perform mitigation actions).
Furthermore, in the same field of endeavor, Doron’750 discloses a method for detecting hypertext transfer protocol secure (HTTPS) flood distributed denial-of-service (DDoS) attacks by evaluating features with respect to baselines to determine whether the behavior of the HTTPS traffic indicates a potential HTTPS flood DDoS attack (see [0011-13]), including redirecting the potentially anomalous network traffic away from the destined server; and updating a routing of the potentially anomalous network traffic away from the destined server (the mitigation action may be, for example, blocking, or rate-limiting, of traffic from the client 120 to the server, challenging the client causing any traffic anomaly (e.g., CAPTCHA), redirecting the traffic to a scrubbing center for cleaning malicious traffic, and so on – emphasis added, see [0055]; examiner’s note: when redirecting the traffic to a scrubbing center, for instance, the system is effectively updating the routing of traffic away from the server for mitigation).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the percentage of the nominal and current attributes of Chao for the probability of the nominal and current attributes of Doron’607 according to known methods to yield the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Moreover, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to include the detection device notifying an orchestrator device (of the potentially anomalous network traffic), as taught by Doron’899; and, to include the mitigation device receiving an instruction from the orchestrator device to redirect the potentially anomalous network traffic away from the destined server and updating a routing of the potentially anomalous network traffic away from the destined server, as taught by Doron’750. One would have been motivated to make such a combination to provide an efficient security solution for mitigating attacks with a variety of mitigation approaches and to allow active attack mitigation to begin once all baselines accurately represent the legitimate normal application behavior, as recognized by Doron’899 (see [0013-14]); and, to provide an efficient security solution for detecting and mitigating attacks with a variety of mitigation approaches, as recognized by Doron’750 (see [0010] and [0109]).
Regarding claim 20, Doron’607, Chao, Doron’899 and Doron’750 disclose all the claimed subject matter recited in claim 19 above.
Doron’607 does not disclose the system, wherein the mitigation device receives an instruction from the orchestrator device to reset the routing of the potentially anomalous network traffic back to the destined server.
However, Doron’750 discloses the system, wherein the mitigation device receives an instruction from the (mitigation resource 112 is configured to perform one or more mitigation actions, triggered by the detector 111, in order to mitigate a detected attack – see [0025]; the mitigation action may be, for example, blocking, or rate-limiting, of traffic from the client 120 to the server, challenge the client causing any traffic anomaly (e.g., CAPTCHA), redirecting the traffic to a scrubbing center for cleaning malicious traffic, and so on – see [0055]; any detected attack is mitigated within the cloud defense platform 201; thus, only clean traffic is sent to the server 220 – see [0063-67]; examiner’s note: for instance, the method may redirect traffic to a scrubbing center for cleaning as a mitigation action, as only clean traffic is sent to the server; thus, there is a reset in the routing of the traffic back to the server as originally intended after being rerouted to the scrubbing center for cleaning/mitigation).
Doron’607 and Doron’750 do not disclose that the mitigation device receives an instruction from the orchestrator device.
However, Doron’899 discloses the mitigation device receives an instruction from the orchestrator device (defense system 110, including detector 111 and mitigation resource 112, is connected to a characterization device 170; the device 170 is configured to analyze requests received from the system 110 and learn the legitimate traffic applicative baselines; an indication of an on-going attack is provided to the device 170 by detector 111 in system 110 – see [0050] and [0053]; mitigation resource 112 is configured to perform one or more mitigation actions triggered by the detector 111, to mitigate a detected attack – see [0053]; the device 170 reports its decision on each of the received requests to the system 110 (e.g., mitigation resource 112); the decision can be to mitigate the request or to safely pass the requests to the victim server 130 – see [0057]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 to include the mitigation device receiving an instruction to reset the routing of the potentially anomalous network traffic back to the destined server, as taught by Doron’750; and to include the mitigation device receiving that instruction from the orchestrator device, as taught by Doron’899. One would have been motivated to make such a combination to provide an accurate and efficient security solution for mitigating attacks and delivering clean, legitimate traffic, as recognized by Doron’750 (see [0010] and [0063-67]) and Doron’899 (see [0013]).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Doron et al. (US 20240171607 A1), and Chao et al. (US 20070280114 A1), as applied to claim 11 above, and further in view of Saeed et al. (US 20230125203 A1), hereinafter Saeed.
Regarding claim 18, Doron’607 and Chao disclose all the claimed subject matter recited in claim 11 above.
Doron’607 does not teach the method, further comprising generating the percentage of the baseline of the network traffic from learned network traffic history for a known network attack.
However, Chao discloses generating histograms plotting a percentage of a nominal (baseline) traffic attribute (feature) and a percentage of a current measured traffic attribute (FIG. 8 illustrates nominal profile measurements in comparison with current measured traffic – see [0078-80]).
Thus, Doron’607 and Chao each disclose plotting a baseline or nominal attribute (feature) and a current-measured attribute (feature) of network traffic. A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the percentage of the nominal and current attributes of Chao could have been substituted for the probability of the nominal and current attributes of Doron’607 because both the percentage and the probability serve the purpose of providing a value of the attribute in terms of appearance (see Doron’607, [0076]; Chao, [0078-80]) in order to determine if the packets in the network traffic are anomalous (i.e., part of an attack). Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Doron’607 and Chao fail to disclose generating [the percentage of the baseline of the network traffic] from learned network traffic history for a known network attack.
However, in the same field of endeavor, Saeed discloses a method for detecting anomalies in a computer network using a network monitoring system to obtain a model representing normal characteristics of network traffic and analyzing the network traffic using the model to identify anomalous network traffic (see abstract); wherein the method further compris[es] generating the (to determine how the traffic data should be aggregated, the system 200 may determine what size of aggregation window to use by determining the distribution of the traffic data (such as by plotting a histogram using the time duration of each flow as an input variable, so that the flows are collected in different bins where each bin maps to a non-overlapping time interval); based on the determined aggregation window size, the network traffic data is converted into time-series format and the corresponding features are aggregated accordingly – see [0039]; at operation 320, the method 300 uses the normal behavior model 230 [i.e., normal/non-anomalous behavior baseline] to identify anomalous network traffic associated with the set of devices – see [0043]; at optional operation 330, the method 300 obtains the known anomaly model 250 [i.e., anomalous behavior baseline] representing the characteristics of network traffic which are associated with known types of anomalies – see [0046]; at optional operation 340, the method 300 filters the anomalous network traffic that is provided by the anomaly detector 220 using the known anomaly model 250 to classify the anomalous network traffic and filters out any network traffic that is classified as being a known type of anomaly – see [0047]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the percentage of the nominal and current attributes of Chao for the probability of the nominal and current attributes of Doron’607 according to known methods to yield the predictable result of providing the attribute value needed to determine the probability that the packets of the network traffic are part of an attack.
Moreover, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method in Doron’607 and Chao to generate the percentage of the baseline of the network traffic from learned network traffic history for a known network attack, as taught by Saeed. One would have been motivated to make such a combination to improve the accuracy of the characterization of the network traffic data and, if a known type of anomaly is detected, to raise an alert indicating the classification of the known type of anomaly or a predetermined action that may be taken to mitigate its effects, as recognized by Saeed (see [0032] and [0047]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Patent Documents
Chao et al. (US 7526807 B2) - DISTRIBUTED ARCHITECTURE FOR STATISTICAL OVERLOAD CONTROL AGAINST DISTRIBUTED DENIAL OF SERVICE ATTACKS
Pappu et al. (US 20120216282 A1) - METHODS AND SYSTEMS FOR DETECTING AND MITIGATING A HIGH-RATE DISTRIBUTED DENIAL OF SERVICE (DDoS) ATTACK
Non-Patent Literature
Ayres, et al. (2006) - ALPI: A DDOS DEFENSE SYSTEM FOR HIGH-SPEED NETWORKS
Kim, et al. (2004, March) - PACKETSCORE: STATISTICS-BASED OVERLOAD CONTROL AGAINST DISTRIBUTED DENIAL-OF-SERVICE ATTACKS
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DORIANNE ALVARADO DAVID whose telephone number is (571)272-4228. The examiner can normally be reached 9:00am-5:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Philip Chea can be reached at (571) 272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DORIANNE ALVARADO DAVID/Examiner, Art Unit 2499 /PHILIP J CHEA/Supervisory Patent Examiner, Art Unit 2499