Prosecution Insights
Last updated: April 19, 2026
Application No. 17/161,229

SYSTEM AND METHOD FOR SECURING NETWORKS BASED ON CATEGORICAL FEATURE DISSIMILARITIES

Final Rejection: §101, §103
Filed: Jan 28, 2021
Examiner: RAZA, MUHAMMAD A
Art Unit: 2449
Tech Center: 2400 (Computer Networks)
Assignee: Armis Security Ltd.
OA Round: 4 (Final)
Grant Probability: 58% (Moderate)
OA Rounds: 5-6
Time to Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% of resolved cases (158 granted / 274 resolved), at TC average
Interview Lift: +70.8% for resolved cases with interview (strong)
Typical Timeline: 3y 6m average prosecution; 32 applications currently pending
Career History: 306 total applications across all art units

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 47.7% (+7.7% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 274 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1, 3-11, and 13-22 are pending in this Office Action.

Response to Arguments

Applicant's arguments filed in the amendment of 03/02/2026 have been fully considered but are moot in view of the new grounds of rejection. The reasons are set forth below.

Drawings

The formal drawings received on 01/28/2021 have been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Independent Claims:

Step 1: Statutory Category. Claims 1, 3-11, and 13-22 are directed to a statutory category of subject matter. The claims fall within at least one of the four categories of patent-eligible subject matter because each is directed to a process, machine, manufacture, or composition of matter.

Step 2A, Prong One: Judicial Exception. Claims 1, 3-11, and 13-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
The claim(s) are directed to abstract idea of determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the 
anomaly is detected, as explained in detail below. The claim(s) do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional computer elements, which are recited at a high level of generality, provide conventional computer functions that do not add meaningful limits to practicing the abstract idea. The independent claim(s) recites, in part, Claims 1, 10, 11. A method for improving network security by detecting anomalies in categorical network behavior comprising: receiving a first set of network activity data from at least one of: a database or a device operating in a subject network; continuously monitoring network activity in the subject network to receive a second set of network activity data configured to detect an anomaly with respect to a categorical variable; determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second 
discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting [[an]] the anomaly with respect to the categorical variable when the scalar value is above the threshold; and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected, wherein the at least one mitigation action comprises at least one of: disconnecting the device from a network, turning off one or more ports, or disconnecting the device from a host. 
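The detection flow recited in the independent claims can be sketched as follows. This is an illustrative reading of the claim language, not the application's actual implementation: the function names, the port values, the 0.5 threshold, and the choice of total variation as the distance function are all assumptions made for the example.

```python
def distribution(observations):
    """Build a discrete probability distribution (value -> probability)
    from raw categorical observations, e.g. ports seen in a time window."""
    counts = {}
    for value in observations:
        counts[value] = counts.get(value, 0) + 1
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

def distance(baseline, observed):
    """Distance function whose output is a scalar value representing the
    difference between the two distributions (total variation distance,
    chosen here only for simplicity)."""
    values = set(baseline) | set(observed)
    return 0.5 * sum(abs(baseline.get(v, 0.0) - observed.get(v, 0.0))
                     for v in values)

def detect_anomaly(baseline_data, observed_data, threshold):
    """Compare the observed distribution to the baseline behavioral model;
    report an anomaly when the scalar distance exceeds the threshold."""
    baseline = distribution(baseline_data)
    observed = distribution(observed_data)
    return distance(baseline, observed) > threshold

# Example: ports contacted by a device in the baseline window vs. now.
baseline_ports = [443] * 90 + [80] * 10
current_ports = [443] * 10 + [6667] * 90  # sudden shift to an unusual port
if detect_anomaly(baseline_ports, current_ports, threshold=0.5):
    print("anomaly detected: apply mitigation (e.g., disconnect device)")
```

When the observed behavior matches the baseline, the scalar stays below the threshold and the behavior is classified as normal, matching the claimed branch structure.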
These steps describe the concept of determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is 
detected, which corresponds to concepts identified as abstract ideas by the courts, such as Performing statistical analysis (SAP America). All of these concepts relate to “Mathematical Relationships/Formulas” in which “Mathematical concepts such as mathematical algorithms, mathematical relationships, mathematical formulas, and calculations.” The concept described in the claim(s) is/are not meaningfully different than “Mathematical Relationships/Formulas” found by the courts to be abstract ideas. As such, the description in the claim(s) of determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the 
first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected is an abstract idea. Enfish, LLC v. Microsoft Corp. 822 F.3d 1327, 1335-36 (Fed. Cir. 2016) (“[T]he first step in the Alice inquiry in this case asks whether the focus of the claims [was] on the specific asserted improvement in computer capabilities … or, instead, on a process that qualifies as an ‘abstract idea’ for which computers are invoked merely as a tool.”) No such evidence exists on this record. Unlike Enfish, where the claims were focused on a specific improvement in how the computer functioned, the claim here merely uses the computer as a tool to perform the abstract concepts, and the claims are not rooted in technology and simply employs conventional techniques used by humans for determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set 
of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected. The claim here is not similar to the claimed patent’s innovative logical model for a computer database (p. 2-3), nor does the claim here have a similar specific asserted improvement in computer capabilities (p. 7) as in the Enfish patent. Rather, the claim here is directed to automating a human behavior or task. (See Enfish Memo and Enfish v. Microsoft, May 2016). In addition, simply limiting the invention to a technological environment does “not make an abstract concept any less abstract under step one.” Intellectual Ventures I, 850 F.3d at 1340. Therefore, based on the similarity of the concept described in this claim to abstract ideas identified by the courts, the claim is directed to an abstract idea. For these reasons, the claims are ineligible. Step 2A, Prong Two: Practical Application.
The judicial exception is not integrated into a practical application because the additional elements amount to no more than: adding the words “apply it” (or an equivalent) with the judicial exception, giving mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)); adding insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)); and generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). Step 2B: Additional Elements Significantly More Than the Judicial Exception. The independent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. The claim recites the additional limitations of a “processing circuitry” and a “memory,” the memory containing instructions that, when executed by the processing circuitry, configure the system to: receiving a first set of network activity data from at least one of: a database or a device operating in a subject network; continuously monitoring network activity in the subject network to receive a second set of network activity data configured to detect an anomaly with respect to a categorical variable; determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first
possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting [[an]] the anomaly with respect to the categorical variable when the scalar value is above the threshold; and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected, wherein the at least one mitigation action comprises at least one of: disconnecting the device from a network, turning off one or more ports, or disconnecting the device from a host. The “processing circuitry” and “memory,” are recited at a high level of generality and are recited as performing generic computer functions routinely used in computer applications. Generic computer components recited as performing generic computer functions that are well-understood, routine and conventional activities amount to no more than implementing the abstract idea with a computerized system. 
Next, “determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected” is stated at a high 
level of generality without tying it to an algorithm that would improve the functionality of the technology and its broadest reasonable interpretation comprises only determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable 
is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected through the use of some unspecified generic computers and interface. The use of generic computer components for determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the 
threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected through an unspecified interface does not impose any meaningful limit on the computer implementation of the abstract idea. These independent claims include insignificant pre-solution limitation(s) and post-solution limitation(s) [receiving a first set of network activity data from at least one of: a database or a device operating in a subject network; continuously monitoring network activity in the subject network to receive a second set of network activity data configured to detect an anomaly with respect to a categorical variable; wherein the at least one mitigation action comprises at least one of: disconnecting the device from a network, turning off one or more ports, or disconnecting a device from the host] that do not transform the patent-ineligible concept of an abstract idea to a patent-eligible concept even if they are performed using general purpose computer, as these pre-solution limitation(s) and post-solution limitation(s) add insignificant extrasolution activity to the judicial exception. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. 
Additionally, adding the words “apply it” (or an equivalent) with the judicial exception (i.e., applying the judicial exception to network security), giving mere instructions to implement an abstract idea on a computer, or generally linking the use of the judicial exception to a particular technological environment or field of use (i.e., network security) is also not enough to qualify as significantly more. Dependent Claims: Step 1: Statutory Category. Claims 3-9 and 13-22 are directed to a statutory category of subject matter. The claims fall within at least one of the four categories of patent-eligible subject matter because each is directed to a process, machine, manufacture, or composition of matter. Step 2A: Judicial Exception. Claims 3-9 and 13-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
The claim(s) are directed to abstract idea of determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the 
anomaly is detected, without significant extrasolution activities, as explained in detail below. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional computer elements, which are recited at a high level of generality, provide conventional computer functions that do not add meaningful limits to practicing the abstract idea. The dependent claims recite, in part: Claims 3, 13. The method of claim 1, wherein determining the first discrete probability distribution further comprises: determining a time window such that activity with respect to the categorical variable is assumed to be fully observed during the time window, wherein a duration of the time window is based on a type of the categorical variable, wherein the first discrete probability distribution is determined based on a portion of the first set of network activity data corresponding to the time window. Claims 4, 14. The method of claim 3, wherein determining the first discrete probability distribution further comprises: determining a sub-population of devices and systems indicated in the first network activity data, wherein the sub-population of devices and systems has a common attribute, wherein the portion of the first set of network activity data corresponding to the time window is related to the sub-population of devices. Claims 5, 15. The method of claim 1, wherein the scalar value increases as the difference between the first and second discrete probability distributions increases. Claims 6, 16. The method of claim 1, wherein the threshold is associated with the categorical variable. Claims 7, 17. The method of claim 1, wherein each discrete probability distribution indicates a probability of each of a plurality of potential categories for the categorical variable. Claims 8, 18.
The method of claim 1, wherein the distance function is any of: a cross-entropy distance function, and a chi-squared statistic function. Claims 9, 19. The method of claim 1, wherein the categorical variable is any of: a host, a communication channel, and a port. Claim 20. The non-transitory computer readable medium of claim 10, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values and the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values. These steps describe the concept of determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible 
values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected, without significant extrasolution activities, which corresponds to concepts identified as abstract ideas by the courts, such as performing statistical analysis (SAP America v. InvestPic). All of these concepts relate to “Mathematical Relationships/Formulas,” which encompass “mathematical concepts such as mathematical algorithms, mathematical relationships, mathematical formulas, and calculations.” The concept described in the claim(s) is not meaningfully different from the “Mathematical Relationships/Formulas” found by the courts to be abstract ideas.
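The mathematical character of the recited comparison can be sketched in a few lines. The following is a hypothetical illustration assuming cross-entropy as the distance function (one of the two options recited in claims 8 and 18); the distributions, threshold value, and identifiers are invented for illustration and are not taken from the application or the cited art.

```python
import math

def cross_entropy(baseline, observed):
    # Scalar distance between two discrete distributions over the same
    # categorical variable (e.g., ports contacted by a device).
    # Hypothetical sketch; not the application's disclosed algorithm.
    eps = 1e-12  # guard against log(0) for categories absent from the baseline
    return -sum(p * math.log(baseline.get(cat, 0.0) + eps)
                for cat, p in observed.items())

# Baseline window: the device historically talks on ports 443 and 80.
baseline = {"443": 0.7, "80": 0.3}
# Monitored window: traffic shifts heavily toward port 22.
observed = {"443": 0.1, "80": 0.1, "22": 0.8}

THRESHOLD = 2.0  # hypothetical threshold tied to the variable's type
scalar = cross_entropy(baseline, observed)
anomaly_detected = scalar > THRESHOLD
```

The entire determination reduces to computing one scalar and comparing it to a number, which is the basis for the examiner's "mathematical concepts" characterization.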
As such, the description in the claim(s) of determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the 
anomaly is detected, without significant extrasolution activities is an abstract idea. Enfish, LLC v. Microsoft Corp. 822 F.3d 1327, 1335-36 (Fed. Cir. 2016) (“[T]he first step in the Alice inquiry in this case asks whether the focus of the claims [was] on the specific asserted improvement in computer capabilities … or, instead, on a process that qualifies as an ‘abstract idea’ for which computers are invoked merely as a tool.”) No such evidence exists on this record. Unlike Enfish, where the claims were focused on a specific improvement in how the computer functioned, the claim here merely uses the computer as a tool to perform the abstract concepts, and the claims are not rooted in technology and simply employ conventional techniques used by humans for determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying
a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected, without significant extrasolution activities. The claim here is not similar to the innovative logical model for a computer database claimed in the Enfish patent (pp. 2-3), nor does it assert a similar specific improvement in computer capabilities (p. 7). Rather, the claim here is directed to automating a human behavior or task. (See the Enfish Memo and Enfish v. Microsoft, May 2016.) In addition, simply limiting the invention to a technological environment does “not make an abstract concept any less abstract under step one.” Intellectual Ventures I, 850 F.3d at 1340. Therefore, based on the similarity of the concept described in this claim to abstract ideas identified by the courts, the claim is directed to an abstract idea. For these reasons, the claims are ineligible. Step 2B: Additional Elements Significantly More Than the Judicial Exception. The dependent claim(s) do/does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements when considered both individually and as an ordered combination do not amount to significantly more than the abstract idea.
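The comparison-and-threshold determination recited throughout the claims can be illustrated with the other distance option recited in claims 8 and 18, a chi-squared statistic. This is a hypothetical sketch only; the per-type thresholds and function names are invented, not drawn from the application or the cited references.

```python
def chi_squared(baseline, observed):
    # Chi-squared statistic comparing an observed discrete distribution
    # against a baseline distribution over the same categories.
    eps = 1e-12  # avoid division by zero for categories unseen in the baseline
    categories = set(baseline) | set(observed)
    return sum(
        (observed.get(c, 0.0) - baseline.get(c, 0.0)) ** 2
        / (baseline.get(c, 0.0) + eps)
        for c in categories
    )

# Claims 6/16 and the independent claims tie the threshold to the
# categorical variable or its type; these values are illustrative only.
THRESHOLDS = {"port": 0.5, "host": 1.0, "communication_channel": 0.8}

def is_anomalous(var_type, baseline, observed):
    return chi_squared(baseline, observed) > THRESHOLDS[var_type]
```

Because each term grows with the squared gap between the two distributions, the scalar increases as the distributions diverge, consistent with the behavior recited in claims 5 and 15.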
The claim recites the additional limitations of a “processing circuitry” and a “memory,” the memory containing instructions that, when executed by the processing circuitry, configure the system to: Claims 3, 13. The method of claim 1, wherein determining the first discrete probability distribution further comprises: determining a time window such that activity with respect to the categorical variable is assumed to be fully observed during the time window, wherein a duration of the time window is based on a type of the categorical variable, wherein the first discrete probability distribution is determined based on a portion of the first set of network activity data corresponding to the time window. Claims 4, 14. The method of claim 3, wherein determining the first discrete probability distribution further comprises: determining a sub-population of devices and systems indicated in the first network activity data, wherein the sub-population of devices and systems has a common attribute, wherein the portion of the first set of network activity data corresponding to the time window is related to the sub-population of devices. Claims 5, 15. The method of claim 1, wherein the scalar value increases as the difference between the first and second discrete probability distributions increases. Claims 6, 16. The method of claim 1, wherein the threshold is associated with the categorical variable. Claims 7, 17. The method of claim 1, wherein each discrete probability distribution indicates a probability of each of a plurality of potential categories for the categorical variable. Claims 8, 18. The method of claim 1, wherein the distance function is any of: a cross-entropy distance function, and a chi-squared statistic function. Claims 9, 19. The method of claim 1, wherein the categorical variable is any of: a host, a communication channel, and a port. Claim 20.
The non-transitory computer readable medium of claim 10, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values and the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values. The “processing circuitry” and “memory” are recited at a high level of generality and are recited as performing generic computer functions routinely used in computer applications. Generic computer components recited as performing generic computer functions that are well-understood, routine and conventional activities amount to no more than implementing the abstract idea with a computerized system. Next, “determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second 
probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected,” is stated at a high level of generality without tying it to an algorithm that would improve the functionality of the technology and its broadest reasonable interpretation comprises only determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique 
observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly is detected, without significant extrasolution activities, through the use of some unspecified generic computers and interface. 
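For contrast with the generality the rejection describes, one concrete instantiation of the time-window step of claims 3 and 13, in which the window duration depends on the type of the categorical variable, might look like the following sketch. All durations and identifiers here are hypothetical; neither the application nor the cited art is represented as disclosing this mapping.

```python
from datetime import datetime, timedelta

# Hypothetical type-specific windows over which activity for a
# categorical variable is assumed to be fully observed (claims 3/13).
WINDOWS = {
    "port": timedelta(hours=1),
    "host": timedelta(days=1),
    "communication_channel": timedelta(hours=6),
}

def window_slice(events, var_type, now):
    # Keep only the portion of the network activity data that falls
    # inside the window for this variable type; the first discrete
    # probability distribution would then be estimated from this slice.
    start = now - WINDOWS[var_type]
    return [e for e in events if start <= e["timestamp"] <= now]
```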
The use of generic computer components for determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable, wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values; determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation, wherein the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values; comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable; detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold, and determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and performing at least one mitigation action when the anomaly 
is detected, without significant extrasolution activities, through an unspecified interface does not impose any meaningful limit on the computer implementation of the abstract idea. These dependent claims include insignificant pre-solution limitation(s) and post-solution limitation(s) that do not transform the patent-ineligible concept of an abstract idea to a patent-eligible concept even if they are performed using a general-purpose computer, as these pre-solution limitation(s) and post-solution limitation(s) add insignificant extrasolution activity to the judicial exception. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Additionally, adding the words “apply it” (or an equivalent) with the judicial exception (i.e., applying the judicial exception to network security), or mere instructions to implement an abstract idea on a computer or generally linking the use of the judicial exception to a particular technological environment or field of use (i.e., network security) is also found to not be enough to qualify as significantly more. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1, 3, 5-7, 9-11, 13, 15-17, 19, 20, 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dodson (US 20180330257) in view of Shintre (US 10686816), and further in view of Savkli (US 20180307943). 10, 11. Dodson teaches: A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to execute a process for improving network security by detecting anomalies in categorical network behavior, the process comprising: – in paragraphs [0004], [0017], [0055]-[0059], [0067] (A system for real time detection of cyber threats comprising: (a) a processor; and (b) a memory for storing executable instructions, the processor executing the instructions to: ….) receiving a first set of network activity data from at least one of: a database or a device operating in a subject network; – in paragraphs [0015]-[0050] (Obtaining entity data for an entity over a given period of time. The entity data collected can comprise direct behaviors of the entity itself, such as collecting log data, DNS lookups, data exfiltration, network connections by process type, and so forth. These types of data can be gathered directly from the entity creating the entity data.) 
continuously monitoring network activity in the subject network to receive a second set of network activity data configured to detect an anomaly with respect to categorical variable; – in paragraphs [0015]-[0050] (Obtaining entity data for an entity over a given period of time. The entity data collected can comprise direct behaviors of the entity itself, such as collecting log data, DNS lookups, data exfiltration, network connections by process type, and so forth. These types of data can be gathered directly from the entity creating the entity data. Allow the system 105 to detect and potentially remediate zero day threats by detecting abnormal variations (anomalies) in entity data in either a self-referential manner or when compared to other similar entities in a population. The inclusion of unsupervised machine learning can provide for the system 105 to evaluate only the entity data available and examine these data instances for anomalous behavior in a self-referential manner.) determining a first discrete probability distribution for the categorical variable based on the first set of network activity data including at least one instance of the categorical variable, – in paragraphs [0015]-[0054] (The entity data can be categorical and/or numerical. For example, a categorical value is non-numerical data such as a username, a location, and so forth. The probabilistic modeling module 130 is executed to create a probabilistic model of entity data for an entity, where the entity data is collected over a period of time. These entities (and the model(s) created therefrom) may be considered a control or baseline that is used to create a probabilistic model that can be used as a comparison tool relative to the probabilistic model for the entity. The control portion establishes a baseline for later comparison. 
The probabilistic modeling module 130 creates the probabilistic model for the entity and also creates a probabilistic model for a population of entities that are similar to the entity under review.) wherein determining the first discrete probability distribution includes establishing a baseline behavioral model for the categorical variable over a predetermined time window specific to a type of the categorical variable; – in paragraphs [0015]-[0054] (The entity data can be categorical and/or numerical. For example, a categorical value is non-numerical data such as a username, a location, and so forth. The probabilistic modeling module 130 is executed to create a probabilistic model of entity data for an entity, where the entity data is collected over a period of time. These entities (and the model(s) created therefrom) may be considered a control or baseline that is used to create a probabilistic model that can be used as a comparison tool relative to the probabilistic model for the entity. The control portion establishes a baseline for later comparison. The probabilistic modeling module 130 creates the probabilistic model for the entity and also creates a probabilistic model for a population of entities that are similar to the entity under review.) determining a second discrete probability distribution for a unique observation based on the second set of network activity data including data representing the unique observation; – in paragraphs [0015]-[0054] (The probabilistic modeling module 130 creates the probabilistic model for the entity and also creates a probabilistic model for a population of entities that are similar to the entity under review. The method steps disclosed herein are executed in real time, meaning that the steps are executed as entity data is received. This can include receiving entity data as the entity data is created. For example, as the user logs on, log on information is transmitted to the system 105.) 
comparing the second discrete probability distribution to the first discrete probability distribution by applying a distance function to the first discrete probability distribution and the second discrete probability distribution, – in paragraphs [0015]-[0054] (These entities (and the model(s) created therefrom) may be considered a control or baseline that is used to create a probabilistic model that can be used as a comparison tool relative to the probabilistic model for the entity. Comparing the entity probability model to the population probability model to identify an anomaly between the entity probability model to the population probability model. Deviations between the entity probability model and the population probability model can be flagged as anomalous and subject to further review.) detecting the anomaly with respect to the categorical variable when the scalar value is above the threshold; – in paragraphs [0015]-[0054] (If the network traffic difference detected is greater than 125% of normal network traffic, the entity is flagged as anomalous. If the anomaly includes a high rate of access to a particular database, the remediation module 140 may restrict access privileges for the database until the anomaly is reviewed. If the anomaly is unusually frequent file transfers (e.g., exfiltration) of high volumes of data outside a protected network, the remediation module 140 may restrict file transfers by specifically identified machines in the network.) determining that a behavior with respect to the categorical variable is normal when the scalar value is not above the threshold; and – in paragraphs [0015]-[0054] (If the network traffic difference detected is greater than 125% of normal network traffic, the entity is flagged as anomalous.) 
performing at least one mitigation action when the anomaly is detected, – in paragraphs [0015]-[0050] (Once an anomaly has been detected and a cause or causes isolated, the remediation module 140 is executed to remediate the cause or causes.) wherein the at least one mitigation action comprises at least one of: disconnecting the device from a network, turning off one or more ports, or disconnecting the device from a host. – in paragraphs [0051]-[0054] (The remediation action is taken in accordance with the one or more anomalies that are detected. For example, if the anomaly indicates that the user is exfiltrating data to a third party, the remediation can include disabling the computing device of the entity or terminating network access of the entity.) Dodson does not explicitly teach: discrete probability distribution. However, Savkli teaches: discrete probability distribution – in paragraphs [0064]-[0076] (Each component contains a discrete probability distribution for each attribute of the dataset.) It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Dodson with Savkli to include discrete probability distribution, as taught by Savkli, in paragraphs [0002]-[0008], to provide techniques for clustering and classifying data or entities within large data sets and analyzing high dimensional data sets, including non-numerical data, to determine, for example, whether a particular entity associated with the data is normal, classify the entity, or identify similar entities. The combination of Dodson and Savkli does not explicitly teach: wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable.
However, Shintre teaches: wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; – on lines 1-67 in column 4, on lines 1-67 in column 5 (Compute a variance of the compared probability distributions, including a maximum variational distance between the compared probability distributions. The computing device may determine the convergence of the probability distribution via comparison of a computed scalar value of the distribution to a pre-configured threshold value.) determining whether the scalar value is above a threshold associated with the type of the categorical variable; – on lines 1-67 in column 5, on lines 1-67 in column 17 (The scalar value may exceed the threshold value. At block 720, device 105, computing device 145, and/or server 110 may identify anomalous activity in relation to the set of users based on the variation exceeding a pre-configured threshold of the manager.) It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Dodson and Savkli with Shintre to include wherein an output of the distance function is a scalar value representing a difference between the first discrete probability distribution and the second discrete probability distribution; determining whether the scalar value is above a threshold associated with the type of the categorical variable, as taught by Shintre, on lines 5-67 in column 1, on lines 1-67 in column 2, on lines 1-67 in column 3, on lines 1-67 in column 4, to detect anomalous file access patterns among one or more users of a system. 20. The non-transitory computer readable medium of claim 10, – refer to the indicated claim for reference(s). 
Dodson teaches: wherein the first discrete probability distribution comprises a plurality of first possible values for the categorical variable and a respective first probability corresponding to each respective first possible value of the plurality of first possible values and – in paragraphs [0015]-[0050] (The entity data can be categorical and/or numerical. For example, a categorical value is non-numerical data such as a username, a location, and so forth. The probabilistic modeling module 130 is executed to create a probabilistic model of entity data for an entity, where the entity data is collected over a period of time. These entities (and the model(s) created therefrom) may be considered a control or baseline that is used to create a probabilistic model that can be used as a comparison tool relative to the probabilistic model for the entity. The probabilistic modeling module 130 creates the probabilistic model for the entity and also creates a probabilistic model for a population of entities that are similar to the entity under review. The probabilistic model for both the entity and the population can be created using any known methods. For example, clustering the data, capturing a parametric description of the typical deviation of values around each cluster, using, by way of example, a covariance matrix, and describing the overall density function as a weighted sum of normal density functions.) the second discrete probability distribution comprises a plurality of second possible values for the unique observation and a respective second probability corresponding to each respective second possible value of the plurality of second possible values. – in paragraphs [0015]-[0050] (The entity data can be categorical and/or numerical. For example, a categorical value is non-numerical data such as a username, a location, and so forth. 
The probabilistic modeling module 130 is executed to create a probabilistic model of entity data for an entity, where the entity data is collected over a period of time. These entities (and the model(s) created therefrom) may be considered a control or baseline that is used to create a probabilistic model that can be used as a comparison tool relative to the probabilistic model for the entity. The probabilistic modeling module 130 creates the probabilistic model for the entity and also creates a probabilistic model for a population of entities that are similar to the entity under review. The probabilistic model for both the entity and the population can be created using any known methods. For example, clustering the data, capturing a parametric description of the typical deviation of values around each cluster, using, by way of example, a covariance matrix, and describing the overall density function as a weighted sum of normal density functions.) 1. Claim 1 is substantially similar to claims 10, 11, and 20. 3, 13. The method of claim 1 – refer to the indicated claim for reference(s). Dodson teaches: wherein determining the first discrete probability distribution further comprises: determining a time window such that activity with respect to the categorical variable is assumed to be fully observed during the time window, wherein a duration of the time window is based on a type of the categorical variable, wherein the first discrete probability distribution is determined based on a portion of the first set of network activity data corresponding to the time window. – in paragraphs [0015]-[0054] (The entity data can be categorical and/or numerical. For example, a categorical value is non-numerical data such as a username, a location, and so forth. The probabilistic modeling module 130 is executed to create a probabilistic model of entity data for an entity, where the entity data is collected over a period of time.
These entities (and the model(s) created therefrom) may be considered a control or baseline that is used to create a probabilistic model that can be used as a comparison tool relative to the probabilistic model for the entity. The control portion establishes a baseline for later comparison. The probabilistic modeling module 130 creates the probabilistic model for the entity and also creates a probabilistic model for a population of entities that are similar to the entity under review.)

5, 15. The method of claim 1 – refer to the indicated claim for reference(s).

Shintre further teaches: wherein the scalar value increases as the difference between the first discrete probability distribution and the second discrete probability distribution increases. – on lines 1-67 in column 4, on lines 1-67 in column 5 (Compute a variance of the compared probability distributions, including a maximum variational distance between the compared probability distributions. The computing device may determine the convergence of the probability distribution via comparison of a computed scalar value of the distribution to a pre-configured threshold value.)

6, 16. The method of claim 1 – refer to the indicated claim for reference(s).

Dodson teaches: wherein the threshold is associated with the categorical variable. – in paragraphs [0015]-[0054] (If the network traffic difference detected is greater than 125% of normal network traffic, the entity is flagged as anomalous. If the anomaly includes a high rate of access to a particular database, the remediation module 140 may restrict access privileges for the database until the anomaly is reviewed. If the anomaly is unusually frequent file transfers (e.g., exfiltration) of high volumes of data outside a protected network, the remediation module 140 may restrict file transfers by specifically identified machines in the network.)

7, 17. The method of claim 1 – refer to the indicated claim for reference(s).
Dodson teaches: wherein each discrete probability distribution indicates a probability of each of a plurality of potential categories for the categorical variable. – in paragraphs [0015]-[0054] (If the network traffic difference detected is greater than 125% of normal network traffic, the entity is flagged as anomalous. If the anomaly includes a high rate of access to a particular database, the remediation module 140 may restrict access privileges for the database until the anomaly is reviewed. If the anomaly is unusually frequent file transfers (e.g., exfiltration) of high volumes of data outside a protected network, the remediation module 140 may restrict file transfers by specifically identified machines in the network.)

9, 19, 22. The method of claim 1 – refer to the indicated claim for reference(s).

Dodson teaches: wherein the categorical variable is any of: a host, a communication channel, and a port. – in paragraphs [0051]-[0054] (The remediation action is taken in accordance with the one or more anomalies that are detected. For example, if the anomaly indicates that the user is exfiltrating data to a third party, the remediation can include disabling the computing device of the entity or terminating network access of the entity.)

Claim(s) 4 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dodson (US 20180330257) in view of Shintre (US 10686816), and further in view of Savkli (US 20180307943) and Dean et al. (Pub. No.: US 2020/0280575, hereinafter, “Dean”).

4, 14. The method of claim 3 – refer to the indicated claim for reference(s).
The combination of Dodson, Savkli, and Shintre does not explicitly teach: wherein determining the first discrete probability distribution further comprises: determining a sub-population of devices and systems indicated in the first set of network activity data, wherein the sub-population of devices and systems has a common attribute, wherein the portion of the first set of network activity data corresponding to the time window is related to the sub-population of devices.

However, Dean teaches: wherein determining the first discrete probability distribution further comprises: determining a sub-population of devices and systems indicated in the first set of network activity data, wherein the sub-population of devices and systems has a common attribute, wherein the portion of the first set of network activity data corresponding to the time window is related to the sub-population of devices. – in paragraphs [0057]-[0074], [0293]-[0296] (One may model the tail probabilities (1) separately for some devices. As well as this one may wish to group certain subsets of the network devices together and build a single model for the tail probabilities of the devices in the subset based on the union of the observations of the metric for each individual device in the group. The groups may be manually specified by a user, may be created by grouping all devices of a certain type e.g. all desktops on a subnet or may be determined algorithmically by applying a clustering algorithm to some feature set.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Dodson, Savkli, and Shintre with Dean to include wherein determining the first discrete probability distribution further comprises: determining a sub-population of devices and systems indicated in the first set of network activity data, wherein the sub-population of devices and systems has a common attribute, wherein the portion of the first set of network activity data corresponding to the time window is related to the sub-population of devices, as taught by Dean, in paragraphs [0002]-[0037], to provide a technique for detecting potentially malicious network activity and a technique for representing the output of anomaly detection algorithms to non-expert users.

Claim(s) 8, 18, 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dodson (US 20180330257) in view of Shintre (US 10686816), and further in view of Savkli (US 20180307943) and Lin (US 20200028862).

8, 18, 21. The method of claim 1 – refer to the indicated claim for reference(s).

The combination of Dodson, Savkli, and Shintre does not explicitly teach: wherein the distance function is any of: a cross-entropy distance function, and a chi-squared statistic function.

However, Lin teaches: wherein the distance function is any of: a cross-entropy distance function, and a chi-squared statistic function. – in paragraph [0083] (The exponentially-weighted moving average and variance computed in the cloud is Mahalanobis distance (MD), which is a multi-dimensional measure of a distance between a point and a distribution (akin to measuring how many standard deviations away a point is from a distribution mean). While Mahalanobis distance is one preferred metric, it is not intended to be limiting, as other metrics that may be computed in the cloud include, without limitation, Pearson's chi-squared test, matching likelihood, and others.)
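For context, the Pearson chi-squared statistic that Lin names as an alternative metric reduces the difference between observed categorical activity and a baseline distribution to a single score. The sketch below is our own illustration, not part of the Office Action or any cited reference; all category names and counts are hypothetical.

```python
from collections import Counter

def chi_squared_statistic(observed, baseline):
    """Pearson chi-squared statistic comparing observed categorical counts
    against the counts expected under a baseline probability distribution."""
    total = sum(observed.values())
    stat = 0.0
    for category, p in baseline.items():
        expected = p * total  # count expected under the baseline
        if expected > 0:
            diff = observed.get(category, 0) - expected
            stat += diff * diff / expected
    return stat

# Hypothetical example: destination ports contacted by one device.
baseline = {"443": 0.70, "80": 0.25, "22": 0.05}     # historical profile
observed = Counter({"443": 30, "80": 10, "22": 60})  # unusual SSH activity
score = chi_squared_statistic(observed, baseline)
print(f"chi-squared score: {score:.1f}")  # a large score signals divergence
```

A score near zero means the observed counts match the baseline; the score grows rapidly as rare categories become common, which is why it can serve as the claimed distance function.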
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Dodson, Savkli, and Shintre with Lin to include wherein the distance function is any of: a cross-entropy distance function, and a chi-squared statistic function, as taught by Lin, in paragraphs [0001]-[0006], to detect anomalous or malicious network activities or user behavior, e.g., in an enterprise network.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MUHAMMAD RAZA whose telephone number is (571)272-7734. The examiner can normally be reached Monday-Friday, 7:00 A.M.-5:00 P.M. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivek Srivastava, can be reached at (571)272-7304. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MUHAMMAD RAZA/
Primary Examiner, Art Unit 2449
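To make the claimed technique concrete for readers outside the art, here is a minimal sketch of our own (not taken from the application or any reference of record): two discrete probability distributions are estimated for a categorical variable, their difference is reduced to a scalar (here the total variation, i.e. "maximum variational", distance echoed in the Shintre citation), and the scalar is compared against a per-variable threshold. All names, counts, and the threshold value are hypothetical.

```python
from collections import Counter

def empirical_distribution(observations):
    """Estimate a discrete probability distribution from observed categorical values."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

def total_variation_distance(p, q):
    """Scalar in [0, 1] that grows as two discrete distributions diverge."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in support)

# Hypothetical example: destination ports seen for one device.
baseline = empirical_distribution(["443"] * 90 + ["80"] * 10)  # historical window
recent = empirical_distribution(["443"] * 40 + ["22"] * 60)    # current window
THRESHOLD = 0.3  # assumed per-variable threshold
anomalous = total_variation_distance(baseline, recent) > THRESHOLD
print(anomalous)  # prints True for these hypothetical counts
```

The distance is 0 when the two windows show identical category frequencies and 1 when their supports are disjoint, so any monotone distance function (cross-entropy, chi-squared) could be swapped in without changing the threshold-comparison structure.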

Prosecution Timeline

Jan 28, 2021
Application Filed
Oct 26, 2022
Non-Final Rejection — §101, §103
May 01, 2023
Response Filed
Jul 18, 2023
Applicant Interview (Telephonic)
Jul 23, 2023
Final Rejection — §101, §103
Sep 18, 2023
Applicant Interview (Telephonic)
Sep 18, 2023
Examiner Interview Summary
Jan 26, 2024
Request for Continued Examination
Feb 02, 2024
Response after Non-Final Action
Aug 26, 2025
Non-Final Rejection — §101, §103
Mar 02, 2026
Response Filed
Mar 17, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603935
WORKFLOW COORDINATION IN COORDINATION NAMESPACE
2y 5m to grant Granted Apr 14, 2026
Patent 12598147
COLLABORATIVE RELATIONAL MANAGEMENT OF NETWORK AND CLOUD-BASED RESOURCES
2y 5m to grant Granted Apr 07, 2026
Patent 12592917
NETWORK LINK ESTABLISHMENT IN A MULTI-CLOUD INFRASTRUCTURE
2y 5m to grant Granted Mar 31, 2026
Patent 12587451
AUTOMATING SECURED DEPLOYMENT OF CONTAINERIZED WORKLOADS ON EDGE DEVICES
2y 5m to grant Granted Mar 24, 2026
Patent 12580978
APPLICATION-CENTRIC WEB PROTOCOL-BASED DATA STORAGE
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+70.8%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 274 resolved cases by this examiner. Grant probability derived from career allow rate.
