Prosecution Insights
Last updated: April 19, 2026
Application No. 17/551,348

LARGE SCALE SURVEILLANCE OF DATA NETWORKS TO DETECT ALERT CONDITIONS

Non-Final OA (§103)
Filed: Dec 15, 2021
Examiner: LONG, EDWARD X
Art Unit: 2439
Tech Center: 2400 — Computer Networks
Assignee: Refinitiv US Organization LLC
OA Round: 5 (Non-Final)

Grant Probability: 73% (Favorable); 99% with interview
Projected OA Rounds: 5-6
Time to Grant: 2y 11m

Examiner Intelligence

Career Allow Rate: 73% (134 granted / 184 resolved; +14.8% vs TC avg), above average
Interview Lift: +47.9% (resolved cases with vs. without interview), a strong lift
Typical Timeline: 2y 11m average prosecution; 20 applications currently pending
Career History: 204 total applications across all art units

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 4.8% (-35.2% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 184 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/12/2025 for application 17/551,348 has been entered.

This Office Action is in response to the Amendment filed on 09/12/2025. In the instant Amendment, claims 1, 13, and 19 have been amended and remain independent; claim 20 has been cancelled; and claim 22 is newly added. Claims 1-19 and 21-22 have been examined and are pending.

Response to Arguments

The rejection of claim 19 under 35 U.S.C. 101 has been withdrawn in light of the claim amendment. Applicant's arguments in the instant Amendment, filed on 09/12/2025, with respect to the limitations listed below, have been fully considered but are not persuasive.

Applicant argues: Lifshitz does not teach the amended claim limitation “generate, for a data record, a plurality of differences comprising at least a first difference…for a first tier…and at least a second difference…for a second tier…that at least partially overlaps with the first time period… compare each difference… to a respective one of the learned filter parameters for the respective one of the multiple tiers; and generate a plurality of alert conditions for the data record based on the comparison, each alert condition indicating whether or not an alert is to be raised for the data record in a corresponding tier from among the multiple tiers” of amended claim 1. See Remarks at 11 (emphasis original).
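For orientation, the disputed limitation describes per-tier deviation checks over overlapping look-back windows that share a common endpoint. The pattern can be sketched as follows; this is a minimal illustration, not the claimed implementation, and the tier names, durations, field names, and thresholds are all hypothetical:

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical tiers: overlapping look-back windows that each end at the
# record's reference time (names and durations are illustrative only).
TIERS = {"one_week": timedelta(days=7), "one_month": timedelta(days=30)}

def alert_conditions(record, history, learned_params):
    """Generate one alert condition per tier for a single data record.

    record         -- {"time": datetime, "value": float}
    history        -- list of {"time": datetime, "value": float}
    learned_params -- {tier: max tolerated deviation}, assumed learned
                      offline from each tier's historical window
    """
    conditions = {}
    for tier, window in TIERS.items():
        # Limit the historical dataset to this tier's (overlapping) window.
        window_vals = [h["value"] for h in history
                       if record["time"] - window <= h["time"] < record["time"]]
        if not window_vals:
            continue  # no history for this tier; no condition generated
        # The per-tier difference: record field vs. this tier's history.
        difference = abs(record["value"] - mean(window_vals))
        # Compare the difference to this tier's learned filter parameter.
        conditions[tier] = difference > learned_params[tier]
    return conditions  # one alert condition (raise / don't raise) per tier
```

Under this reading, a tight one-week parameter can fire while a looser one-month parameter does not, so the same record can yield independent alert conditions in different overlapping tiers.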
The examiner respectfully disagrees. Applicant alleges that Lifshitz fails to disclose the above limitation on the basis that “[a]s claimed, separate deviations for the same data record across different overlapping tiers (e.g., compare today’s trade size to one-week history, and separately compare today’s trade size to one-month history), then compare each deviation to the learned filter parameter of that tier. This allows multiple independent alerts to be raised for the same record depending on which tier(s) (such as one-week history, one month history, and so on) show abnormality.” Id. at 11 (emphasis added). In contrast to these assertions, Lifshitz teaches monitoring a group of IoT devices (i.e., a data record) over many time periods, and generating alerts for one or more of such IoT devices that exhibit abnormal network behavior:

[A] set of IoT devices, such as, an array of Internet-connected vending machines, or an array of Internet-connected smoke detectors, exhibit a generally stable and predictable network behavior and/or traffic activity; for example, each vending machine sends 3 kilobytes of data every hour on the hour, and also sends additional 7 kilobytes of data once a day at 3 AM. Accordingly, the system of the present invention may utilize this information in order to detect anomalies; for example, observing that ten vending machines are currently sending 800 kilobytes of data every 12 minutes, or that one vending machine is currently sending 240 kilobytes of data every minute, triggers a determination that this or these vending machine(s) is (or are) malfunctioning and/or compromised.

See Lifshitz ¶ [0019] (emphasis added).
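Concretely, the quoted vending-machine behavior amounts to comparing an observed send rate against a per-class baseline. A hedged sketch of that check (the baseline constant and helper are assumptions chosen to match the quoted numbers, not values from Lifshitz):

```python
# Hypothetical per-class baseline: the example vending machine sends about
# 3 KB hourly plus 7 KB once daily, i.e. well under 1 KB per minute.
EXPECTED_MAX_KB_PER_MIN = 1.0  # assumed threshold for illustration

def is_anomalous(kb_sent: float, minutes: float) -> bool:
    """Flag a device whose observed send rate far exceeds the class baseline."""
    return (kb_sent / minutes) > EXPECTED_MAX_KB_PER_MIN

# The paragraph's anomalies (800 KB every 12 minutes; 240 KB every minute)
# both exceed the baseline and would trigger a malfunction/compromise alert.
```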
Here, Lifshitz teaches monitoring a group of IoT devices across many (and possibly overlapping) time periods (e.g., on a 1-minute interval, a 10-minute interval, or other time intervals), detecting abnormal network behavior that significantly deviates from expected normal behavior (e.g., on an hourly or daily basis), and generating alerts and possible remedial actions for each of, or for sub-groups of, these IoT devices. Thus, applicant’s allegation that Lifshitz fails to disclose the above limitation because Lifshitz does not teach the concept of “separate deviations for the same data record across different overlapping tiers (e.g., compare today’s trade size to one-week history, and separately compare today’s trade size to one-month history), then compare each deviation to the learned filter parameter of that tier. This allows multiple independent alerts to be raised for the same record depending on which tier(s) (such as one-week history, one month history, and so on) show abnormality” is not persuasive.

In conclusion, applicant’s arguments are unpersuasive and the rejection of claim 1 is maintained. The rejections of independent claims 13 and 19, which recite similar subject matter to claim 1, are likewise maintained. Applicant may arrange a phone interview with the Examiner to further discuss this application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 10, 11, 13, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Lifshitz et al. (“Lifshitz,” US 20190380037, published Dec. 12, 2019).

Regarding claim 1, Lifshitz discloses An electronic surveillance system that detects alert conditions in data networks, the electronic surveillance system comprising a processor programmed to: receive input data from one of a plurality of data networks, each data network from among the plurality of data networks being associated with different types of data from the respective data network (Lifshitz FIGs. 1-2, [0036], [0040]-[0041]. A Sensor Unit 221, or other sensing or listening or tracking or monitoring unit, sees or listens or monitors or tracks or captures or collects all the relevant network traffic (e.g., via Gi interface), as well as subscriber (IoT device) address mapping information (e.g., provided by a Subscriber Mapping unit 230, via Sm interface). The Sensor Unit 221 monitors and collects the following data for each of the endpoints identified as managed IoT devices, and/or for each data connection: (a) timestamp of start; (b) 5-tuple of the connections (e.g., source IP address, source port, destination IP address, destination port, protocol being used); (c) Identified protocols; (d) upstream volume of traffic; (e) downstream volume of traffic; (f) upstream packet count; (g) downstream packet count. Outliers are detected and flagged as suspicious, for example, based on distance greater than a pre-defined threshold value, or based on other indicators for irregularity of values or ranges-of-values; and a notification is generated with regard to such flagged IoT device, e.g., for further manual and/or automatic handling, for initiating attack mitigation operations, for remote de-activation or remote pausing of the IoT device, or the like.); store a data record based on the input data (Lifshitz [0040].
The Sensor Unit 221 monitors and collects the following data for each of the endpoints identified as managed IoT devices, and/or for each data connection: (a) timestamp of start; (b) 5-tuple of the connections (e.g., source IP address, source port, destination IP address, destination port, protocol being used); (c) Identified protocols; (d) upstream volume of traffic; (e) downstream volume of traffic; (f) upstream packet count; (g) downstream packet count. The data is periodically collected (e.g., at pre-defined time intervals) by a Data Collector unit 211 (e.g., via Cl interface), and is stored in a repository therein.); access a plurality of learned filter parameters for alert condition detection, each learned filter parameter from among the plurality of learned filter parameters being learned from historical datasets over a respective one of multiple tiers using computational modeling based on statistical analysis and/or [machine-learning and defining a value or range of values for which an alert condition is to be raised with respect to the respective one of the multiple tiers] (Lifshitz [0040] – [0041]. The Sensor Unit 221 monitors and collects the following data for each of the endpoints identified as managed IoT devices, and/or for each data connection: (a) timestamp of start; (b) 5-tuple of the connections (e.g., source IP address, source port, destination IP address, destination port, protocol being used); (c) Identified protocols; (d) upstream volume of traffic; (e) downstream volume of traffic; (f) upstream packet count; (g) downstream packet count. 
An Analyzer unit 212 performs analysis of the collected data: (a) Network activity profiling, performed periodically (e.g., at pre-defined time intervals), by clustering the collected data (e.g., via Cd interface) using a pre-defined clustering mechanism or clustering algorithm (e.g., by utilizing K-Means, or other suitable clustering method); and performing extraction of features from the data-set, per class of IoT devices, wherein a class pertains to a set of IoT devices that belongs to the same IoT service or type (e.g., type of “vending machine”, or type of “smoke detector”). (b) Each new data point for a particular IoT device is compared to the cluster(s) of the class for that device; or, the features or characteristics of traffic pertaining to a particular IoT device, is compared to the features or characteristics that characterize the cluster of IoT devices of that type. (c) Outliers are detected and flagged as suspicious, for example, based on distance greater than a pre-defined threshold value, or based on other indicators for irregularity of values or ranges-of-values; and a notification is generated with regard to such flagged IoT device. [Note that “tiers” can refer to sets of information/criteria for alert analysis, see Specification par. [0040].] ); wherein each tier from among the multiple tiers corresponds to a respective time period in the historical datasets and the respective time period at least partially overlaps with a time period of at least one tier from among the multiple tiers (Lifshitz [0041], [0093], [0095], [0126]. An Analyzer unit 212 performs analysis of the collected data: (a) Network activity profiling, performed periodically (e.g., at pre-defined time intervals), by clustering the collected data (e.g., via Cd interface) using a pre-defined clustering mechanism or clustering algorithm (e.g., by utilizing K-Means, or other suitable clustering method); and performing extraction of features from the data-set, per class of IoT devices. 
In some embodiments, the IoT grouping unit is to group said multiple IoT devices into said particular IoT group, based on at least detection that each one of said IoT devices: (I) sends every T hours cellular data having total volume in the range of between M1 to M2 bytes… In some embodiments, the IoT grouping unit is to group said multiple IoT devices into said particular IoT group, based on at least detection that each one of said IoT devices receives incoming cellular data … not more than one time every T minutes. In some embodiments, the baseline behavior determination unit is to dynamically update said RBCCB profile of said particular IoT group, based on continued Machine Learning (ML) of traffic-related behavior of members of said particular IoT group.); generate, for a given data record, a plurality of differences comprising at least a first difference between one or more fields in the data record with corresponding one or more fields in the historical datasets for a first tier associated with a first time period, and at least a second difference between the one or more fields in the data record with corresponding one or more fields in the historical datasets for a second tier associated with a second time period that at least partially overlaps with the first time period, the first difference and the second difference each representing a difference between one or more fields in the data record with corresponding one or more fields in the historical datasets over the respective first and second time periods (Lifshitz [0019], [0040] – [0041], [0126]. [A] set of IoT devices, such as, an array of Internet-connected vending machines, or an array of Internet-connected smoke detectors, exhibit a generally stable and predictable network behavior and/or traffic activity; for example, each vending machine sends 3 kilobytes of data every hour on the hour, and also sends additional 7 kilobytes of data once a day at 3 AM. 
Accordingly, the system of the present invention may utilize this information in order to detect anomalies; for example, observing that ten vending machines are currently sending 800 kilobytes of data every 12 minutes, or that one vending machine is currently sending 240 kilobytes of data every minute, triggers a determination that this or these vending machine(s) is (or are) malfunctioning and/or compromised. The Sensor Unit 221 monitors and collects the following data for each of the endpoints identified as managed IoT devices, and/or for each data connection: (a) timestamp of start; (b) 5-tuple of the connections (e.g., source IP address, source port, destination IP address, destination port, protocol being used); (c) Identified protocols; (d) upstream volume of traffic; (e) downstream volume of traffic; (f) upstream packet count; (g) downstream packet count. An Analyzer unit 212 performs analysis of the collected data: (a) Network activity profiling, performed periodically (e.g., at pre-defined time intervals), by clustering the collected data (e.g., via Cd interface) using a pre-defined clustering mechanism or clustering algorithm (e.g., by utilizing K-Means, or other suitable clustering method); and performing extraction of features from the data-set, per class of IoT devices, wherein a class pertains to a set of IoT devices that belongs to the same IoT service or type (e.g., type of “vending machine”, or type of “smoke detector”). (b) Each new data point for a particular IoT device is compared to the cluster(s) of the class for that device; or, the features or characteristics of traffic pertaining to a particular IoT device, is compared to the features or characteristics that characterize the cluster of IoT devices of that type. 
(c) Outliers are detected and flagged as suspicious, for example, based on distance greater than a pre-defined threshold value, or based on other indicators for irregularity of values or ranges-of-values; and a notification is generated with regard to such flagged IoT device. In some embodiments, the baseline behavior determination unit is to dynamically update said RBCCB profile of said particular IoT group, based on continued Machine Learning (ML) of traffic-related behavior of members of said particular IoT group.); compare each difference from among the plurality of differences to a respective one of the learned filter parameters for the respective one of the multiple tiers; and generate a plurality of alert conditions for the data record based on the comparisons, each alert condition indicating whether or not an alert is to be raised for the data record in a corresponding tier from among the multiple tiers (Lifshitz [0060]-[0061]. An Outlier Detector unit 307 may detect that a particular IoT device exhibits network traffic characteristics that are dissimilar relative to said cluster of regular pattern of network traffic of said particular type of IoT devices. A Notifications Generator unit 308 may generate a notification or alarm or alert, that said particular IoT device is malfunctioning or is compromised, based on said dissimilar network traffic characteristics that are exhibited by said particular IoT device.); wherein a sanction is imposed in response to at least one alert condition from among the plurality of alert conditions, and wherein the sanction is recorded in association with the at least one alert condition (Lifshitz [0061], [0078]. A Notifications Generator unit 308 may generate a notification or alarm or alert, that said particular IoT device is malfunctioning or is compromised, based on said dissimilar network traffic characteristics that are exhibited by said particular IoT device.
In some embodiments, an Enforcement and Quarantine Unit 330, upon detection that said particular IoT device is malfunctioning or compromised, activates or operates a Full IoT Device Isolation Module 331 (i) to block relaying of all traffic that is outgoing from said particular IoT device to all destinations, and also (ii) to block relaying of all traffic that is incoming to said particular IoT device from all senders.) The embodiment of Lifshitz does not explicitly disclose: machine-learning and defining a value or range of values for which an alert condition is to be raised with respect to the respective one of the multiple tiers. However, in another embodiment, Lifshitz discloses machine-learning and defining a value or range of values for which an alert condition is to be raised with respect to the respective one of the multiple tiers (Lifshitz [0126], [0129], [0157]. In some embodiments, the baseline behavior determination unit is to dynamically update said RBCCB profile of said particular IoT group, based on continued Machine Learning (ML) of traffic-related behavior of members of said particular IoT group. In some embodiments, the enforcement actions generator is to send a notification, to an owner or an operator of said particular IoT group, indicating (i) an identifier of said particular IoT device, and (ii) an indication that said particular IoT device is malfunctioning or compromised. 
The UP Probe 432 may count these messages, per type of message and/or in the aggregation of types, and may track their number or their quantity per time-unit (e.g., per minute) per IoT device from which they originate and/or per type-of-device from which they originate; and the system may then compare the monitored data to pre-defined threshold values or ranges-of-values, and/or may perform Machine Learning (ML) processes of such data, in order to determine that the number (the quantity) and/or the frequency of such monitored control messages is excessively high and/or irregular and/or abnormal and thus indicates that the IoT device is infected and/or compromised and/or malfunctioning.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the embodiments of Lifshitz to include: machine-learning and defining a value or range of values for which an alert condition is to be raised with respect to the respective one of the multiple tiers. One would have been motivated to provide users with a means for using a machine-learning algorithm for analyzing and detecting abnormal network conditions in an IoT system. (See Lifshitz [0157].)

Regarding claim 10, Lifshitz discloses the system of claim 1. Lifshitz further discloses wherein to generate the plurality of differences, the processor is further programmed to: determine a notional quantity deviation between the one or more fields in the data record with corresponding one or more fields in the historical dataset (Lifshitz [0041].
An Analyzer unit 212 performs analysis of the collected data: (a) Network activity profiling, performed periodically (e.g., at pre-defined time intervals), by clustering the collected data (e.g., via Cd interface) using a pre-defined clustering mechanism or clustering algorithm (e.g., by utilizing K-Means, or other suitable clustering method); and performing extraction of features from the data-set, per class of IoT devices, wherein a class pertains to a set of IoT devices that belongs to the same IoT service or type (e.g., type of “vending machine”, or type of “smoke detector”). Outliers are detected and flagged as suspicious, for example, based on distance greater than a pre-defined threshold value, or based on other indicators for irregularity of values or ranges-of-values; and a notification is generated with regard to such flagged IoT device, e.g., for further manual and/or automatic handling, for initiating attack mitigation operations, for remote de-activation or remote pausing of the IoT device, or the like.). Regarding claim 11, Lifshitz discloses the system of claim 1. Lifshitz further discloses wherein to generate the plurality of differences, the processor is further programmed to: determine a statistical deviation between the one or more fields in the data record with corresponding one or more fields in the historical dataset (Lifshitz [0040] - [0041]. The data is periodically collected (e.g., at pre-defined time intervals) by a Data Collector unit 211 (e.g., via Cl interface), and is stored in a repository therein. 
An Analyzer unit 212 performs analysis of the collected data: (a) Network activity profiling, performed periodically (e.g., at pre-defined time intervals), by clustering the collected data (e.g., via Cd interface) using a pre-defined clustering mechanism or clustering algorithm (e.g., by utilizing K-Means, or other suitable clustering method); and performing extraction of features from the data-set, per class of IoT devices, wherein a class pertains to a set of IoT devices that belongs to the same IoT service or type (e.g., type of “vending machine”, or type of “smoke detector”). Outliers are detected and flagged as suspicious, for example, based on distance greater than a pre-defined threshold value, or based on other indicators for irregularity of values or ranges-of-values; and a notification is generated with regard to such flagged IoT device, e.g., for further manual and/or automatic handling, for initiating attack mitigation operations, for remote de-activation or remote pausing of the IoT device, or the like.). Regarding claim 13, claim 13 is directed to a method corresponding to the system of claim 1. Claim 13 is similar to claim 1 and is therefore rejected under similar rationale. Regarding claim 19, claim 19 is directed to a computer-readable storage medium corresponding to the system of claim 1. Claim 19 is similar to claim 1 and is therefore rejected under similar rationale. Regarding claim 21, Lifshitz discloses the computer readable storage medium of claim 19. Lifshitz further discloses wherein a sanction is imposed in response to at least one alert condition from among the plurality of alert conditions, and wherein the sanction is recorded in association with the at least one alert condition (Lifshitz [0061], [0078]. 
A Notifications Generator unit 308 may generate a notification or alarm or alert, that said particular IoT device is malfunctioning or is compromised, based on said dissimilar network traffic characteristics that are exhibited by said particular IoT device. In some embodiments, an Enforcement and Quarantine Unit 330, upon detection that said particular IoT device is malfunctioning or compromised, activates or operates a Full IoT Device Isolation Module 331 (i) to block relaying of all traffic that is outgoing from said particular IoT device to all destinations, and also (ii) to block relaying of all traffic that is incoming to said particular IoT device from all senders.).

Regarding claim 22, Lifshitz discloses the system of claim 1. Lifshitz further discloses: each tier from among the multiple tiers corresponds to a time window that terminates at a common reference time associated with the data record and has a different duration than at least one other tier, such that the time windows at least partially overlap, and wherein the processor is further programmed to (Lifshitz [0019]. [A] set of IoT devices, such as, an array of Internet-connected vending machines, or an array of Internet-connected smoke detectors, exhibit a generally stable and predictable network behavior and/or traffic activity; for example, each vending machine sends 3 kilobytes of data every hour on the hour, and also sends additional 7 kilobytes of data once a day at 3 AM.
Accordingly, the system of the present invention may utilize this information in order to detect anomalies; for example, observing that ten vending machines are currently sending 800 kilobytes of data every 12 minutes, or that one vending machine is currently sending 240 kilobytes of data every minute, triggers a determination that this or these vending machine(s) is (or are) malfunctioning and/or compromised.): generate, for the same data record, a plurality of differences comprising at least a first difference computed between one or more fields in the data record and corresponding one or more fields in the historical datasets for a first tier associated with a first time window, and at least a second difference computed between the one or more fields in the data record and corresponding one or more fields in the historical datasets for a second tier associated with a second time window that at least partially overlaps with the first time window (Lifshitz [0019]. [A] set of IoT devices, such as, an array of Internet-connected vending machines, or an array of Internet-connected smoke detectors, exhibit a generally stable and predictable network behavior and/or traffic activity; for example, each vending machine sends 3 kilobytes of data every hour on the hour, and also sends additional 7 kilobytes of data once a day at 3 AM. Accordingly, the system of the present invention may utilize this information in order to detect anomalies; for example, observing that ten vending machines are currently sending 800 kilobytes of data every 12 minutes, or that one vending machine is currently sending 240 kilobytes of data every minute, triggers a determination that this or these vending machine(s) is (or are) malfunctioning and/or compromised.); and compare each of the first difference and the second difference to a corresponding learned filter parameter that was learned from the historical datasets of the respective tier (Lifshitz [0060] –[0061]. 
An Outlier Detector unit 307 may detect that a particular IoT device exhibits network traffic characteristics that are dissimilar relative to said cluster of regular pattern of network traffic of said particular type of IoT devices. A Notifications Generator unit 308 may generate a notification or alarm or alert, that said particular IoT device is malfunctioning or is compromised, based on said dissimilar network traffic characteristics that are exhibited by said particular IoT device.), wherein the plurality of alert conditions are generated for the data record based further on the comparisons (Lifshitz [0060]-[0061]. An Outlier Detector unit 307 may detect that a particular IoT device exhibits network traffic characteristics that are dissimilar relative to said cluster of regular pattern of network traffic of said particular type of IoT devices. A Notifications Generator unit 308 may generate a notification or alarm or alert, that said particular IoT device is malfunctioning or is compromised, based on said dissimilar network traffic characteristics that are exhibited by said particular IoT device.).

Claims 2, 3, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lifshitz et al. (“Lifshitz,” US 20190380037, published Dec. 12, 2019) and Silverman et al. (“Silverman,” US 20220035770, filed Oct. 26, 2020).

Regarding claim 2, Lifshitz discloses the system of claim 1. Lifshitz does not explicitly disclose: wherein the processor is further programmed to: receive, from a user, a user-defined filter parameter; and replace at least one of the plurality of learned filter parameters with the user-defined filter parameter, wherein the plurality of alert conditions is generated based on the user-defined filter parameter instead of the replaced one of the plurality of learned filter parameters.
However, in an analogous art, Silverman discloses a system, comprising: wherein the processor is further programmed to: receive, from a user, a user-defined filter parameter; and replace at least one of the plurality of learned filter parameters with the user-defined filter parameter, wherein the plurality of alert conditions is generated based on the user-defined filter parameter instead of the replaced one of the plurality of learned filter parameters (Silverman [0041]-[0042]. In certain embodiments, the information collected by the provisioning management application of data source 30 may comprise at least a portion of one or more policies (e.g., the information may comprise one or more rules of a policy or an entire policy). Policies may be developed manually, automatically (e.g., using machine learning), or both (e.g., a user provides initial policy information, machine learning updates the policy information, the user can review/override the policy information). Examples of policies for email may include encrypting, filtering, archiving, and/or branding policies. These policies may indicate content and/or metadata to be reviewed for an email or email attachment and actions to perform if the content and/or metadata matches or fails to match keywords or characteristics defined by the policy. A filter policy may indicate which emails require filtering, which filter(s) to apply (e.g., antivirus, anti-spam), which actions to take (e.g., quarantine the email, discard the email after a certain period of inaction, perform a malware scan and attempt to remediate the email, etc.), and/or other filter-related rules.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the embodiments of Lifshitz and Silverman to include: wherein the processor is further programmed to: receive, from a user, a user-defined filter parameter; and replace at least one of the plurality of learned filter parameters with the user-defined filter parameter, wherein the plurality of alert conditions is generated based on the user-defined filter parameter instead of the replaced one of the plurality of learned filter parameters. One would have been motivated to provide users with a means for using a user-defined policy for filtering emails. (See Silverman [0041].)

Regarding claim 3, Lifshitz and Silverman disclose the system of claim 2. Silverman further discloses wherein the user is part of a group of users, and wherein the processor is further programmed to: use the user-defined filter parameter for all users in the group (Silverman [0036], [0041]-[0042]. An enterprise may generally refer to a group of users configured to have at least some provisioning data 22 in common. As an example, an enterprise may be a company and the users may be employees of the company. As an example, an email service provider may host email services for a number of enterprise customers and/or a number of customers that are individual users. In certain embodiments, the information collected by the provisioning management application of data source 30 may comprise at least a portion of one or more policies (e.g., the information may comprise one or more rules of a policy or an entire policy). A filter policy may indicate which emails require filtering, which filter(s) to apply (e.g., antivirus, anti-spam), which actions to take (e.g., quarantine the email, discard the email after a certain period of inaction, perform a malware scan and attempt to remediate the email, etc.), and/or other filter-related rules.)
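The claim 2-3 combination (a user-defined parameter replacing a learned one, applied group-wide) reduces to a simple override merge. A sketch under assumed data shapes; the dict-of-thresholds representation and function name are illustrative, not drawn from Silverman:

```python
def effective_params(learned: dict, user_overrides: dict) -> dict:
    """Merge learned filter parameters with user-defined overrides.

    A user-defined parameter replaces the learned one for its tier; per the
    claim 3 limitation, the same override would apply to every user in the
    submitting user's group. (Shapes here are illustrative assumptions.)
    """
    merged = dict(learned)          # start from the learned parameters
    merged.update(user_overrides)   # user-defined values take precedence
    return merged
```

Alert generation would then consult the merged mapping, so any tier with a user-defined value uses it instead of the replaced learned parameter.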
The motivation is the same as that of claim 2 above.

Regarding claim 14, claim 14 is directed to a method corresponding to the system of claim 2. Claim 14 is similar to claim 2 and is therefore rejected under similar rationale.

Regarding claim 15, claim 15 is directed to a method corresponding to the system of claim 3. Claim 15 is similar to claim 3 and is therefore rejected under similar rationale.

Claims 4, 5, 7, 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lifshitz et al. (“Lifshitz,” US 20190380037, published Dec. 12, 2019) and Lewis (“Lewis,” US 20220046047, filed Aug. 10, 2020).

Regarding claim 4, Lifshitz discloses the system of claim 1. Lifshitz does not explicitly disclose: wherein to generate the plurality of differences between one or more fields in the data record with corresponding one or more fields in the historical dataset, the processor is further programmed to: limit the historical datasets to those associated with a user that is the same user that is also associated with the data record to make the comparison to historical activity of the user.

However, in an analogous art, Lewis discloses a system comprising: wherein to generate the plurality of differences between one or more fields in the data record with corresponding one or more fields in the historical dataset, the processor is further programmed to: limit the historical datasets to those associated with a user that is the same user that is also associated with the data record to make the comparison to historical activity of the user (Lewis [0066]. For example, in training the machine learning model to the monitored data at step 204, cyber event analysis computing platform 110 may train the machine learning model to recognize an anomaly relative to a typical virtual desktop session accessed by the remote user computing device 170 based on a learned pattern of user activity during virtual desktop sessions.
For instance, the cyber event analysis computing platform 110 may learn that a typical virtual desktop session of a particular person occurs in the morning hours and consists of creating files. Accordingly, a virtual desktop session that deviates from this pattern (e.g., a session at night consisting of only viewing files) may be indicative of a potential cyber-attack.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the embodiments of Lifshitz and Lewis to include: limit the historical datasets to those associated with a user that is the same user that is also associated with the data record to make the comparison to historical activity of the user. One would have been motivated to provide users with a means for detecting anomalies according to the user activity profile. (See Lewis [0066].)

Regarding claim 5, Lifshitz discloses the system of claim 1. Lewis further discloses wherein to generate the plurality of differences between one or more fields in the data record with corresponding one or more fields in the historical dataset, the processor is further programmed to: limit the historical datasets to those associated with users in the same organization as the user associated with the data record to make the comparison to historical activity of an organization of a user (Lewis [0067]. In some embodiments, applying the machine learning model to the monitored data received from the one or more data source computer systems may include applying the machine learning model to data received from a virtual desktop session on the remote user computing device 170, where the virtual desktop session accesses information associated with an enterprise organization, and where at least some of the information may be confidential and/or have varying levels of confidentiality.
For example, in applying the machine learning model to the monitored data received from the one or more data source computer systems (e.g., remote user computing device 170, local user computing device 140) at step 204, cyber event analysis computing platform 110 may train the machine learning model to associate user accounts with a confidentiality level of information likely to be accessed by that user account, and/or a type of information likely to be accessed by that user account. As such, the machine learning model may be trained to detect anomalous activity based on the user account at the virtual desktop accessing a type of information that differs (e.g., in confidentiality, in classification, and the like) from information that is typically accessed.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the embodiments of Lifshitz and Lewis to include: limit the historical datasets to those associated with users in the same organization as the user associated with the data record to make the comparison to historical activity of an organization of a user. One would have been motivated to provide users with a means for detecting anomalies according to the user activity profile with respect to policies or expected behavior for an enterprise. (See Lewis [0067].)

Regarding claim 7, Lifshitz discloses the system of claim 1. Lewis further discloses wherein the processor is further programmed to: receive, from a user, a number and type of a plurality of signals to use for alert generation, wherein a number of the plurality of signals is based on the multiple tiers and the one or more fields and wherein each signal of the plurality of signals is assessed to generate a corresponding alert condition (Lewis [0065]-[0066], [0081].
For instance, the data received may be used to identify a number of factors associated with the user activity (e.g., actions taken, content accessed, session timing, or the like) for which the machine learning model may be trained to predict a potential cyber-attack. For instance, the cyber event analysis computing platform 110 may learn that a typical virtual desktop session of a particular person occurs in the morning hours and consists of creating files. Accordingly, a virtual desktop session that deviates from this pattern (e.g., a session at night consisting of only viewing files) may be indicative of a potential cyber-attack. In some embodiments, sending the security response alert generated based on the new activity data may include sending the security response alert to the one or more enterprise computer systems in real-time as the activity data is being captured and monitored by the cyber event analysis computing platform 110.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the embodiments of Lifshitz and Lewis to include: receive, from a user, a number and type of a plurality of signals to use for alert generation, wherein a number of the plurality of signals is based on the multiple tiers and the one or more fields and wherein each signal of the plurality of signals is assessed to generate a corresponding alert condition. One would have been motivated to provide users with a means for detecting anomalies according to the user activity profile with respect to policies or expected behavior of the user of an enterprise. (See Lewis [0066].)

Regarding claim 8, Lifshitz and Lewis disclose the system of claim 7. Lewis further discloses wherein the user is part of a group of users, and wherein the processor is further programmed to: use the number and the type of the plurality of signals for all users in the group (Lewis [0067].
In some embodiments, applying the machine learning model to the monitored data received from the one or more data source computer systems may include applying the machine learning model to data received from a virtual desktop session on the remote user computing device 170, where the virtual desktop session accesses information associated with an enterprise organization, and where at least some of the information may be confidential and/or have varying levels of confidentiality. For example, in applying the machine learning model to the monitored data received from the one or more data source computer systems (e.g., remote user computing device 170, local user computing device 140) at step 204, cyber event analysis computing platform 110 may train the machine learning model to associate user accounts with a confidentiality level of information likely to be accessed by that user account, and/or a type of information likely to be accessed by that user account. As such, the machine learning model may be trained to detect anomalous activity based on the user account at the virtual desktop accessing a type of information that differs (e.g., in confidentiality, in classification, and the like) from information that is typically accessed.)

The motivation is the same as that of claim 7 above.

Regarding claim 17, claim 17 is directed to a method corresponding to the system of claim 5. Claim 17 is similar to claim 5 and is therefore rejected under similar rationale.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lifshitz et al. (“Lifshitz,” US 20190380037, published Dec. 12, 2019) and Chamberlain.

Regarding claim 6, Lifshitz discloses the system of claim 1.
Lifshitz does not explicitly disclose: wherein to generate the plurality of differences between one or more fields in the data record with corresponding one or more fields in the historical dataset, the processor is further programmed to: limit the historical datasets to those associated with a financial instrument that is the same financial instrument in the data record to make the comparison specific to all users in the historical dataset who have traded the financial instrument.

However, in an analogous art, Chamberlain discloses a system comprising: wherein to generate the plurality of differences between one or more fields in the data record with corresponding one or more fields in the historical dataset, the processor is further programmed to: limit the historical datasets to those associated with a financial instrument that is the same financial instrument in the data record to make the comparison specific to all users in the historical dataset who have traded the financial instrument (Chamberlain FIGS. 3-4, [0038], [0040], [0048]. For example, the rules may be automatically generated based on market trends, data mining, and machine learning. For example, the risk analysis circuit 244 is connected to the account database 240 to access (e.g., query) the account/profile information, historical transaction information, and/or trading partner information stored thereon. The electronic transaction may correspond to a payment or transfer of funds from a user of the user device to a beneficiary. In some arrangements, the anomaly may be detected if the electronic transaction is not consistent with a profile or a pattern of behavior of the user. For example, in some arrangements, user account/profile information, trading partner data, and/or transactional history data are analyzed to detect the anomaly.)
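The instrument-scoped comparison the rejection reads onto Chamberlain amounts to filtering the historical dataset before differencing. A minimal sketch of the claim language; the field names and data are hypothetical and not drawn from Chamberlain:

```python
def generate_differences(record, historical, key_field, value_fields):
    """Limit the historical dataset to records sharing the data record's
    key field (e.g. the same financial instrument), then difference each
    value field against the historical average over that subset."""
    peers = [h for h in historical if h[key_field] == record[key_field]]
    if not peers:
        return {}  # no history for this instrument -> nothing to compare
    return {
        f: record[f] - sum(h[f] for h in peers) / len(peers)
        for f in value_fields
    }

history = [
    {"instrument": "XYZ", "volume": 100},
    {"instrument": "XYZ", "volume": 110},
    {"instrument": "XYZ", "volume": 90},
    {"instrument": "ABC", "volume": 5000},  # excluded: different instrument
]
trade = {"instrument": "XYZ", "volume": 900}
diffs = generate_differences(trade, history, "instrument", ["volume"])
# The trade's volume is 800.0 above the XYZ historical average of 100.
```

The per-user and per-organization scoping of claims 4 and 5 is the same pattern with `key_field` set to a user or organization identifier.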
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the embodiments of Lifshitz and Chamberlain to include: limit the historical datasets to those associated with a financial instrument that is the same financial instrument in the data record to make the comparison specific to all users in the historical dataset who have traded the financial instrument. One would have been motivated to provide users with a means for detecting suspicious financial transactions through a statistical analysis of historic trading behavior among a user's trading partners. (See Chamberlain [0048].)

Regarding claim 16, claim 16 is directed to a method corresponding to the system of claim 6. Claim 16 is similar to claim 6 and is therefore rejected under similar rationale.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Lifshitz et al. (“Lifshitz,” US 20190380037, published Dec. 12, 2019) and Doron et al. (“Doron,” US 20180255095, published Sept. 6, 2018).

Regarding claim 9, Lifshitz discloses the system of claim 1. Lifshitz does not explicitly disclose: wherein to generate the plurality of differences, the processor is further programmed to: determine a percentage deviation between the one or more fields in the data record with corresponding one or more fields in the historical dataset.

However, in an analogous art, Doron discloses a system comprising: wherein to generate the plurality of differences, the processor is further programmed to: determine a percentage deviation between the one or more fields in the data record with corresponding one or more fields in the historical dataset (Doron [0047], [0081]. A feature is an individual measurable property of a phenomenon being observed. For example, a feature can be a number of HTTP requests per second.
To this end, the detection engine 430 may be configured to automatically compute the normal baseline levels based on monitored features (e.g., over a specified time period, such as the last day, week, or month, on an hourly basis), where potential DDoS attacks are detected based on deviations from the normal baseline levels. In an embodiment, the baseline levels may include high and low levels such that an attack is detected as starting once the high level has been surpassed for a predetermined period of time and as ending once the telemetric value falls below the low level for a predetermined period of time. Deviation from the baseline can be manually defined as a percentage (or by others) or can be automatically set, for example as 4 times the standard deviation.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the embodiments of Lifshitz and Doron to include: determine a percentage deviation between the one or more fields in the data record with corresponding one or more fields in the historical dataset. One would have been motivated to provide users with a means for setting a statistical cut-off as a threshold for determining a network attack. (See Doron [0081].)

Claims 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lifshitz et al. (“Lifshitz,” US 20190380037, published Dec. 12, 2019) and Kaimal et al. (“Kaimal,” US 11483339, filed Dec. 18, 2019).

Regarding claim 12, Lifshitz discloses the system of claim 1.
Lifshitz does not explicitly disclose: wherein the processor is further programmed to: conduct multi-branched filter parameter learning to learn the plurality of learned filter parameters and a second set of a plurality of learned filter parameters for second historical datasets, the multi-branched filter parameter learning being based on a first source associated with the historical datasets and a second source associated with the second historical datasets; identify a source associated with the data record; and select the plurality of learned filter parameters based on a match between the source associated with the data record and the first source associated with the historical datasets.

However, in an analogous art, Kaimal discloses a system comprising: wherein the processor is further programmed to: conduct multi-branched filter parameter learning to learn the plurality of learned filter parameters and a second set of a plurality of learned filter parameters for second historical datasets, the multi-branched filter parameter learning being based on a first source associated with the historical datasets and a second source associated with the second historical datasets (Kaimal FIG. 4A, col. 7: 20-26, 30-45. FIG. 4A is a flowchart illustrating operations performed when a data analyzer receives a set of network traffic (402). The data analyzer can determine the type of device, for example, using device profile data received from a device profiler (404). The analyzer can determine if the device providing the data is in a learning period (406). If the device is not in a learning period, then the operations of the flowchart of FIG. 4B can be performed (“NO” branch of 406). If the device is in a learning period (“YES” branch of 406), then the analyzer can determine if a baseline has been established for the device, type of device, and/or class of device (408). If a baseline doesn't exist (“NO” branch of 408), then a baseline is created for the device (410).
In some aspects, the baseline may be created by determining a particular set of predetermined features from the set of incoming network traffic. For example, device type, domain names of source or destination devices communicating with the device of interest, packet rates, data rates, OS version, software version, etc. may be used to create a baseline profile for the device. Similarly, a baseline profile for the device type or device class may be created using similar features.); identify a source associated with the data record (Kaimal col. 7: 65-67; col. 8: 1-8. Features of the incoming data can be compared to the features stored in the baseline profile (418). For example, in the case where the baseline profile stores predetermined features, the feature values can be extracted from the incoming network data and compared with the feature values in the baseline profile. In the case where the baseline profile includes a machine learning model, the incoming data can be run through the model and the resulting predicted features can be used to determine if an anomaly exists (420).); and select the plurality of learned filter parameters based on a match between the source associated with the data record and the first source associated with the historical datasets …
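The multi-branched selection the rejection reads onto Kaimal's per-device baselines can be sketched as follows. The class and field names are hypothetical, and the mean-plus-4-standard-deviations rule is borrowed from the Doron passage quoted earlier purely as an example threshold, not from Kaimal:

```python
from statistics import mean, stdev

class MultiSourceBaselines:
    """Multi-branched parameter learning in the sense of claim 12:
    one set of learned filter parameters per data source, selected at
    evaluation time by matching the record's source."""

    def __init__(self):
        self.baselines = {}

    def learn(self, source, field_history):
        # field_history maps field name -> list of historical values.
        # Example threshold rule: mean + 4 standard deviations.
        self.baselines[source] = {
            field: mean(vals) + 4 * stdev(vals)
            for field, vals in field_history.items()
        }

    def select(self, record_source):
        # Return the learned parameters whose source matches the
        # record's source; None if that source was never learned.
        return self.baselines.get(record_source)

branches = MultiSourceBaselines()
branches.learn("mail-gateway", {"requests": [10, 12, 11, 13, 14]})
branches.learn("trade-feed", {"requests": [100, 120, 110, 130, 140]})
params = branches.select("mail-gateway")  # parameters for the matched branch
```

Each source keeps an independent baseline, so a record is only ever compared against thresholds learned from its own branch's history.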
Read full office action

Prosecution Timeline

Dec 15, 2021
Application Filed
Nov 26, 2023
Non-Final Rejection — §103
Mar 06, 2024
Response Filed
May 25, 2024
Final Rejection — §103
Aug 16, 2024
Applicant Interview (Telephonic)
Aug 21, 2024
Request for Continued Examination
Aug 22, 2024
Examiner Interview Summary
Aug 25, 2024
Response after Non-Final Action
Sep 16, 2024
Non-Final Rejection — §103
Feb 13, 2025
Response Filed
Mar 05, 2025
Final Rejection — §103
Sep 12, 2025
Request for Continued Examination
Sep 17, 2025
Response after Non-Final Action
Sep 17, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603775
DATA INTERACTION
2y 5m to grant · Granted Apr 14, 2026
Patent 12598090
INFORMATION PROCESSING SYSTEM
2y 5m to grant · Granted Apr 07, 2026
Patent 12587387
PROTECTING WEBCAM VIDEO FEEDS FROM VISUAL MODIFICATIONS
2y 5m to grant · Granted Mar 24, 2026
Patent 12567981
SYSTEMS AND METHODS FOR DATA AUTHENTICATION USING COMPOSITE KEYS AND SIGNATURES
2y 5m to grant · Granted Mar 03, 2026
Patent 12563091
SYSTEM AND METHOD FOR DETECTING PATTERNS IN STRUCTURED FIELDS OF NETWORK TRAFFIC PACKETS
2y 5m to grant · Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+47.9%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 184 resolved cases by this examiner. Grant probability derived from career allow rate.
