Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The IDS filed 8/20/2004 was received and considered.
Claims 1-16 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 15-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because:
Regarding claim 15, the claim is directed to a “computer program comprising instructions”, which does not fall within one of the statutory classes of invention defined under 35 U.S.C. §101.
Regarding claim 16, the claim is directed to a “computer-readable medium”, which could comprise a transitory medium, such as a signal, and hence does not fall within one of the statutory classes of invention defined under 35 U.S.C. §101.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, the phrase "such as" (“such as an endpoint”) renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Regarding claim 2, the limitation “the backend side rule-based threat detection mechanism” lacks sufficient antecedent basis.
Regarding claim 5, the limitation “the misuse detection model training set” lacks sufficient antecedent basis.
Regarding claim 7, the limitation “the … anomaly detection model” lacks sufficient antecedent basis.
Regarding claim 8, the limitation “wherein the agent (6a – 6h) of the node uses the local threat detection model for obtaining scores for a stream of observed local events in a timely manner and/or aligns observed events over a timeline in the order of their appearance and combines their scores assigned by the local threat detection model to the timeline” renders the claim indefinite, as it is unclear if “and combines their scores assigned by the local threat detection model to the timeline” is required by the scope of the claim or if “and combines their scores assigned by the local threat detection model to the timeline” is interpreted in the alternative of the “and/or” recitation.
Regarding claim 9, the limitation “the anomaly detection model” lacks sufficient antecedent basis.
Regarding claim 12, the phrase "such as" (“such as an endpoint”) renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Regarding claim 14, the limitation “provide to nodes (5a-5h) a local threat detection model” renders the claim indefinite, as it is unclear if the “a local threat detection model” is the same “local threat detection model” recited in claim 12 (on which claim 14 depends) or an additional model.
Claims 3-4, 6, 10-11, 13 and 15-16 are rejected as inheriting the deficiencies identified in claims 1, 2, 5, 7, 8, 9, 12 and/or 14.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 6 and 12-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2022/0191224 A1 to Stahlberg et al. (Stahlberg).
Regarding claim 1, Stahlberg discloses a method of threat detection in a threat detection network (threat detection network, ¶36), the threat detection network comprising interconnected nodes (5a – 5h) (nodes 5a-5h, ¶38 and Fig. 1) and a backend system (2) (security backend/server 2, Fig. 1, ¶37), wherein the backend system (2) utilizes a backend threat detection mechanism (backend can utilize methods like federated learning to combine knowledge from multiple endpoints and consolidate models of users across multiple endpoints and/or also utilize hierarchical modelling approaches to learn from behaviors of similar users, ¶64; backend provides correlation and analysis of the data sent from the multitude of individual intelligent sensors and can also share behavioral models to the network nodes, ¶91; see also ¶22, comparing at the backend system the anomalous data with other behavior models, e.g. with other behavior models in the same organization and/or behavior models of known malicious users), and at least part of the nodes (5a – 5h) comprise security agent modules (6a – 6h) (nodes comprise security agent modules, ¶38) which collect data related to the respective node (security agent modules, 6a-6h, 4a collect various types of data at the nodes 5a-5h, ¶38), and wherein the nodes (5a – 5h) utilize at least one local threat detection model which comprises a machine learning-based model of a backend threat detection mechanism (agents continuously monitor, build behavioral models and detect anomalies, ¶89), wherein the method comprises: collecting data related to the node (5a – 5h) by the security agent module (6a – 6h) at the node (security agent modules, 6a-6h, 4a collect various types of data at the nodes 5a-5h, ¶38), applying the local threat detection model to the collected data (agents continuously monitor, build behavioral models and detect anomalies, ¶88; known threat can be detected based on the user behavior when comparing the detected behavior to the behavior model, ¶89), and making a security related decision at the node (5a – 5h), such as an endpoint, based on results of the local threat detection model (if the agent already has the means for response, that action may be taken, ¶89).
Regarding claim 12, the claim is similar in scope to claim 1 and is therefore rejected using a similar rationale (the claimed node is found in nodes 5a-5h, ¶38 and Fig. 1).
Regarding claim 13, the claim is similar in scope to claim 1 and is therefore rejected using a similar rationale.
Regarding claim 2, Stahlberg discloses wherein the local threat detection model which comprises a machine learning-based model of the backend threat detection mechanism (agents build behavioral model, utilizing a machine learning model, ¶63) is an approximation of the backend side rule-based threat detection mechanism (common model of normal behavior may be generated by the security server backend of the computer network, ¶72; these common learnings are redistributed to cope for example operating system updates or new application versions which may be global but changing and would otherwise cause problems for such models, ¶73).
Regarding claim 3, Stahlberg discloses wherein the local threat detection model comprises at least one misuse detection model (behavior models, ¶65) which is based on at least one machine learning model for finding events that are likely to contribute to detections of a cyber incident (behavior models can then be used to monitor the activity of the same user and to notice changes in behavior which may be due to automation, attacks or simply another user using the same account-all potential threat scenarios, ¶65).
Regarding claim 6, Stahlberg discloses wherein the local threat detection model further comprises at least one anomaly detection model which is based on at least one machine learning model for finding uncommon events that are likely to contribute to threat detection (behavior models can then be used to monitor the activity of the same user and to notice changes in behavior which may be due to automation, attacks or simply another user using the same account-all potential threat scenarios, ¶65), intelligence and/or hunting purposes, and/or the at least one anomaly detection model is trained in unsupervised, supervised or semi-supervised learning fashion at the backend system or at the node (if the anomaly is determined to be a false positive e.g. by deeper analysis models or by a human analyst, the logic and/or behavior model is trained not to detect similar and corresponding case again as anomalous, ¶78).
Regarding claim 14, Stahlberg discloses a threat detection network comprising: at least one node (5a – 5h) according to claim 12 (see rejection of claim 12), and at least one backend system (2) (security backend/server 2, Fig. 1, ¶37), the backend system comprising at least one server which comprises at least one or more processors (security backend/server 2 comprising processors, Fig. 1, ¶37 and ¶22), and the backend system (2) is configured to utilize a backend threat detection mechanism (backend can utilize methods like federated learning to combine knowledge from multiple endpoints and consolidate models of users across multiple endpoints and/or also utilize hierarchical modelling approaches to learn from behaviors of similar users, ¶64; backend provides correlation and analysis of the data sent from the multitude of individual intelligent sensors and can also share behavioral models to the network nodes, ¶91; see also ¶22, comparing at the backend system the anomalous data with other behavior models, e.g. with other behavior models in the same organization and/or behavior models of known malicious users) and further configured to train and/or provide to nodes (5a – 5h) a local threat detection model (backend system can create a common model, ¶72 that are redistributed to nodes, ¶73), comprising an anomaly detection model and/or a misuse detection model (behavior models can then be used to monitor the activity of the same user and to notice changes in behavior which may be due to automation, attacks or simply another user using the same account-all potential threat scenarios, ¶65).
Regarding claim 15, Stahlberg discloses a computer program comprising instructions which, when executed by a computer, cause the computer to carry out the method according to claim 1 (¶24).
Regarding claim 16, Stahlberg discloses a computer-readable medium comprising the computer program according to claim 15 (¶25).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Stahlberg, as applied to claims 3 or 1, in view of US 2023/0231871 A1 to Jiao.
Regarding claim 4, Stahlberg lacks wherein the misuse detection models are trained at the backend system in supervised learning fashion for a classification problem. However, Jiao, in an analogous art (training a machine learning detector to detect threats, abstract), teaches that it was known to employ federated learning, including training a misuse detection model (gateway uses misuse detection model to detect malicious traffic, ¶112, ¶114) at a backend system (federated learning is employed such that the server aggregates parameters from gateways to train gateway models, ¶135) in supervised learning fashion for a classification problem (models are trained using malicious sample and normal sample, ¶121), gaining the benefit of improved model training (¶¶137-139). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Stahlberg such that the misuse detection models are trained at the backend system in supervised learning fashion for a classification problem. One of ordinary skill in the art would have been motivated to perform such a modification to utilize federated learning to improve the models of the nodes, as taught by Jiao.
Regarding claim 5, Stahlberg, as modified, teaches wherein the misuse detection model training set comprises of complementary subsets of existing events (malicious samples) that are proven to be relevant for confirmed and existing cyber incidents and/or cyber-attacks and/or represent typical benign (normal) behaviours (per the modification described with respect to claim 4, models are trained in a supervised manner using malicious samples and normal samples, Jiao, ¶121).
Regarding claim 11, Stahlberg discloses wherein preparation of the machine learning based threat detection model comprises defining local threat detection model features (agent builds a behavior model of a user, e.g. a “computer user behavioral persona”, ¶44), and training the local threat detection model based on training data (agent at the network node, e.g. an endpoint agent, locally collects and analyzes data which is used to build a behavior model of a user, e.g. a “computer user behavioral persona”, ¶44). Stahlberg, as modified, lacks defining backend threat detection mechanism features. However, Jiao, in an analogous art (training a machine learning detector to detect threats, abstract), teaches that it was known to employ federated learning, including training a misuse detection model (gateway uses local misuse detection model to detect malicious traffic, ¶112, ¶114) at a backend system (federated learning is employed such that the server aggregates parameters from gateways to train gateway models, ¶135) in supervised learning fashion for a classification problem (models are trained using malicious sample and normal sample, ¶121), gaining the benefit of improved model training (¶¶137-139). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Stahlberg to include defining backend threat detection mechanism features. One of ordinary skill in the art would have been motivated to perform such a modification to utilize federated learning to improve the models of the nodes, as taught by Jiao.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Stahlberg, as applied to claim 3, in view of US 2033/0114260 A1 to Udupi Raghavendra et al. (Raghavendra).
Regarding claim 7, Stahlberg does not explicitly teach wherein training of the misuse detection and/or anomaly detection model is carried out regularly and/or once the training process is over, a new model is transmitted to nodes (5a – 5h) and used locally by the nodes (5a – 5h). However, Raghavendra, in an analogous art (collecting telemetry data to train an anti-malware system, abstract), teaches that it was known to collect telemetry data on non-malicious processes on a regular basis/periodically to adapt to changes of the input data over time (¶19). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Stahlberg such that training of the misuse detection and/or anomaly detection model is carried out regularly and/or once the training process is over, a new model is transmitted to nodes (5a – 5h) and used locally by the nodes (5a – 5h). One of ordinary skill in the art would have been motivated to perform such a modification to adapt the models used by the nodes to changes over time, as taught by Raghavendra.
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Stahlberg, as applied to claim 1, in view of US 2024/0419785 A1 to Bartling et al. (Bartling).
Regarding claim 8, Stahlberg is silent regarding wherein the agent (6a – 6h) of the node uses the local threat detection model for obtaining scores for a stream of observed local events in a timely manner and/or aligns observed events over a timeline in the order of their appearance and combines their scores assigned by the local threat detection model to the timeline. However, Bartling, in an analogous art (detecting threats using behavioral models, abstract), teaches that it was known to perform local threat detection on a stream of observed local events (ingests a sequence of events in a stream, ¶16) in a timely manner (correlating multiple events within a period of time, ¶14; events are analyzed as a temporal sequence to determine if the events are undesirable, ¶17; events are recorded and a timer is started, ¶31) and combines their scores assigned by the local threat detection model to the timeline (scores are recorded with a confidence level with respect to a defined category, ¶24, where known malicious behaviors can have corresponding high scores, ¶25; scores are combined with new scores, ¶¶33-35). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Stahlberg such that the agent (6a – 6h) of the node uses the local threat detection model for obtaining scores for a stream of observed local events in a timely manner and/or aligns observed events over a timeline in the order of their appearance and combines their scores assigned by the local threat detection model to the timeline. One of ordinary skill in the art would have been motivated to perform such a modification to utilize the temporal sequence of events in the threat determination, as taught by Bartling.
Regarding claim 9, Stahlberg is silent regarding wherein the anomaly detection model and/or misuse detection model are applied to events observed on each node and overlaid on a timeline graph as a set of respective time series, and/or wherein every new event gets a score or set of scores from the anomaly detection model and/or misuse detection model. However, Bartling, in an analogous art (detecting threats using behavioral models, abstract), teaches that it was known to perform local threat detection on a stream of observed local events (ingests a sequence of events in a stream, ¶16) and observed events are recorded with a score (machine learning model assigns scores, ¶46, which are recorded with a confidence level with respect to a defined category, ¶24, where known malicious behaviors can have corresponding high scores, ¶25) on a timeline (events are analyzed as a temporal sequence to determine if the events are undesirable, ¶17; events are recorded and a timer is started, ¶31). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Stahlberg such that the anomaly detection model and/or misuse detection model are applied to events observed on each node and overlaid on a timeline graph as a set of respective time series, and/or wherein every new event gets a score or set of scores from the anomaly detection model and/or misuse detection model. One of ordinary skill in the art would have been motivated to perform such a modification to utilize the temporal sequence of events in the threat determination, as taught by Bartling.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Stahlberg, as applied to claim 1, in view of “A contextual anomaly detection approach to discover zero-day attack” by AlEroud et al. (AlEroud).
Regarding claim 10, Stahlberg is silent regarding wherein the node utilizes both anomaly detection and misuse detection models at the node (5a – 5h) and uses the models together by analyzing at relations between score patterns between the anomaly detection model and the misuse detection model. However, AlEroud, in an analogous art (detecting malicious actions in network-connected devices), teaches that it was known to utilize both anomaly detection and misuse detection models at a node (Fig. 1, anomaly detection module and misuse detection model) and to use the models together by analyzing relations between score patterns between the anomaly detection model and the misuse detection model (generating connection record profile similarity scores for misuse detection, p. 41, §III-A, generating an anomaly score for anomaly detection, p. 42, §III-B and apply the scores in conjunction, p. 42, §III-B; see also p. 43, ¶2). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Stahlberg such that the node utilizes both anomaly detection and misuse detection models at the node (5a – 5h) and uses the models together by analyzing at relations between score patterns between the anomaly detection model and the misuse detection model. One of ordinary skill in the art would have been motivated to perform such a modification to gain the benefits of both anomaly and misuse detection while minimizing the dependency on anomaly detection, as taught by AlEroud (see at least p. 43, ¶2).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20240378285 A1 (Lim; Shiau Hong et al.) teaches a model trained as a temporal-event transformer model using a time series formed by low-dimensional causal event streams with pre-determined time-bins and labels; the input events for training the model may include all recorded log events of a given host device over a predetermined time period, e.g., from a last hour, from a last day, from a plurality of previous days, etc. (¶68)
US 20240195826 A1 (Mathews; Sherin et al.) teaches generating anomaly detection models in a federated learning environment and transmitting client models to devices (¶23).
US 20240070286 A1 (Lee; Wei-Han et al.) teaches training models on malicious and benign samples (supervised, ¶23) and deploying the supervised anomaly detector in the federated learning system (¶27).
“Research and implementation on snort-based hybrid intrusion detection system” (Ding et al.) teaches an intrusion detection system utilizing both anomaly and signature-based detection (p. 1415, §3).
US 20210243226 A1 (El Gamal; Aly et al.) teaches a system including both a misuse-detection framework (¶19) and an anomaly-detection framework (¶20; see also ¶¶23-29).
US 10929258 B1 (Gauf; Bernard et al.) teaches training an anomaly-detection model (Fig. 3, col. 4, line 61 – col. 5, line 8 and col. 5, lines 54-67) and creating an event timeline (Fig. 5) and overlaying the event timeline on network usage data (Figs. 6-7).
US 20250225436 A1 (Wang; Zhibi et al.) teaches federated learning, including receiving an initial model from a network entity and training the model (¶119), transmitting messages to a network entity indicating privacy violations (¶122), including transmitting an updated model (¶123).
US 20070239999 A1 (Honig; Andrew et al.) teaches anomaly and misuse detection algorithms (¶11), training detection models classifying data as malicious or normal and deploying models to detectors (¶45).
US 20230308465 A1 (Alroobaea; Roobaea et al.) teaches training models based on federated datasets (¶57).
US 20230186172 A1 (ZIZZO; Giulio et al.) teaches federated learning (¶14), including aggregating federated training and dispatching learning parameters to participating clients (¶¶83-90).
US 20120278890 A1 (Maatta; Marko et al.) teaches utilizing an anomaly detecting model and a misuse detecting model (Fig. 1).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J SIMITOSKI whose telephone number is (571)272-3841. The examiner can normally be reached Monday - Friday, 7:00-3:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carl Colin can be reached at 571-272-3862. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Michael Simitoski/ Primary Examiner, Art Unit 2493
January 13, 2026