Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
The following action is in response to the communication(s) received on 01/22/2026.
As of the claims filed 01/22/2026:
Claims 1, 4, 14, 16, 18, and 20 have been amended.
Claims 1-20 are pending.
Claims 1, 14, and 18 are independent claims.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/22/2026 has been entered.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 12/19/2025 were filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Response to Arguments
Applicant’s arguments filed 01/22/2026 have been fully considered, but are not fully persuasive.
With respect to the rejection under 35 USC § 101:
The amendments have overcome the rejection, which has accordingly been withdrawn.
With respect to the rejection under 35 USC 103:
Applicant’s argument regarding Sukhanov not teaching the claimed invention (pp. 12-15) has been considered but is moot in view of the new art rejection under Ludwig/Elsayed/Iliyasu. Elsayed teaches the first network trained on majority examples (Elsayed [p.2 right ¶5]; [p.2 left ¶2]), and Iliyasu teaches a few-shot classifier trained on the minority examples (Iliyasu [p.5 last 2 ¶]). Ludwig teaches combining the two neural networks in an ensemble network to classify the attacks (Ludwig [p.4 left ¶4]; [p.4 right Definition 4]).
Applicant further asserts that Elsayed teaches away from the combination (p.16), but this is moot in view of the new prior art rejection under Ludwig/Elsayed/Iliyasu. Ludwig teaches an ensemble of networks consisting of an autoencoder (Ludwig [abstract]), and Elsayed teaches the method of determining the majority or minority class based on reconstruction errors (Elsayed [p.2 left ¶2]).
Applicant further asserts that Huang teaches away from the combination as it is a standalone system for anomaly detection (p.16), but this is moot in view of the new combination of Ludwig/Elsayed/Iliyasu. Iliyasu teaches a few-shot classifier in a combined method for classifying anomalies (Iliyasu [p.5 last 2 ¶]), while Ludwig teaches the voting method of the combination of model (Ludwig [p.4 left ¶4]; [p.4 right Definition 4]).
Applicant further asserts that the prior arts lack proper motivations to combine (p.17¶1-18¶1), but this is moot in view of the new prior art rejection under Ludwig/Elsayed/Iliyasu, as stated above.
Applicant further asserts that Huang does not teach the combination with the autoencoder, but this is moot in view of the new combination of Ludwig/Elsayed/Iliyasu as stated above.
Applicant further asserts that claims 3, 9-13, and 17 are allowable by virtue of their dependency from their respective parent claims. Examiner respectfully submits that these claims remain rejected at least by virtue of their dependency from their respective parent claims, for the reasons stated above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 2, 4, 5, 9-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ludwig et al., "Applying a neural network ensemble to intrusion detection" (hereinafter Ludwig), in view of Elsayed et al., "Network Anomaly Detection Using LSTM Based Autoencoder" (hereinafter Elsayed), further in view of Iliyasu et al., "Few-Shot Network Intrusion Detection Using Discriminative Representation Learning with Supervised Autoencoder" (hereinafter Iliyasu).
Regarding Claim 1, Ludwig teaches:
A computer implemented method comprising:
(Ludwig [p.4 left ¶4] Ensemble learning is an approach where several classifiers are trained and their results are fused together in order to separate the different classes.) (Note: training classifiers requires a processor and memory, thus corresponding to a computer implemented method)
receiving information representative of an event;
(Ludwig [p.4 left ¶4] In this paper, several deep neural network approaches are used and their results are fused together in order to distinguish between normal and attack behavior of a network.) (Note: the behavior of the network corresponds to the information representative of an event)
executing a first machine learning network on the received information representative of the event, the first machine learning network including an autoencoder… (Ludwig [abstract] The neural network ensemble method consists of an autoencoder…)
Ludwig does not teach, but Elsayed further teaches:
trained on majority class labeled examples to classify majority class events with low reconstruction error and events in a minority class as rare events with high reconstruction error;
(Elsayed [p.2 right ¶5] The majority of real data are unbalanced, where the anomalies data are often challenging and less frequently to obtain compared to normal data… [p.2 left ¶2] The main contributions of this paper are as follows– (a) We proposed a deep learning based on LSTM-autoencoder model for anomaly detection. The idea is to train the deep learning model using normal data only. In this case, the model is capable of replicating the input data at the output layer with a low reconstruction error.) (Note: normal data corresponds to majority class labeled examples)
Elsayed and Ludwig are analogous to the present invention because both are from the same field of endeavor of intrusion detection methods. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the training method of the autoencoder by Elsayed into Ludwig’s ensemble attack detection method. The motivation would be that “the model is capable of replicating the input data at the output layer with a low reconstruction error” (Elsayed [p.2 left ¶2]).
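For illustration only, the reconstruction-error criterion cited from Elsayed can be sketched as follows; the threshold value and the toy inputs and reconstructions below are hypothetical, not drawn from Elsayed:

```python
import numpy as np

def reconstruction_error(x, x_hat):
    # Mean squared error between an input and its reconstruction.
    return float(np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2))

def classify_by_reconstruction(x, x_hat, threshold=0.1):
    # Low error -> majority (normal) class; high error -> rare/minority event.
    err = reconstruction_error(x, x_hat)
    return ("majority", err) if err <= threshold else ("minority", err)

# An autoencoder trained only on normal traffic reconstructs normal inputs
# well, but reconstructs anomalous inputs poorly.
normal_input, normal_recon = [1.0, 2.0, 3.0], [1.01, 1.98, 3.02]
anomaly_input, anomaly_recon = [9.0, 0.1, 7.0], [1.2, 2.1, 2.8]

print(classify_by_reconstruction(normal_input, normal_recon))    # low error
print(classify_by_reconstruction(anomaly_input, anomaly_recon))  # high error
```

This mirrors the mapping above: training on majority-class examples only yields a low reconstruction error for majority-class events and a high one for minority-class events.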
Ludwig/Elsayed does not teach, but Iliyasu further teaches:
executing a second machine learning network on the received information representative of the event, the second machine learning network comprising a few-shot deep neural network trained on minority class labeled examples to classify events in the minority class with high confidence;
(Iliyasu [p.5 last 2 ¶] We trained the discriminative autoencoder during the meta-training stage (Algorithm 1). After training, the decoder part of the model was discarded, while the encoder module, which then served as our feature extractor was retained. The encoder was then employed in a fixed state (no fine-tuning) in the meta testing stage. The meta testing stage consists of the task of identifying a novel class of attack, which has few examples. For a given task (Dqtrain , Dqtest) sampled from the meta-testing set, S, we trained a classifier, f , on top of the extracted features to recognize the unseen classes using the training dataset, Dqtrain (Algorithm 2).
[Image: media_image1.png]
) (Note: training by converging the parameters for the unseen classes in Algorithm 2 corresponds to training on the minority examples with high confidence)
Iliyasu and Ludwig/Elsayed are analogous to the present invention because both are from the same field of endeavor of intrusion detection methods via combined neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the few-shot classifier from Iliyasu into Ludwig/Elsayed’s ensemble attack detection method. The motivation would be to “enable it to adapt quickly to unseen tasks in the meta-testing stage, using powerful optimization techniques” (Iliyasu [p.4 last ¶]).
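For illustration only, a few-shot classifier trained on top of frozen encoder features, as cited from Iliyasu, can be approximated by a generic nearest-centroid sketch; the feature vectors and class names below are hypothetical, and this is not Iliyasu's Algorithm 2:

```python
import numpy as np

def fit_prototypes(features, labels):
    # One prototype (mean feature vector) per rare class, from a few examples.
    return {c: np.mean(features[labels == c], axis=0) for c in np.unique(labels)}

def predict(prototypes, query):
    # Assign the query to the class whose prototype is nearest in feature space.
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

# Toy "extracted features" for two rare attack classes, a few shots each.
feats = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.2]])
labels = np.array(["u2r", "r2l"][i // 2] for i in range(4)) if False else np.array(["u2r", "u2r", "r2l", "r2l"])
protos = fit_prototypes(feats, labels)
print(predict(protos, np.array([0.1, 0.0])))
```

The encoder is treated as fixed (no fine-tuning), and only the lightweight classifier on top is fit from the minority-class examples.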
Ludwig, via Ludwig/Elsayed/Iliyasu, further teaches:
and combining classifications of the first and second machine learning networks, including confidences, to predict the class of the information representative of the event…(Ludwig [p.4 left ¶4] Ensemble learning is an approach where several classifiers are trained and their results are fused together in order to separate the different classes.
[p.4 right Definition 4]
[Image: media_image2.png]
) (Note: fusing the results of the several classifiers using the weighted majority voting corresponds to predicting the class using the confidences; β corresponds to the set of respective confidences)
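For illustration only, weighted majority voting of the kind cited from Ludwig's Definition 4 can be sketched generically as follows; the β weights below are hypothetical confidences, not values from Ludwig:

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    # predictions: class label from each classifier.
    # weights: per-classifier confidence (the beta values).
    # The ensemble picks the label with the largest total weight.
    scores = defaultdict(float)
    for label, beta in zip(predictions, weights):
        scores[label] += beta
    return max(scores, key=scores.get)

# Two classifiers disagree; the more confident one carries the vote.
print(weighted_vote(["minority", "majority"], [0.9, 0.4]))  # -> "minority"
```

Fusing the per-model labels with their confidences in this way is what the rejection maps to "combining classifications ... including confidences."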
Iliyasu, via Ludwig/Elsayed/Iliyasu, further teaches:
wherein classification of the event as a rare event and as a minority event are classified as the minority event
(Iliyasu [p.5 last 2 ¶]…identifying a novel class of attack, which has few examples. For a given task (Dqtrain , Dqtest) sampled from the meta-testing set, S, we trained a classifier, f , on top of the extracted features to recognize the unseen classes using the training dataset, Dqtrain (Algorithm 2).) (Note: unseen classes correspond to rare events; classes of attack correspond to the minority events)
Regarding Claim 2, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Elsayed, via Ludwig/Elsayed/Iliyasu, further teaches:
The method of claim 1 wherein the first machine learning network comprises an encoder decoder deep neural network classifier that produces a reconstruction error that is higher for minority class events than for majority class events. (Elsayed [p.2 right ¶5] The majority of real data are unbalanced, where the anomalies data are often challenging and less frequently to obtain compared to normal data… [p.2 left ¶2] The main contributions of this paper are as follows– (a) We proposed a deep learning based on LSTM-autoencoder model for anomaly detection. The idea is to train the deep learning model using normal data only. In this case, the model is capable of replicating the input data at the output layer with a low reconstruction error.) (Note: normal data corresponds to majority class labeled examples)
Regarding Claim 4, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Elsayed, via Ludwig/Elsayed/Iliyasu, further teaches:
The method of claim 1 wherein the first machine learning network is trained only on majority class labeled examples
(Elsayed [p.2 right ¶5] The majority of real data are unbalanced, where the anomalies data are often challenging and less frequently to obtain compared to normal data… [p.2 left ¶2] The main contributions of this paper are as follows– (a) We proposed a deep learning based on LSTM-autoencoder model for anomaly detection. The idea is to train the deep learning model using normal data only. In this case, the model is capable of replicating the input data at the output layer with a low reconstruction error.) (Note: normal data corresponds to majority class labeled examples)
Regarding Claim 5, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Iliyasu, via Ludwig/Elsayed/Iliyasu, further teaches:
The method of claim 1 wherein the second machine learning network comprises a few shot deep neural network classifier. (Iliyasu [p.5 last 2 ¶] We trained the discriminative autoencoder during the meta-training stage (Algorithm 1). After training, the decoder part of the model was discarded, while the encoder module, which then served as our feature extractor was retained. The encoder was then employed in a fixed state (no fine-tuning) in the meta testing stage. The meta testing stage consists of the task of identifying a novel class of attack, which has few examples. For a given task (Dqtrain , Dqtest) sampled from the meta-testing set, S, we trained a classifier, f , on top of the extracted features to recognize the unseen classes using the training dataset, Dqtrain (Algorithm 2).
[Image: media_image1.png]
)
Regarding Claim 9, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Iliyasu, via Ludwig/Elsayed/Iliyasu, further teaches:
classified… as minority class events
(Iliyasu [p.5 last 2 ¶]…identifying a novel class of attack, which has few examples. For a given task (Dqtrain , Dqtest) sampled from the meta-testing set, S, we trained a classifier, f , on top of the extracted features to recognize the unseen classes using the training dataset, Dqtrain (Algorithm 2).) (Note: unseen classes correspond to rare events; classes of attack correspond to the minority events)
Ludwig, via Ludwig/Elsayed/Iliyasu, further teaches:
The method of claim 1 wherein combining classifications of the first and second machine learning models to predict the class of the information representative of the event comprises performing a union of events classified by both models… (Ludwig [p.4 right def.3, def.4]
[Image: media_image3.png]
) (Note: the majority voting sums up the labels from the classifiers, thus corresponding to a union of both models)
Regarding Claim 10, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Iliyasu, via Ludwig/Elsayed/Iliyasu, further teaches:
classified… as minority class events (Iliyasu [p.5 last 2 ¶]…identifying a novel class of attack, which has few examples. For a given task (Dqtrain , Dqtest) sampled from the meta-testing set, S, we trained a classifier, f , on top of the extracted features to recognize the unseen classes using the training dataset, Dqtrain (Algorithm 2).) (Note: unseen classes correspond to rare events; classes of attack correspond to the minority events)
Ludwig, via Ludwig/Elsayed/Iliyasu, further teaches:
The method of claim 1 wherein combining classifications of the first and second machine learning models to predict the class of the information representative of the event comprises performing an intersection of events classified by both models … (Ludwig [p.4 right def.3, def.4]
[Image: media_image3.png]
) (Note: when the weight vector of one model is greater than the other vector and the ensemble selects the greater weight, the voting ensemble corresponds to an intersection of the models.)
Regarding Claim 11, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Iliyasu, via Ludwig/Elsayed/Iliyasu, further teaches:
classified… as rare events (Iliyasu [p.5 last 2 ¶]…identifying a novel class of attack, which has few examples. For a given task (Dqtrain , Dqtest) sampled from the meta-testing set, S, we trained a classifier, f , on top of the extracted features to recognize the unseen classes using the training dataset, Dqtrain (Algorithm 2).) (Note: unseen classes correspond to rare events; classes of attack correspond to the minority events)
Ludwig, via Ludwig/Elsayed/Iliyasu, further teaches:
The method of claim 1 wherein combining classifications of the first and second machine learning models to predict the class of the information representative of the event comprises performing a combination of predicted probabilities of events classified by both models … compared to a… threshold. (Ludwig [p.4 right def.3, def.4]
[Image: media_image3.png]
) (Note: the majority voting sums up the labels from the classifiers, thus corresponding to a combination of both models)
Regarding Claim 12, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Iliyasu, via Ludwig/Elsayed/Iliyasu, further teaches:
classified… as minority class events (Iliyasu [p.5 last 2 ¶]…identifying a novel class of attack, which has few examples. For a given task (Dqtrain , Dqtest) sampled from the meta-testing set, S, we trained a classifier, f , on top of the extracted features to recognize the unseen classes using the training dataset, Dqtrain (Algorithm 2).) (Note: unseen classes correspond to rare events; classes of attack correspond to the minority events)
Ludwig, via Ludwig/Elsayed/Iliyasu, further teaches:
The method of claim 1 wherein combining classifications of the first and second machine learning models to predict the class of the information representative of the event comprises performing a weighted combination of predicted probabilities of events classified by both models as … compared to a … threshold… (Ludwig [p.4 right def.3, def.4]
[Image: media_image3.png]
) (Note: the majority voting sums up the labels from the classifiers with weight vectors, thus corresponding to a weighted combination of both models)
Independent Claim 14 recites A machine-readable storage device having instructions for execution by a processor of a machine to cause the processor to perform operations to perform a method, the operations comprising (Iliyasu [p.5 2nd to last ¶] We trained the discriminative autoencoder during the meta-training stage…) (Note: training an autoencoder requires a processor and memory, which correspond to the storage device and execution by a processor) to perform precisely the method of Claim 1. Thus, Claim 14 is rejected for the reasons set forth in Claim 1.
Regarding Claim 15, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 14. Iliyasu, via Ludwig/Elsayed/Iliyasu, further teaches:
The device of claim 14 wherein the first machine learning network comprises an encoder decoder deep neural network classifier and wherein the second machine learning network comprises a few shot deep neural network classifier. (Iliyasu [p.5 ¶3] We adopted the discriminative autoencoder proposed in [38], which, in its setup, uses data from two distributions, termed positive (X +) and negative (X −), with their labeled information. The discriminative autoencoder then learns a manifold that is good at reconstructing the data from the positive distribution, while ensuring that those of the negative distributions are pushed away from the manifold. This enables it to learn robust patterns and similarities that separate the two distributions.
In our case, the two distributions, X + and X −, can be generated from benign network traffic classes and malicious traffic classes. Let l(x) denote the label of an example, x, with l(x) ∈ {−1, 1}, and d(x) is the distance of that example to the manifold, with d(x) = ‖x − x̂‖. Then, the loss function is described as:
[Image: media_image4.png]
[p.5 last 2 ¶] We trained the discriminative autoencoder during the meta-training stage (Algorithm 1). After training, the decoder part of the model was discarded, while the encoder module, which then served as our feature extractor was retained. The encoder was then employed in a fixed state (no fine-tuning) in the meta testing stage. The meta testing stage consists of the task of identifying a novel class of attack, which has few examples. For a given task (Dqtrain , Dqtest) sampled from the meta-testing set, S, we trained a classifier, f , on top of the extracted features to recognize the unseen classes using the training dataset, Dqtrain (Algorithm 2).
[Image: media_image5.png]
[Image: media_image1.png]
)
Claim 16, dependent on Claim 14, also recites the system configured to perform precisely the method of Claim 4, and is thus rejected for the reasons set forth in that claim.
Regarding Claim 17, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 14. Iliyasu, via Ludwig/Elsayed/Iliyasu, further teaches:
classified… as rare events (Iliyasu [p.5 last 2 ¶]…identifying a novel class of attack, which has few examples. For a given task (Dqtrain , Dqtest) sampled from the meta-testing set, S, we trained a classifier, f , on top of the extracted features to recognize the unseen classes using the training dataset, Dqtrain (Algorithm 2).) (Note: unseen classes correspond to rare events; classes of attack correspond to the minority events)
Ludwig, via Ludwig/Elsayed/Iliyasu, further teaches:
The device of claim 14 wherein combining classifications of the first and second machine learning models to predict the class of the information representative of the event comprises performing a union, an intersection, or a combination of predicted probabilities of events classified by both models... (Ludwig [p.4 right def.3, def.4]
[Image: media_image3.png]
) (Note: the majority voting sums up the labels from the classifiers, thus corresponding to a combination of both models)
Independent Claim 18 recites A device comprising: a processor; and a memory device coupled to the processor and having a program stored thereon for execution by the processor to perform operations comprising (Iliyasu [p.5 2nd to last ¶] We trained the discriminative autoencoder during the meta-training stage…) (Note: training an autoencoder requires a processor and memory, which correspond to the storage device and execution by a processor) to perform precisely the method of Claim 1. Thus, Claim 18 is rejected for the reasons set forth in Claim 1.
Claim 19, dependent on Claim 18, also recites the system configured to perform precisely the method of Claim 15, and is thus rejected for the reasons set forth in that claim.
Claim 20, dependent on Claim 18, also recites the system configured to perform precisely the method of Claim 4, and is thus rejected for the reasons set forth in that claim.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Ludwig/Elsayed/Iliyasu further in view of NVISO, “Using Word2Vec to spot anomalies while Threat Hunting using ee-outliers” (hereinafter NVISO).
Regarding Claim 3, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 2. Ludwig/Elsayed/Iliyasu does not teach, but NVISO further teaches:
The method of claim 2 wherein the received information representative of the event comprises a tensor derived from natural language processing
of a change table (NVISO [Introduction] The basic idea behind this is that we try to identify sentences that look “odd” or unusual – but instead of looking at sentences as a sequence of English words, we will look at security event data in order to spot anomalies.) (Note: the security event data corresponds to the change table)
NVISO and Ludwig/Elsayed/Iliyasu are analogous to the present invention because both are from the same field of endeavor of analyzing anomalies. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the dataset from NVISO into Ludwig/Elsayed/Iliyasu’s method of predicting a class. The motivation would be to “introduce the user to the concept of using Machine Learning techniques designed to originally spot anomalies in written (English) sentences, and instead apply them to support the Threat Analyst in spotting anomalies in security events” (NVISO [Introduction]).
Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Ludwig/Elsayed/Iliyasu in view of Huang et al., “A Gated Few-shot Learning Model For Anomaly Detection” (hereinafter Huang).
Regarding Claim 6, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 5. Ludwig/Elsayed/Iliyasu does not explicitly teach, but Huang further teaches:
The method of claim 5 wherein the few shot deep neural network classifier comprises a weighted K-nearest neighbor classifier. (Huang
[Image: media_image6.png]
[p.2 1st col last ¶]
[Image: media_image7.png]
[Image: media_image8.png]
) (Note: the labeling method using the cosine distance to compute similarity between the label and the support set, in light of the Specification [0029], corresponds to a weighted k-nearest neighbor classifier.)
Huang and Ludwig/Elsayed/Iliyasu are analogous to the present invention because both are from the same field of endeavor of machine learning for classification. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the similarity comparison method of few-shot learning from Huang into Ludwig/Elsayed/Iliyasu’s method of predicting the anomalous class. The motivation would be to “determine the importance of known anomaly and unknown anomaly.” (Huang [p.2 1st col 1st ¶]).
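For illustration only, a cosine-similarity weighted K-nearest-neighbor vote of the kind mapped above can be sketched as follows; the support set and labels below are hypothetical, and this is not Huang's exact model:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def weighted_knn(query, support, support_labels, k=3):
    # Score each support example by its cosine similarity to the query,
    # then let the k most similar examples vote, weighted by similarity.
    sims = [cosine_sim(query, s) for s in support]
    top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
    scores = {}
    for i in top:
        scores[support_labels[i]] = scores.get(support_labels[i], 0.0) + sims[i]
    return max(scores, key=scores.get)

# Toy support set: two "normal" examples and one "attack" example.
support = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
labels = ["normal", "normal", "attack"]
print(weighted_knn(np.array([0.0, 0.9]), support, labels, k=1))  # -> "attack"
```

Weighting the neighbor votes by similarity, rather than counting them equally, is what distinguishes a weighted K-nearest-neighbor classifier from the plain variant.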
Regarding Claim 7, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Ludwig/Elsayed/Iliyasu does not teach, but Huang further teaches:
The method of claim 1 wherein the event is a rare event that comprises less than 10% of events. (Huang [p.3 1st col last ¶] We evaluate our few-shot learning method based on the anomaly dataset NSL-KDD [17]. NSL-KDD is an improved version of original KDDCUP’99 [18] dataset, which removed all the repeated records in the entire KDD train and test set and kept only one copy of each record. NSL-KDD is an effective benchmark dataset in the domain of intrusion detection. In the meantime, NSL-KDD is also an imbalanced dataset; it contains fewer samples for some attacks. For instance, the dataset only consists of 0.4% U2R data, although the dataset contains four types of attacks.) (Note: U2R data corresponds to rare events)
Huang and Ludwig/Elsayed/Iliyasu are analogous to the present invention because both are from the same field of endeavor of machine learning for classification. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the dataset comprising rare events from Huang into Ludwig/Elsayed/Iliyasu’s method of training the few-shot network intrusion method. The motivation would be to “determine the importance of known anomaly and unknown anomaly.” (Huang [p.2 1st col 1st ¶]).
Regarding Claim 8, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Ludwig/Elsayed/Iliyasu does not teach, but Huang further teaches:
The method of claim 1 wherein the event is a rare event that comprises 1% or less of events. (Huang [p.3 1st col last ¶] We evaluate our few-shot learning method based on the anomaly dataset NSL-KDD [17]. NSL-KDD is an improved version of original KDDCUP’99 [18] dataset, which removed all the repeated records in the entire KDD train and test set and kept only one copy of each record. NSL-KDD is an effective benchmark dataset in the domain of intrusion detection. In the meantime, NSL-KDD is also an imbalanced dataset; it contains fewer samples for some attacks. For instance, the dataset only consists of 0.4% U2R data, although the dataset contains four types of attacks.) (Note: U2R data corresponds to rare events)
Huang and Ludwig/Elsayed/Iliyasu are analogous to the present invention because both are from the same field of endeavor of machine learning for classification. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the dataset comprising rare events from Huang into Ludwig/Elsayed/Iliyasu’s method of training the few-shot network intrusion method. The motivation would be to “determine the importance of known anomaly and unknown anomaly” (Huang [p.2 1st col 1st ¶]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Ludwig/Elsayed/Iliyasu in view of Vanerio et al., “Ensemble-learning Approaches for Network Security and Anomaly Detection” (hereinafter Vanerio).
Regarding Claim 13, Ludwig/Elsayed/Iliyasu respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Ludwig/Elsayed/Iliyasu does not teach, but Vanerio further teaches:
The method of claim 1 wherein the events comprise changes made to a cloud-based system and wherein minority class events comprise events causing incidents that adversely affect the cloud-based system. (Vanerio [p.1 2nd col 2nd ¶] Network security and anomaly detection represent both a keystone to ISPs, who need to cope with an increasing number of unexpected events that put the network’s performance and integrity at risk. The high-dimensionality of network data provided by current network monitoring systems opens the door to the massive application of machine learning approaches to improve the detection and classification of anomalous events.) (Note: putting the network’s performance and integrity at risk corresponds to adversely affecting the cloud-based system)
Vanerio and Ludwig/Elsayed/Iliyasu are analogous to the present invention because both are from the same field of endeavor of machine learning for network security. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the dataset from Vanerio into Ludwig/Elsayed/Iliyasu’s method of predicting an attack class. The motivation would be that the “high-dimensionality of network data provided by current network monitoring systems opens the door to the massive application of machine learning approaches to improve the detection and classification of anomalous events” (Vanerio [p.1 2nd col 2nd ¶]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEP HAN whose telephone number is (703)756-1346. The examiner can normally be reached Mon-Fri 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached on (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.H./Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122