Prosecution Insights
Last updated: April 19, 2026
Application No. 18/468,213

ONE-CLASS THREAT DETECTION USING FEDERATED LEARNING

Status: Non-Final OA (§103)
Filed: Sep 15, 2023
Examiner: TOLENTINO, RODERICK
Art Unit: 2439
Tech Center: 2400 — Computer Networks
Assignee: Avast Software s.r.o.
OA Round: 3 (Non-Final)

Grant probability: 77% (favorable)
Predicted OA rounds: 3-4
Estimated time to grant: 3y 4m
With interview: 99%
Examiner Intelligence

Career allow rate: 77% (545 granted / 705 resolved), above average at +19.3% vs TC avg
Interview lift: +35.4% for resolved cases with an interview, a strong effect
Typical timeline: 3y 4m average prosecution; 25 applications currently pending
Career history: 730 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      15.7%    -24.3%
§103      56.2%    +16.2%
§102      11.9%    -28.1%
§112       8.3%    -31.7%

Tech Center averages are estimates. Based on career data from 705 resolved cases.
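The headline numbers above are simple ratios over the examiner's resolved cases. As an illustrative check (a sketch using only the counts reported on this page, not an official formula), the allow rate, total application count, and implied Tech Center average can be recomputed directly:

```python
# Recomputing the dashboard's headline figures from the raw counts
# reported above (545 granted, 705 resolved, 25 pending).

granted = 545
resolved = 705
pending = 25

# Career allow rate: granted / resolved, displayed rounded to 77%.
allow_rate = granted / resolved
print(f"allow rate: {allow_rate:.1%}")              # 77.3%

# Total career applications = resolved + currently pending.
print(f"total applications: {resolved + pending}")  # 730

# The "+19.3% vs TC avg" delta implies a Tech Center average of:
tc_avg = allow_rate - 0.193
print(f"implied TC average: {tc_avg:.1%}")          # 58.0%
```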

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

This Office Action is in response to the RCE filed by Applicants on 12/16/2025. Claims 1-19 are pending. This Office Action is Non-Final.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.

Response to Arguments

A) Applicant's arguments with respect to claims 1, 12 and 19 have been considered but are moot because the new ground of rejection does not rely on the exact combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mushtaq (US 11,595,437) in view of Alexander (US 2024/0259436) and Marathe et al. (US 2023/0052231).

As per claim 1, Mushtaq teaches a method of training a machine learning model to classify data as malicious or benign, comprising: receiving in a user device a machine learning model configured to classify data as malicious or benign; training the machine learning model locally on the user device using user-generated data on the user device, the user-generated data classified as known benign (Mushtaq, Col.
18 Lines 12-26 recites “In some embodiments of the present disclosure, at least a portion of the analysis performed by the detection cloud is ported to the on-device machine learning analysis engine 211. The on-device machine learning analysis engine 211 may be part of the endpoint agent and deployed to the endpoint device. For example, the device machine learning analysis engine 211 may include a series of machines learning trained classifiers for analyzing the content of the website being browsed by the end user by dynamically inspecting page contents and server behavior using processing techniques such as computer vision, optical character recognition, and natural language processing (NLP) as described above. In this method, the endpoint agent may hook into the endpoint device browser memory and wait for the user to click a link or browse to a website.”).

But Mushtaq fails to teach wherein training the machine learning model comprises training a user-specific one-class training model to establish a baseline behavioral profile for a user, wherein the user-generated data comprises only at least one of a user's sent email and/or sent messages. However, in an analogous art, Alexander teaches wherein training the machine learning model comprises training a user-specific one-class training model to establish a baseline behavioral profile for a user, wherein the user-generated data comprises only at least one of a user's sent email and/or sent messages (Alexander, Paragraph 0029 recites “Risk prediction module 160, in the illustrated embodiment, retrieves behavior profiles 162 from database 150 and inputs the behavior profiles into a training module 270. Training module 270, in the illustrated embodiment, trains machine learning model 240 based on behavior profiles 162. For example, training module 270 inputs a behavior profile 162, including a behavior profile ID, for a given user and a simulated message into machine learning model 240.
In some embodiments, training module 270 inputs an actual (non-simulated) message sent from the user account corresponding to the behavior profile 162 input to model 240 during training. In addition, training module 270 may input a label indicating whether the simulated message matches the behavior profile 162. Machine learning model 240, in the illustrated embodiment, outputs a risk prediction 242 for the simulated message. For example, model 240 outputs a prediction indicating whether the simulated message matches a behavior profile for the user or whether this simulated message is anomalous. During training, if the prediction output by model 240 is incorrect, training module 270 sends feedback 272 to the model specifying new weights for the model. For example, training module 270 may increase the weights of simulated messages which the model output incorrect predictions. Once the weights of the model 240 have been updated, training module 270 inputs a behavior profile (either the same as the initial input or a different behavior profile) and either the previously simulated message or a new message into machine learning model 240 to see if the model has improved in its prediction. If a new prediction output by model 240 is correct, then training module 270 may pause training at least until behavior profile module 130 generates updated behavior profiles 232 (at which time module 270 retrains machine learning model 240 based on the updated behavior profiles 232). In this way, model 240 may be continuously trained by training module 270 as new messaging activity is added to various behavior profiles.”). 
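To make the disputed limitation concrete: a "one-class" model is trained only on known-benign examples (here, features of a user's sent messages) and flags anything that deviates from that learned baseline. The following is a minimal illustrative sketch, not code from any cited reference; the class name, feature choices, and threshold are hypothetical:

```python
import math

class OneClassBaseline:
    """Toy one-class model: learns per-feature mean/std from benign-only
    samples and flags inputs whose max z-score exceeds a threshold."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.means = []
        self.stds = []

    def fit(self, benign_samples):
        # benign_samples: equal-length feature vectors, e.g.
        # [message_length, link_count, punctuation_ratio].
        n = len(benign_samples)
        dims = len(benign_samples[0])
        self.means = [sum(s[d] for s in benign_samples) / n for d in range(dims)]
        self.stds = [
            math.sqrt(sum((s[d] - self.means[d]) ** 2 for s in benign_samples) / n) or 1.0
            for d in range(dims)
        ]
        return self

    def score(self, x):
        # Largest per-feature deviation from the benign baseline.
        return max(abs(x[d] - self.means[d]) / self.stds[d] for d in range(len(x)))

    def is_anomalous(self, x):
        return self.score(x) > self.threshold

# A user's own sent messages define the baseline; no malicious
# training data is needed, which is the point of a one-class model.
benign = [[120, 1, 0.02], [95, 0, 0.01], [140, 2, 0.03], [110, 1, 0.02]]
model = OneClassBaseline(threshold=3.0)
model.fit(benign)

print(model.is_anomalous([115, 1, 0.02]))   # typical message -> False
print(model.is_anomalous([900, 14, 0.40]))  # far outside baseline -> True
```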
It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Alexander’s Security Techniques For Enterprise Messaging Systems with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of setting a normal behavior of a user to help determine anomalous events.

The combination fails to teach wherein training the machine learning model comprises iteratively updating model parameters on the user device training using stochastic gradient descent; and sending a result of training the machine learning model to a remote server, wherein the result of training comprises one or more gradients generated by the local training on the user device, wherein the user generated data is not sent to the remote server, and wherein the one or more gradients generated as a result of the training are sent to the remote server to be applied to the machine learning model. However, in an analogous art, Marathe teaches wherein training the machine learning model comprises iteratively updating model parameters on the user device training using stochastic gradient descent (Marathe, Paragraph 0041 recites “wherein training the machine learning model comprises iteratively updating model parameters on the user device training using stochastic gradient descent; and sending a result of training the machine learning model to a remote server, wherein the result of training comprises one or more gradients generated by the local training on the user device, wherein the user generated data is not sent to the remote server, and wherein the one or more gradients generated as a result of the training are sent to the remote server to be applied to the machine learning model”); and sending a result of training the machine learning model to a remote server, wherein the result of training comprises one or more gradients generated by the local training on the user device, wherein the user generated data is not sent to the remote server, and wherein the one or more gradients generated as a result of the training are sent to the remote server to be applied to the machine learning model (Marathe, Paragraph 0044 recites “In some embodiments, the locally updated version of the machine learning model 202 may generate a set of model parameter update gradients 215, such as the model parameter updates/gradients 126 as shown in FIG. 1. These model parameter update gradients may then be clipped at a parameter clipping component 216 according to a global clipping threshold 204 provided by the aggregation server 200, in some embodiments. This global clipping threshold 204 may be selected by the aggregation server for a variety of reasons in various embodiments, including, for example, machine learning model convergence rate and training accuracy. It should be understood, however, that these are merely examples and that other parameters for choosing the threshold c may be imagined. This clipping of the parameter updates according to the provided global clipping threshold 204 may bound sensitivity of the aggregated federated learning model to the model parameter update gradients, in some embodiments.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Marathe’s Subject-Level Granular Differential Privacy In Federated Learning with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of using federated learning and protecting an individual's data.

As per claim 12, Mushtaq teaches a method of training a machine learning model to classify data as malicious or benign, comprising: sending a machine learning model configured to classify data as malicious or benign from a server to a user device (Mushtaq, Col.
18 Lines 12-26 recites “In some embodiments of the present disclosure, at least a portion of the analysis performed by the detection cloud is ported to the on-device machine learning analysis engine 211. The on-device machine learning analysis engine 211 may be part of the endpoint agent and deployed to the endpoint device. For example, the device machine learning analysis engine 211 may include a series of machines learning trained classifiers for analyzing the content of the website being browsed by the end user by dynamically inspecting page contents and server behavior using processing techniques such as computer vision, optical character recognition, and natural language processing (NLP) as described above. In this method, the endpoint agent may hook into the endpoint device browser memory and wait for the user to click a link or browse to a website.”).

But fails to teach receiving from the user device a result of training the machine learning model locally on the user device using user-generated data on the user device, the user-generated data classified as known benign, wherein the user-generated data comprises only at least one of user-sent email, user-sent instant messages, and/or user-sent communication. However, in an analogous art Alexander teaches receiving from the user device a result of training the machine learning model locally on the user device using user-generated data on the user device, the user-generated data classified as known benign, wherein the user-generated data comprises only at least one of user-sent email, user-sent instant messages, and/or user-sent communication (Alexander, Paragraph 0029 recites “Risk prediction module 160, in the illustrated embodiment, retrieves behavior profiles 162 from database 150 and inputs the behavior profiles into a training module 270. Training module 270, in the illustrated embodiment, trains machine learning model 240 based on behavior profiles 162.
For example, training module 270 inputs a behavior profile 162, including a behavior profile ID, for a given user and a simulated message into machine learning model 240. In some embodiments, training module 270 inputs an actual (non-simulated) message sent from the user account corresponding to the behavior profile 162 input to model 240 during training. In addition, training module 270 may input a label indicating whether the simulated message matches the behavior profile 162. Machine learning model 240, in the illustrated embodiment, outputs a risk prediction 242 for the simulated message. For example, model 240 outputs a prediction indicating whether the simulated message matches a behavior profile for the user or whether this simulated message is anomalous. During training, if the prediction output by model 240 is incorrect, training module 270 sends feedback 272 to the model specifying new weights for the model. For example, training module 270 may increase the weights of simulated messages which the model output incorrect predictions. Once the weights of the model 240 have been updated, training module 270 inputs a behavior profile (either the same as the initial input or a different behavior profile) and either the previously simulated message or a new message into machine learning model 240 to see if the model has improved in its prediction. If a new prediction output by model 240 is correct, then training module 270 may pause training at least until behavior profile module 130 generates updated behavior profiles 232 (at which time module 270 retrains machine learning model 240 based on the updated behavior profiles 232). In this way, model 240 may be continuously trained by training module 270 as new messaging activity is added to various behavior profiles.”). 
It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Alexander’s Security Techniques For Enterprise Messaging Systems with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of setting a normal behavior of a user to help determine anomalous events.

The combination fails to teach wherein training the machine learning model comprises local iterative training using stochastic gradient descent and wherein receiving from the user device the result of training comprises receiving only one or more gradients generated as a result of the training without receiving the user-generated data, aggregating the received gradients to update a server version of the machine learning model, and sending an updated version of the machine learning model to one or more user devices. However, in an analogous art, Marathe teaches wherein training the machine learning model comprises local iterative training using stochastic gradient descent (Marathe, Paragraph 0041 recites “wherein training the machine learning model comprises iteratively updating model parameters on the user device training using stochastic gradient descent; and sending a result of training the machine learning model to a remote server, wherein the result of training comprises one or more gradients generated by the local training on the user device, wherein the user generated data is not sent to the remote server, and wherein the one or more gradients generated as a result of the training are sent to the remote server to be applied to the machine learning model”); wherein receiving from the user device the result of training comprises receiving only one or more gradients generated as a result of the training without receiving the user-generated data, aggregating the received gradients to update a server version of the machine learning model, and sending an updated version of the machine learning model to one or more user devices (Marathe, Paragraph 0044 recites “In some embodiments, the locally updated version of the machine learning model 202 may generate a set of model parameter update gradients 215, such as the model parameter updates/gradients 126 as shown in FIG. 1. These model parameter update gradients may then be clipped at a parameter clipping component 216 according to a global clipping threshold 204 provided by the aggregation server 200, in some embodiments. This global clipping threshold 204 may be selected by the aggregation server for a variety of reasons in various embodiments, including, for example, machine learning model convergence rate and training accuracy. It should be understood, however, that these are merely examples and that other parameters for choosing the threshold c may be imagined. This clipping of the parameter updates according to the provided global clipping threshold 204 may bound sensitivity of the aggregated federated learning model to the model parameter update gradients, in some embodiments.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Marathe’s Subject-Level Granular Differential Privacy In Federated Learning with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of using federated learning and protecting an individual's data.

Regarding claim 19, claim 19 is directed to a system corresponding to the method of claim 1. Claim 19 is similar in scope to claim 1 and is therefore rejected under a similar rationale.

Claims 2-8 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Mushtaq (US 11,595,437), Alexander (US 2024/0259436) and Marathe et al. (US 2023/0052231), and further in view of Zhang et al. (US 2024/0364723).
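The Marathe mechanism relied on above (local SGD on private data, clipping each client gradient to a server-supplied global L2 threshold, and applying only aggregated gradients to the server model) can be sketched in a few lines. The linear model, learning rate, and threshold value here are illustrative assumptions, not details from the reference:

```python
# One federated round: each device computes a gradient on its private
# data by SGD, clips it to the server's global L2 threshold, and sends
# only the clipped gradient; the server averages and applies them.
# Raw user data never leaves the device.

def local_gradient(weights, samples):
    # Gradient of mean squared error for a linear model y ~ w.x.
    grad = [0.0] * len(weights)
    for x, y in samples:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for d, xi in enumerate(x):
            grad[d] += 2 * err * xi / len(samples)
    return grad

def clip(grad, threshold):
    # Scale down any gradient whose L2 norm exceeds the global threshold.
    norm = sum(g * g for g in grad) ** 0.5
    return [g * threshold / norm for g in grad] if norm > threshold else grad

def server_round(weights, client_grads, threshold, lr=0.1):
    clipped = [clip(g, threshold) for g in client_grads]
    avg = [sum(gs) / len(clipped) for gs in zip(*clipped)]
    return [w - lr * g for w, g in zip(weights, avg)]

# Two devices hold private (x, y) samples the server never sees.
device_a = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
device_b = [([1.0, 1.0], 1.0)]

weights = [0.0, 0.0]
for _ in range(200):  # repeated rounds of local SGD plus aggregation
    grads = [local_gradient(weights, d) for d in (device_a, device_b)]
    weights = server_round(weights, grads, threshold=1.0)
print([round(w, 2) for w in weights])
```

Clipping to a global threshold bounds how much any single device can move the aggregate, which is the sensitivity-bounding property the quoted Paragraph 0044 describes.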
As per claim 2, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 1, but fails to teach wherein the user-generated data comprises user communication. However, in an analogous art, Zhang teaches wherein the user-generated data comprises user communication (Zhang, Paragraph 0028 recites “As such, the machine learning model may operate more efficiently as more fraudulent users are detected. For example, if a user receives an email from an unknown source, the cloud platform 115 may send the email through the machine learning model for detection. The machine learning model may use the metadata of the email along with the data of known fraudulent users and patterns learned from detecting previous fraudulent users to detect if the unknown source may be a fraudulent user. If the unknown source is detected to be a fraudulent user, the machine learning model may learn from the patterns presented in the metadata of that email and the cloud platform 115 may prevent the user from receiving the email entirely. Further, if the machine learning model detects that the unknown source is an authentic user, such user may be added to a list of authentic users to allow any subsequent emails to bypass the detection models, therefore reducing the processing power associated with running the machine learning model. As such, the operation of the machine learning model may improve the user experience by providing a safer and more secure way of receiving emails.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Zhang’s content-oblivious fraudulent email detection system with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication.

As per claim 3, Mushtaq in combination with Alexander, Marathe and Zhang teaches the method of training a machine learning model to classify data as malicious or benign of claim 2, and Zhang further teaches wherein the user communication comprises at least one of email and/or instant messages (Zhang, Paragraph 0028 recites “As such, the machine learning model may operate more efficiently as more fraudulent users are detected. For example, if a user receives an email from an unknown source, the cloud platform 115 may send the email through the machine learning model for detection. The machine learning model may use the metadata of the email along with the data of known fraudulent users and patterns learned from detecting previous fraudulent users to detect if the unknown source may be a fraudulent user. If the unknown source is detected to be a fraudulent user, the machine learning model may learn from the patterns presented in the metadata of that email and the cloud platform 115 may prevent the user from receiving the email entirely. Further, if the machine learning model detects that the unknown source is an authentic user, such user may be added to a list of authentic users to allow any subsequent emails to bypass the detection models, therefore reducing the processing power associated with running the machine learning model. As such, the operation of the machine learning model may improve the user experience by providing a safer and more secure way of receiving emails.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Zhang’s content-oblivious fraudulent email detection system with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication.
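The Zhang passage quoted above describes a gating flow: unknown senders are screened by the model, detected fraudsters are blocked, and senders judged authentic are added to a list that lets their later emails bypass the model entirely. A minimal sketch of that flow follows; the function names and the metadata-based stub classifier are hypothetical, not from the reference:

```python
# Sketch of the allowlist-bypass flow in the quoted Zhang passage:
# unknown senders are screened by a model, detected fraudsters are
# blocked, and authenticated senders skip screening on later emails.

authentic_senders = set()   # senders cleared by a prior model decision
blocked_senders = set()

def model_predicts_fraud(metadata):
    # Stub for the trained detection model; a real system would score
    # the email's metadata against learned fraud patterns.
    return metadata.get("spoofed_domain", False)

def screen_email(sender, metadata):
    if sender in authentic_senders:
        return "deliver (bypassed model)"   # saves a model invocation
    if sender in blocked_senders or model_predicts_fraud(metadata):
        blocked_senders.add(sender)
        return "block"
    authentic_senders.add(sender)           # future mail skips the model
    return "deliver"

print(screen_email("alice@example.com", {"spoofed_domain": False}))  # deliver
print(screen_email("alice@example.com", {"spoofed_domain": True}))   # deliver (bypassed model)
print(screen_email("mallory@example.com", {"spoofed_domain": True})) # block
```

The second call shows the tradeoff the passage claims as a benefit: once on the authentic list, a sender's mail is delivered without re-running the model, reducing processing cost.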
As per claim 4, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 1, but fails to teach wherein the user-generated data comprises at least one of a user's sent email and/or sent messages. However, in an analogous art, Zhang teaches wherein the user-generated data comprises at least one of a user's sent email and/or sent messages (Zhang, Paragraph 0028 recites “As such, the machine learning model may operate more efficiently as more fraudulent users are detected. For example, if a user receives an email from an unknown source, the cloud platform 115 may send the email through the machine learning model for detection. The machine learning model may use the metadata of the email along with the data of known fraudulent users and patterns learned from detecting previous fraudulent users to detect if the unknown source may be a fraudulent user. If the unknown source is detected to be a fraudulent user, the machine learning model may learn from the patterns presented in the metadata of that email and the cloud platform 115 may prevent the user from receiving the email entirely. Further, if the machine learning model detects that the unknown source is an authentic user, such user may be added to a list of authentic users to allow any subsequent emails to bypass the detection models, therefore reducing the processing power associated with running the machine learning model. As such, the operation of the machine learning model may improve the user experience by providing a safer and more secure way of receiving emails.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Zhang’s content-oblivious fraudulent email detection system with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication.

As per claim 5, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 1, but fails to teach wherein the user-generated data comprises user-generated data generated by a user classified as a reputable sender. However, in an analogous art, Zhang teaches wherein the user-generated data comprises user-generated data generated by a user classified as a reputable sender (Zhang, Paragraph 0028 recites “As such, the machine learning model may operate more efficiently as more fraudulent users are detected. For example, if a user receives an email from an unknown source, the cloud platform 115 may send the email through the machine learning model for detection. The machine learning model may use the metadata of the email along with the data of known fraudulent users and patterns learned from detecting previous fraudulent users to detect if the unknown source may be a fraudulent user. If the unknown source is detected to be a fraudulent user, the machine learning model may learn from the patterns presented in the metadata of that email and the cloud platform 115 may prevent the user from receiving the email entirely. Further, if the machine learning model detects that the unknown source is an authentic user, such user may be added to a list of authentic users to allow any subsequent emails to bypass the detection models, therefore reducing the processing power associated with running the machine learning model. As such, the operation of the machine learning model may improve the user experience by providing a safer and more secure way of receiving emails.”).
It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Zhang’s content-oblivious fraudulent email detection system with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication.

As per claim 6, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 1, but fails to teach training the machine learning model on the user device using data classified as known malicious. However, in an analogous art, Zhang teaches training the machine learning model on the user device using data classified as known malicious (Zhang, Paragraph 0028 recites “As such, the machine learning model may operate more efficiently as more fraudulent users are detected. For example, if a user receives an email from an unknown source, the cloud platform 115 may send the email through the machine learning model for detection. The machine learning model may use the metadata of the email along with the data of known fraudulent users and patterns learned from detecting previous fraudulent users to detect if the unknown source may be a fraudulent user. If the unknown source is detected to be a fraudulent user, the machine learning model may learn from the patterns presented in the metadata of that email and the cloud platform 115 may prevent the user from receiving the email entirely. Further, if the machine learning model detects that the unknown source is an authentic user, such user may be added to a list of authentic users to allow any subsequent emails to bypass the detection models, therefore reducing the processing power associated with running the machine learning model. As such, the operation of the machine learning model may improve the user experience by providing a safer and more secure way of receiving emails.” And Paragraph 0015 recites “Additionally, or alternatively, the machine learning model and the corresponding detection models may be trained with both a list of known fraudulent and known authentic users to enhance the detection of fraudulent users from the metadata of emails.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Zhang’s content-oblivious fraudulent email detection system with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication.

As per claim 7, Mushtaq in combination with Alexander, Marathe and Zhang teaches the method of training a machine learning model to classify data as malicious or benign of claim 6, and Zhang further teaches wherein the data classified as known malicious comprises at least one of data from known malicious sources or data identified by a trusted user as malicious (Zhang, Paragraph 0028 recites “As such, the machine learning model may operate more efficiently as more fraudulent users are detected. For example, if a user receives an email from an unknown source, the cloud platform 115 may send the email through the machine learning model for detection. The machine learning model may use the metadata of the email along with the data of known fraudulent users and patterns learned from detecting previous fraudulent users to detect if the unknown source may be a fraudulent user. If the unknown source is detected to be a fraudulent user, the machine learning model may learn from the patterns presented in the metadata of that email and the cloud platform 115 may prevent the user from receiving the email entirely. Further, if the machine learning model detects that the unknown source is an authentic user, such user may be added to a list of authentic users to allow any subsequent emails to bypass the detection models, therefore reducing the processing power associated with running the machine learning model. As such, the operation of the machine learning model may improve the user experience by providing a safer and more secure way of receiving emails.” And Paragraph 0015 recites “Additionally, or alternatively, the machine learning model and the corresponding detection models may be trained with both a list of known fraudulent and known authentic users to enhance the detection of fraudulent users from the metadata of emails.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Zhang’s content-oblivious fraudulent email detection system with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication.

As per claim 8, Mushtaq in combination with Alexander, Marathe and Zhang teaches the method of training a machine learning model to classify data as malicious or benign of claim 7, and Zhang further teaches wherein data from known malicious sources comprises data associated with at least one of an email, domain, phone number, or contact information associated with previously known malicious data (Zhang, Paragraph 0028 recites “As such, the machine learning model may operate more efficiently as more fraudulent users are detected. For example, if a user receives an email from an unknown source, the cloud platform 115 may send the email through the machine learning model for detection.
The machine learning model may use the metadata of the email along with the data of known fraudulent users and patterns learned from detecting previous fraudulent users to detect if the unknown source may be a fraudulent user. If the unknown source is detected to be a fraudulent user, the machine learning model may learn from the patterns presented in the metadata of that email and the cloud platform 115 may prevent the user from receiving the email entirely. Further, if the machine learning model detects that the unknown source is an authentic user, such user may be added to a list of authentic users to allow any subsequent emails to bypass the detection models, therefore reducing the processing power associated with running the machine learning model. As such, the operation of the machine learning model may improve the user experience by providing a safer and more secure way of receiving emails.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Zhang’s content-oblivious fraudulent email detection system with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication. As per claim 13, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 12, but fails to teach wherein the user-generated data comprises at least one of user-sent email, user-sent instant messages, and/or user-sent communication. However, in an analogous art, Zhang teaches wherein the user-generated data comprises at least one of user-sent email, user-sent instant messages, and/or user-sent communication (Zhang, Paragraph 0028 recites “As such, the machine learning model may operate more efficiently as more fraudulent users are detected. 
For example, if a user receives an email from an unknown source, the cloud platform 115 may send the email through the machine learning model for detection. The machine learning model may use the metadata of the email along with the data of known fraudulent users and patterns learned from detecting previous fraudulent users to detect if the unknown source may be a fraudulent user. If the unknown source is detected to be a fraudulent user, the machine learning model may learn from the patterns presented in the metadata of that email and the cloud platform 115 may prevent the user from receiving the email entirely. Further, if the machine learning model detects that the unknown source is an authentic user, such user may be added to a list of authentic users to allow any subsequent emails to bypass the detection models, therefore reducing the processing power associated with running the machine learning model. As such, the operation of the machine learning model may improve the user experience by providing a safer and more secure way of receiving emails.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Zhang’s content-oblivious fraudulent email detection system with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication. As per claim 14, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 12, but fails to teach wherein the result of training the machine learning model further comprises a result of training the machine learning model on the user device using data classified as known malicious. 
However, in an analogous art, Zhang teaches wherein the result of training the machine learning model further comprises a result of training the machine learning model on the user device using data classified as known malicious (Zhang, Paragraph 0028 recites “As such, the machine learning model may operate more efficiently as more fraudulent users are detected. For example, if a user receives an email from an unknown source, the cloud platform 115 may send the email through the machine learning model for detection. The machine learning model may use the metadata of the email along with the data of known fraudulent users and patterns learned from detecting previous fraudulent users to detect if the unknown source may be a fraudulent user. If the unknown source is detected to be a fraudulent user, the machine learning model may learn from the patterns presented in the metadata of that email and the cloud platform 115 may prevent the user from receiving the email entirely. Further, if the machine learning model detects that the unknown source is an authentic user, such user may be added to a list of authentic users to allow any subsequent emails to bypass the detection models, therefore reducing the processing power associated with running the machine learning model. As such, the operation of the machine learning model may improve the user experience by providing a safer and more secure way of receiving emails.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Zhang’s content-oblivious fraudulent email detection system with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication. 
As per claim 15, Mushtaq in combination with Alexander, Marathe and Zhang teaches the method of training a machine learning model to classify data as malicious or benign of claim 14, Zhang further teaches wherein the data classified as known malicious comprises at least one of data from known malicious sources or data identified by a trusted user as malicious (Zhang, Paragraph 0028 recites “As such, the machine learning model may operate more efficiently as more fraudulent users are detected. For example, if a user receives an email from an unknown source, the cloud platform 115 may send the email through the machine learning model for detection. The machine learning model may use the metadata of the email along with the data of known fraudulent users and patterns learned from detecting previous fraudulent users to detect if the unknown source may be a fraudulent user. If the unknown source is detected to be a fraudulent user, the machine learning model may learn from the patterns presented in the metadata of that email and the cloud platform 115 may prevent the user from receiving the email entirely. Further, if the machine learning model detects that the unknown source is an authentic user, such user may be added to a list of authentic users to allow any subsequent emails to bypass the detection models, therefore reducing the processing power associated with running the machine learning model. As such, the operation of the machine learning model may improve the user experience by providing a safer and more secure way of receiving emails.” And Paragraph 0015 recites “Additionally, or alternatively, the machine learning model and the corresponding detection models may be trained with both a list of known fraudulent and known authentic users to enhance the detection of fraudulent users from the metadata of emails.”). 
It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Zhang’s content-oblivious fraudulent email detection system with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication. Claim(s) 9 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mushtaq (US 11,595,437), Alexander (US 2024/0259436) and Marathe et al. (US 2023/0052231), and in further view of Hayden et al. (US 2018/0367553). As per claim 9, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 1, but fails to teach wherein training the machine learning model comprises training a graph neural network. However, in an analogous art, Hayden teaches wherein training the machine learning model comprises training a graph neural network (Hayden, Paragraph 0084 recites “For example, each input node 720 in the input layer can represent a different corresponding byte of a fixed length message (in this case, N = 162 input nodes for a 162-byte message, as might be used in a message packet in a 1553 network). The output layer includes two output nodes 740, representing true and false (such as anomalous or not). The hidden layer has (N×2)/3 = 108 hidden nodes (i.e., two-thirds the number of input nodes), but this is only an example, and other embodiments are not so limited. For example, the number of hidden nodes could be two-thirds times the number of input and output nodes. Training the neural network 710 generally refers to finding the best weights that can classify the incoming messages 715. 
In more detail, training the neural network 710 includes presenting the neural network training data and then using machine learning techniques such as stochastic gradient descent and backpropagation to determine the weights of the corresponding first and second connections 725 and 735 so that the neural network correctly identifies (e.g., classifies) the incoming messages as anomalous or not. The neural network 710 may be a simple network (e.g., looking for only one type of anomaly), so a single hidden layer may suffice. In general, increasing the number of hidden layers makes the neural network more adaptable and trainable, but if the task of the neural network is simple enough, more hidden layers will not improve the accuracy of the neural network.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Hayden’s Cyber warning receiver with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of having the ability to train a model about communication anomalies. As per claim 16, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 12, but fails to teach wherein training the machine learning model comprises training at least one of a one-class training model and/or a graph neural network. However, in an analogous art Hayden teaches wherein training the machine learning model comprises training at least one of a one-class training model and/or a graph neural network (Hayden, Paragraph 0084 recites “For example, each input node 720 in the input layer can represent a different corresponding byte of a fixed length message (in this case, N =162 input nodes for a 162-byte message, as might be used in a message packet in a 1553 network). 
The output layer includes two output nodes 740, representing true and false (such as anomalous or not). The hidden layer has (N×2)/3=108 hidden nodes (i.e., two-thirds the number of input nodes), but this is only an example, and other embodiments are not so limited. For example, the number of hidden nodes could be two-thirds times the number of input and output nodes. Training the neural network 710 generally refers to finding the best weights that can classify the incoming messages 715. In more detail, training the neural network 710 includes presenting the neural network training data and then using machine learning techniques such as stochastic gradient descent and backpropagation to determine the weights of the corresponding first and second connections 725 and 735 so that the neural network correctly identifies (e.g., classifies) the incoming messages as anomalous or not. The neural network 710 may be a simple network (e.g., looking for only one type of anomaly), so a single hidden layer may suffice. In general, increasing the number of hidden layers makes the neural network more adaptable and trainable, but if the task of the neural network is simple enough, more hidden layers will not improve the accuracy of the neural network.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Hayden’s Cyber warning receiver with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of having the ability to train a model about communication anomalies. Claim(s) 10, 11, 17 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mushtaq (US 11,595,437), Alexander (US 2024/0259436) and Marathe et al. (US 2023/0052231) and in further view of Sircar (US 2023/0188563). 
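As an illustrative aside (not part of the prosecution record), the single-hidden-layer sizing that Hayden's paragraph 0084 describes in the claim 9 and 16 rejections above, namely N = 162 input nodes for a 162-byte message, (N×2)/3 = 108 hidden nodes, and two output nodes, can be sketched as follows. The weights below are random placeholders, since Hayden trains them with stochastic gradient descent and backpropagation, and all identifiers are hypothetical.

```python
import numpy as np

# Illustrative reconstruction of the feedforward sizing described in
# Hayden paragraph 0084; weights are untrained random placeholders.
N_IN = 162                 # one input node per byte of a 162-byte message
N_HID = (N_IN * 2) // 3    # "two-thirds the number of input nodes" = 108
N_OUT = 2                  # two output nodes: anomalous or not

rng = np.random.default_rng(0)
W1 = rng.standard_normal((N_IN, N_HID)) * 0.01   # first connections (725)
W2 = rng.standard_normal((N_HID, N_OUT)) * 0.01  # second connections (735)

def classify(message_bytes: np.ndarray) -> np.ndarray:
    """Forward pass: softmax probabilities over the two output nodes."""
    h = np.tanh(message_bytes @ W1)
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()

# One normalized 162-byte message run through the untrained network.
msg = rng.integers(0, 256, size=N_IN).astype(float) / 255.0
probs = classify(msg)
```

Note that Hayden's quoted passage describes a conventional fully connected network, not a graph neural network; the sketch follows the quoted sizing only.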
As per claim 10, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 1, but fails to teach wherein training the machine learning model comprises training using stochastic gradient descent, and wherein one or more gradients generated as a result of the training are sent back to the server to be applied to the machine learning model. However, in an analogous art, Sircar teaches wherein training the machine learning model comprises training using stochastic gradient descent, and wherein one or more gradients generated as a result of the training are sent back to the server to be applied to the machine learning model (Sircar, Paragraph 0060 recites “The system can repeatedly perform the process 400 on inputs selected from a set of training data as part of a conventional machine learning training technique to train the machine learning models, e.g., a gradient descent with backpropagation training technique that uses a conventional optimizer, e.g., stochastic gradient descent, RMSprop, or Adam optimizer, including Adam with weight decay (“AdamW”) optimizer.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Sircar’s identifying a phishing attempt with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication. As per claim 11, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 1, but fails to teach wherein training the machine learning model comprises training a first model to identify malicious data and a second model to identify benign data. 
However, in an analogous art, Sircar teaches wherein training the machine learning model comprises training a first model to identify malicious data and a second model to identify benign data (Sircar, Paragraph 0022 recites “The system 100 includes a plurality of machine learning models 120A-C that are each configured to process the input, data derived from the input, or both to generate a respective embedding, e.g., embedding 122A, of the network accessible page 102 from which a corresponding score, e.g., score 122A, can be determined. The system also includes an output module 130 that is configured to generate the output classification 152 from the respective scores 122A-C. As used herein, an embedding is an ordered collection of numeric values, e.g., a matrix or vector of floating point or quantized values.” It would be obvious that a system with the ability to have multiple models, which perform different tasks, could apply them to malicious and benign data.). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Sircar’s identifying a phishing attempt with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication. As per claim 17, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 12, but fails to teach wherein training the machine learning model comprises training using stochastic gradient descent, and wherein receiving from the user device a result of training the machine learning model comprises receiving one or more gradients generated as a result of the training and applying the one or more received gradients to the machine learning model. 
However, in an analogous art, Sircar teaches wherein training the machine learning model comprises training using stochastic gradient descent, and wherein receiving from the user device a result of training the machine learning model comprises receiving one or more gradients generated as a result of the training and applying the one or more received gradients to the machine learning model (Sircar, Paragraph 0060 recites “The system can repeatedly perform the process 400 on inputs selected from a set of training data as part of a conventional machine learning training technique to train the machine learning models, e.g., a gradient descent with backpropagation training technique that uses a conventional optimizer, e.g., stochastic gradient descent, RMSprop, or Adam optimizer, including Adam with weight decay (“AdamW”) optimizer.”). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Sircar’s identifying a phishing attempt with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication. As per claim 18, Mushtaq in combination with Alexander and Marathe teaches the method of training a machine learning model to classify data as malicious or benign of claim 12, but fails to teach wherein training the machine learning model comprises training a first model to identify malicious data and a second model to identify benign data. 
However, in an analogous art, Sircar teaches wherein training the machine learning model comprises training a first model to identify malicious data and a second model to identify benign data (Sircar, Paragraph 0022 recites “The system 100 includes a plurality of machine learning models 120A-C that are each configured to process the input, data derived from the input, or both to generate a respective embedding, e.g., embedding 122A, of the network accessible page 102 from which a corresponding score, e.g., score 122A, can be determined. The system also includes an output module 130 that is configured to generate the output classification 152 from the respective scores 122A-C. As used herein, an embedding is an ordered collection of numeric values, e.g., a matrix or vector of floating point or quantized values.” It would be obvious that a system with the ability to have multiple models, which perform different tasks, could apply them to malicious and benign data.). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Sircar’s identifying a phishing attempt with Mushtaq’s Method And System For Stopping Multi-vector Phishing Attacks Using Cloud Powered Endpoint Agents because it offers the advantage of preventing fraudulent communication. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to RODERICK TOLENTINO whose telephone number is (571) 272-2661. The examiner can normally be reached Mon-Fri 8am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu Pham, can be reached at (571) 270-5002. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RODERICK TOLENTINO/ Primary Examiner, Art Unit 2439
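As a further illustrative aside, the flow recited in claims 10 and 17 above, on-device training by stochastic gradient descent with the resulting gradients returned to a server and applied to the shared model, can be sketched as below. The toy squared-error loss and all names are hypothetical and appear in neither the claims nor the cited references.

```python
import numpy as np

# Hypothetical sketch of the claimed federated flow: the server holds the
# model weights, a user device computes an SGD gradient on local data, and
# only the gradient (not the raw data) is returned and applied server-side.
rng = np.random.default_rng(1)
weights = np.zeros(4)            # shared model held by the server

def device_gradient(w, x, y):
    """On-device step: gradient of squared error (x.w - y)^2 for one sample."""
    pred = x @ w
    return 2.0 * (pred - y) * x

def server_apply(w, grad, lr=0.1):
    """Server step: apply a received gradient to the shared model."""
    return w - lr * grad

# One federated round: the device trains on local data it never uploads.
x_local = rng.standard_normal(4)
y_local = 1.0                    # e.g. label "malicious"
g = device_gradient(weights, x_local, y_local)
weights = server_apply(weights, g)
```

In a real federated deployment the server would average gradients from many devices before applying them; this single-device round only illustrates the gradient-exchange step the rejection maps to Sircar's paragraph 0060.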

Prosecution Timeline

Sep 15, 2023
Application Filed
May 22, 2025
Non-Final Rejection — §103
Aug 27, 2025
Response Filed
Sep 12, 2025
Final Rejection — §103
Dec 16, 2025
Request for Continued Examination
Dec 20, 2025
Response after Non-Final Action
Jan 28, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603907
SERVER AND METHOD FOR PROVIDING ONLINE THREAT DATA BASED ON USER-CUSTOMIZED KEYWORDS FOR PRIVATE CHANNEL
2y 5m to grant Granted Apr 14, 2026
Patent 12592915
INFERENCE-BASED SELECTIVE FLOW INSPECTION
2y 5m to grant Granted Mar 31, 2026
Patent 12580946
SYSTEMS AND METHODS FOR TRIGGERING TOKEN ALERTS
2y 5m to grant Granted Mar 17, 2026
Patent 12580948
CYBERSECURITY OPERATIONS MITIGATION MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12572632
SYSTEMS AND METHODS FOR DATA SECURITY MODEL MODIFICATION AND ANOMALY DETECTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+35.4%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 705 resolved cases by this examiner. Grant probability derived from career allow rate.
