Prosecution Insights
Last updated: April 19, 2026
Application No. 18/758,550

METHODS AND DEVICES FOR ENHANCING SECURITY PROTECTION FOR A NETWORK SERVICE DEVICE

Non-Final OA: §101, §103, §112
Filed: Jun 28, 2024
Examiner: AHMED, MAHABUB S
Art Unit: 2434
Tech Center: 2400 — Computer Networks
Assignee: F5 Inc.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 7m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 86% (247 granted / 289 resolved; +27.5% vs TC avg; above average)
Interview Lift: +7.8% (moderate, roughly +8%; based on resolved cases with interview)
Typical Timeline: 2y 7m average prosecution; 17 applications currently pending
Career History: 306 total applications across all art units

Statute-Specific Performance

§101: 17.3% (-22.7% vs TC avg)
§103: 35.4% (-4.6% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 18.4% (-21.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 289 resolved cases.
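The four per-statute deltas are mutually consistent with a single Tech Center baseline. A minimal Python check, assuming each delta is defined as the examiner's per-statute allowance rate minus the Tech Center average (the dictionaries below are transcribed from the table; no external data source or API is assumed):

```python
# Sanity-check the statute-specific table: each delta is assumed to be
# (examiner's allowance rate) - (Tech Center average), so the implied
# TC baseline can be recovered per statute. Values transcribed from above.
examiner_rate = {"101": 17.3, "103": 35.4, "102": 10.9, "112": 18.4}
delta_vs_tc = {"101": -22.7, "103": -4.6, "102": -29.1, "112": -21.6}

implied_tc_avg = {
    statute: round(examiner_rate[statute] - delta_vs_tc[statute], 1)
    for statute in examiner_rate
}

# Every statute recovers the same 40.0% baseline, consistent with a
# single TC-wide estimate rather than per-statute baselines.
print(implied_tc_avg)
```

That all four statutes recover 40.0% suggests the dashboard compares every statute against one TC-wide allowance estimate.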

Office Action

Grounds: §101, §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the communication filed on 06/28/2024.

Status of claims in the instant application: Claims 1-20 are pending.

Election/Restrictions

No claim restriction is warranted at the applicant's initial time of filing for patent.

Priority

The instant application does not claim benefit of any earlier filed application for patent.

Information Disclosure Statement

The Information Disclosure Statements (IDS) filed on 06/28/2024, 09/24/2024 and 08/26/2025 have been considered, and signed copies of the IDS forms are attached to this Office action.

Drawings

The drawings filed on 06/28/2024 have been inspected and are in compliance with MPEP 608.02.

Specification

The specification filed on 06/28/2024 has been inspected and is in compliance with MPEP 608.01.

Claim Interpretation

No claim interpretation under 35 U.S.C. 112(f) is warranted.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-5, 9-10, 14-15 and 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Claim 4 recites, “The method of claim 1, wherein the obtaining the traffic data transmitted from the network service device further comprises: obtaining the traffic data …”. However, there is no earlier recitation of “obtaining the traffic data”; “the obtaining the traffic data” therefore lacks antecedent basis, making the claim language indefinite/ambiguous. Claim 4 is accordingly rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Claims 5, 9-10, 14-15 and 19-20 are rejected for reasons similar to those for claim 4. Appropriate correction is required.

**** Note: For examination purposes, claim 4 (and the other similar claims) is interpreted to recite, “The method of claim 1, wherein obtaining the traffic data transmitted from the network service device further comprises: obtaining the traffic data …”.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-7, 9-12, 14-17 and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 recites the limitations, “executing a security enhancing model to detect one or more security anomalies from the retrieved one or more attributes, wherein the security enhancing model is not subscribed by the network service device; in response to the one or more security anomalies being detected, generating a notification comprising information on at least one of the one or more security anomalies and the security enhancing model”. As recited in the claim limitations above, the detection of an anomaly can be considered an abstract idea, i.e., a mental process/step of comparing some data/attribute with a threshold value and flagging it as an anomaly if it exceeds the threshold. These limitations can reasonably be performed in the human mind with the aid of pencil and paper as appropriate. The remaining limitations of collecting data from network communication and transmitting data are considered ordinary computing/network activities/functions. The claim does not recite any other limitation that could be considered significantly more than a mental step or that integrates the abstract idea into a practical application. Therefore, claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Furthermore, dependent claims 2, 4 and 5 further recite receiving/transmitting of data, which is considered an ordinary computing/network function; the gathering (sending/receiving) of data is considered insignificant extra-solution activity (MPEP 2106.05(g)). Therefore, claims 2, 4 and 5 are rejected for reasons similar to those for claim 1. Claims 6-7, 9-12, 14-17 and 19-20 are rejected for reasons similar to those for claims 1, 2, 4 and 5. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-2, 4-7, 9-12, 14-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pub. No.: US 20160063218 A1 to Nachenberg (hereinafter “Nachenberg”) in view of Pub. No.: US 20130333032 A1 to Delatorre et al. (hereinafter “Delatorre”). Regarding Claim 1. Nachenberg discloses A method for protecting a network service device (Nachenberg: Abstract, FIG. 3), the method implemented by a network traffic management system comprising one or more network traffic management apparatuses, client devices, or server devices (Nachenberg: FIG. 2), the method comprising: monitoring traffic data of the network service device (Nachenberg, Para [0003, 0008, 0047, 0033]: … At step 304, one or more of the systems described herein may monitor, at each of a plurality of Internet-traffic chokepoints, Internet traffic for fraudulent uses of brands. For example, monitoring module 106 may, as part of endpoint device 202 and/or intermediate device 204 in FIG. 2, monitor Internet traffic for impersonation scams and attacks … As illustrated in FIG. 
2, endpoint device 202 and intermediate device 204 may be capable of exchanging Internet traffic with illegitimate third-party server 210 and legitimate third-party server 212 via network 208. Illegitimate third-party server 210 generally represents any type or form of computing device or service that may transmit Internet traffic that includes a fraudulent use of a brand and/or that may be used to perform a fraudulent use of a brand. For example, illegitimate third-party server 210 may represent a computing device that hosts a phishing website … the plurality of Internet-traffic chokepoints may include one or more network components of a communications service provider (e.g., one or more network components of an Internet service provider or a wireless-communication service provider), one or more residential gateways (e.g., cable modems or Digital Subscriber Line (DSL) modems), and/or one or more endpoint devices (e.g., laptops, tablets, or smartphones) …); retrieving one or more attributes from the monitored traffic data of the network service device (Nachenberg, Para [0048-0049]: … The systems described herein may monitor various types of Internet traffic. As used herein, the term “Internet traffic” generally refers to any transfer of data (e.g., packets, streams, or files) between two computing devices, especially any transfer of data between two computing devices via the Internet. Using FIG. 2 as an example, the term “Internet traffic” may refer to data transferred between endpoint device 202 or intermediate device 204 and one or more of server 206, illegitimate third-party server 210, and legitimate third-party server 212 via network 208. 
In some examples, monitoring module 106 may monitor web traffic, hypertext transfer protocol (HTTP) traffic, email traffic, and/or Domain Name System (DNS) traffic for fraudulent uses of brands … Monitoring module 106 may monitor Internet traffic for these fraudulent uses of brands using any suitable monitoring technique (e.g., deep packet inspection (DPI), fingerprint inspection, heuristic analysis, reputation analysis, domain name analysis, etc.). …) executing a security enhancing model to detect one or more security anomalies from the retrieved one or more attributes (Nachenberg, Para [0052, 0060]: … As mentioned above, a fraudulent use of a brand may include a fraudulent use of an imitation of a brand. For at least this reason, detecting module 108 may determine what brand is being imitated in the fraudulent use of an imitation of a brand. For example, detecting module 108 may, after detecting a phishing website with the domain name “bankofamerixa.com,” determine that the domain name of the phishing website is imitating a domain name “bankofamerica.com.” … FIG. 4 illustrates an exemplary method 400 that provides an example of how the systems and methods described herein may encourage a holder of a brand to take advantage of the brand-protection offerings provided by a brand-protection service when a fraudulent use of the brand is detected at an Internet-traffic chokepoint managed by the brand-protection service. As shown in FIG. 4, at step 402, subscriber module 104 may, after a fraudulent use of a brand is detected (e.g., as part of step 306 in FIG. 
3), determine that a holder of the brand is not a subscriber of the brand-protection service …), [wherein the security enhancing model is not subscribed by the network service device]; However Nachenberg does not explicitly teach, but Delatorre from same or similar field of endeavor teaches: “wherein the security enhancing model is not subscribed by the network service device (Delatorre, Para [0019, 0038, 0058]: … As illustrated in FIG. 1, computing device 13d is a portable computer (PC). The PC may include a wireless transceiver compatible with the particular type of packet data service offered by the system 10, similar to any of those in the other types of mobile devices 13a-13c. Alternatively, the PC may connect to a handset device, similar to the handset or smart-phone type mobile device or may connect to a separate mobile data (only) device such as an air-card or data device or communicate over WiFi with a mobile hotspot. For discussion of one usage notification message example and the associated exemplary notification service, we assume that the mobile devices 13a, 13b, 13c, and 13d, are all covered under one subscriber account or account holder (AH) … there may be several levels of protection for a computing device against a security attack, ranging from a low level (e.g., where the incident is recorded); mid level (e.g., where the communication is blocked and the account holder notified); to high level (e.g., where the protection algorithms are frequently updated, communication is blocked, the account holder notified, and the computing device is cured of the security problem). The level of protection is based on account holder subscriptions to such protection service. 
In addition, there may be different levels of security threats (e.g., low, medium, or high) … Referring back to step 224, if on the other hand it is determined that the account holder has not subscribed to any protection for a computing device against a security attack (or a high enough level of protection), complete corrective action is not performed. Instead, in step (i.e., step 228), the account holder is sent a notification that a security attack has occurred. For example, this notification may be provided by the CRM 41. Further, the CRM 41 of the respective account is updated to include the security attack information (e.g., date/time of the security attack (and/or each event collectively comprising a security attack), the type of the event, and the specific action that should be taken) …)” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Delatorre into the teachings of Nachenberg, because it discloses that, “if on the other hand it is determined that the account holder has not subscribed to any protection for a computing device against a security attack (or a high enough level of protection), complete corrective action is not performed. Instead, in step (i.e., step 228), the account holder is sent a notification that a security attack has occurred. For example, this notification may be provided by the CRM 41. Further, the CRM 41 of the respective account is updated to include the security attack information (e.g., date/time of the security attack (and/or each event collectively comprising a security attack), the type of the event, and the specific action that should be taken) (Delatorre, Para [0058])”. 
Nachenberg further discloses: “in response to the one or more security anomalies being detected, generating a notification comprising information on at least one of the one or more security anomalies and the security enhancing model (Nachenberg, Para [0060]: … FIG. 4 illustrates an exemplary method 400 that provides an example of how the systems and methods described herein may encourage a holder of a brand to take advantage of the brand-protection offerings provided by a brand-protection service when a fraudulent use of the brand is detected at an Internet-traffic chokepoint managed by the brand-protection service. As shown in FIG. 4, at step 402, subscriber module 104 may, after a fraudulent use of a brand is detected (e.g., as part of step 306 in FIG. 3), determine that a holder of the brand is not a subscriber of the brand-protection service. Then at step 404 subscriber module 104 may notify (e.g., via an email or a website) the holder of the brand of the fraudulent use of the brand and/or the brand-protection offerings provided by the brand-protection service to brand holders. In some examples, subscriber module 104 may explain how the brand is being used without the brand holder's authorization, how many of the brand holder's users are encountering fraudulent uses of the brand, how the brand-protection offerings provided by the brand-protection service may protect the brand holder and/or the brand, and/or how the brand-protection service is currently protecting the brand …); and transmitting the notification to the network service device (Nachenberg, Para [0060]: … Then at step 404 subscriber module 104 may notify (e.g., via an email or a website) the holder of the brand of the fraudulent use of the brand and/or the brand-protection offerings provided by the brand-protection service to brand holders …).”

Regarding Claim 2.
The combination of Nachenberg-Delatorre discloses the method of claim 1, Delatorre further discloses, “wherein the method further comprising: receiving a query associated with the corresponding security enhancing model from the network service device (Delatorre, Para [0055-0056]: … In step 236 the logic engine 42 provides the DCCS 37 tailored message for the computing device with respect to the security attack. For example, the message includes contact information of the affected computing device (e.g., MDN) and the action to be performed on the affected computing device. In step 240, there is secure communication between the DCCS 37 and the computing device. Secure communication has been discussed before and will not be repeated here for brevity. The message sent to the computing device by the DCCS 37 is based on the security attack and the available tools. In one example (step 244), authorization is requested from the computing device (e.g., the account holder or user) to perform the corrective action. When no authorization is received, no corrective action is performed (i.e., step 256). In one example, the lack of authorization is recorded in the CRM server 41 for the respective account. The method continues with monitoring network activity (i.e., step 204) …); generating a reply comprising a recommendation of one or more configurations of the security enhancing model (Delatorre, Para [0055-0056]: … if in step 244 authorization is received from the account holder (or computing device user), in step 248 the DCCS 37 communicates with the computing device to perform the corrective action. In one example, a protection application on the computing device is activated (e.g., virus/malware application). If the protection application is not up-to-date, an update is performed. 
If the appropriate protection application does not exist on the computing device, the DCCS 37 provides the protection application over the secure communication link …); and transmitting the reply to the network service device (Delatorre, Para [0057]: … the authorization step 244 is skipped. Accordingly, the DCCS 37 communicates with the computing device over a secure communication link to perform the corrective action (e.g., to remove the malware) when it receives the message (e.g., instructions) from the logic engine 42 …).” Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further combine the teachings of Delatorre, because it discloses that, “A network based approach helps prevent the propagation of malware. Network traffic is monitored for a computing device security attack. It is determined whether there is a security event using one or more network based security tools. Next, it is determined whether an event pattern of a plurality of security events meets a predetermined criteria. Upon determining that there is a security attack, corrective action is tailored, based on the type of the computing device, the operating system of the computing device, the type of security attack, and/or the available protection tools. A different course of action may be performed depending on whether an account of the computing device includes a security protection service. If there is a security protection service associated with the account, a message is sent over a secure link to the computing device. This message includes the corrective action to cure the computing device from the security attack (Delatorre, Para [0015])”. Regarding Claim 4. 
The combination of Nachenberg-Delatorre discloses the method of claim 1, Delatorre further discloses, “wherein the obtaining the traffic data transmitted from the network service device further comprises: obtaining the traffic data transmitted from the network service device for a predetermined period of time after the network service device being active in the network traffic management system (Delatorre, Para [0031]: … malware is identified based on a degree of correspondence between the data from the computing device sent through the network 21 and one of the signatures in the database. If a predetermined threshold of correspondence is met, it is indicative that there is a strong likelihood that malware is present. A threshold could be a large number of concurrent connections to mail server in a short time frame from a single device where typically you would expect only a few connections over the same period, which would be indicative of a spambot … ).” Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further combine the teachings of Delatorre, because it discloses that, “The logic engine 42 receives event information from the correlation engine 40 and the malware detection server 39 and uses the event information to determine appropriate corrective action for the particular computing device that is subject to each detected security event (Delatorre, Para [0029])”. Regarding Claim 5. 
The combination of Nachenberg-Delatorre discloses the method of claim 1, Delatorre further discloses, “wherein the obtaining the traffic data transmitted from the network service device further comprises: obtaining the traffic data transmitted from the network service device for a predetermined period of time after the security enhancing model being active in the network traffic management system (Delatorre, Para [0031, 0038, 0047]: … The logic engine 42 of system 10 receives event information from the correlation engine 40 and the malware detection server 39. Logic engine 42 uses the event information to determine appropriate corrective action for the particular computing device that is subject to each detected security event. The event information received by the logic engine 42 includes the nature of the security attack, the type of computing device affected and the account holder. For example, the nature of the security attack may include the type of malware or form of social engineering attack. The type of device may include the platform (e.g., tablet, pc, smart phone, etc.) and operating system (e.g., type and version) … It will be understood that other communication means between the DCCS and the computing device (e.g., 13d) can be used as well, such as the polling method, where the application simply checks in with the service at specific intervals. 
For example a memory of the computing device has a malware protection program stored in it that periodically samples the DCCS 37 for security updates …).” Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further combine the teachings of Delatorre, because it discloses that, “The logic engine 42 receives event information from the correlation engine 40 and the malware detection server 39 and uses the event information to determine appropriate corrective action for the particular computing device that is subject to each detected security event (Delatorre, Para [0029])”.

Regarding Claims 6, 11 and 16. These claims contain all the same or similar limitations as claim 1, hence they are similarly rejected as claim 1.

*** Note: Nachenberg also discloses “An apparatus for protecting a network service device, comprising memory comprising programmed instructions stored in the memory and one or more processors configured to be capable of executing the programmed instructions stored in the memory (Nachenberg, Para [0010])”; “A non-transitory computer readable medium having stored thereon instructions for protecting a network service device, comprising executable code which when executed by one or more processors (Nachenberg, Para [0010])”; “A network traffic management system, comprising one or more traffic management apparatuses, server devices, or client devices, the network traffic management system comprising memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions (Nachenberg, FIG. 2)”.

Regarding Claims 7, 12 and 17. These claims contain all the same or similar limitations as claim 2, hence they are similarly rejected as claim 2.

Regarding Claims 9, 14 and 19. These claims contain all the same or similar limitations as claim 4, hence they are similarly rejected as claim 4.
Regarding Claims 10, 15 and 20. These claims contain all the same or similar limitations as claim 5, hence they are similarly rejected as claim 5. Claims 3, 8, 13 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pub. No.: US 20160063218 A1 to Nachenberg (hereinafter “Nachenberg”) in view of Pub. No.: US 20130333032 A1 to Delatorre et al. (hereinafter “Delatorre”), as applied to claim 1 above, and further in view of Pub. No.: US 20220060491 A1 to Achleitner et al. (hereinafter “Achleitner”). Regarding Claim 3. The combination of Nachenberg-Delatorre discloses the method of claim 1, Delatorre further discloses, “wherein the security enhancing model is [a machine learning based model] trained with data representing a type of attack (Delatorre, Para [0027]: … The malware detection server 39 may include one or more security tools. For example, server 39 may include an Intrusion Detection System (IDS), which monitors packet data communication traffic through the mobile traffic network 21 essentially looking for any abnormal network traffic pattern and reports any detected suspicious packets. An IDS does not stop malicious attacks. Instead, it reports such attacks (e.g., due to malware running on a computing device) to administrators via conventional methods, such as email, text messages, or graphical displays. Server 39 may include an Intrusion Detection and Protection (IDP), which is more proactive in stopping malicious attacks. For example, an IDP is intelligent in that it is able to learn and adapt. In this regard, an IDP database may be updated at any point to protect against the latest security threats proactively. 
An IDP can respond to a detected attack by stopping the attack itself, changing the security environment (e.g., reconfiguring a firewall), or changing the attack's content …), and generating the notification comprising information on at least one of the one or more security anomalies and the security enhancing model (Delatorre, Para [0029]: … System 10 includes a Device Command and Control Service (DCCS) (i.e., server 37) that provides messages and/or instructions to a computing device (e.g., 13d) after it receives a notification from the logic engine 42 in connection with a security attack regarding a computing device (e.g., 13d). The logic engine 42 receives event information from the correlation engine 40 and the malware detection server 39 and uses the event information to determine appropriate corrective action for the particular computing device that is subject to each detected security event …) comprising: identifying a predetermined number of security anomalies with a security risk above an upper threshold from the one or more security anomalies (Delatorre, Para [0029]: … In another exemplary approach, malware is identified based on a degree of correspondence between the data from the computing device sent through the network 21 and one of the signatures in the database. If a predetermined threshold of correspondence is met, it is indicative that there is a strong likelihood that malware is present. A threshold could be a large number of concurrent connections to mail server in a short time frame from a single device where typically you would expect only a few connections over the same period, which would be indicative of a spambot. Additionally, a threshold could be the amount of network traffic a device generates over a short period of time after visiting a sight that is known to house malware. 
Thus, the identification of malware may be based on a probabilistic determination that a correspondence between the data of the computing device on the mobile traffic network 21 is malware infected …); and generating the notification comprising information on the predetermined number of security anomalies (Delatorre, Para [0029, 0033]: … Accordingly, since the signature of the information (i.e., nefarious website) is consistent with a list of blacklisted websites stored in a database of the server 39 (i.e., predefined criterion), this is considered a single security event. The user may be warned of the security risk but may still be allowed to access the site. This event may be recorded and added to the respective client's records for further evaluation and/or notification of the client …).” Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further combine the teachings of Delatorre, because it discloses that, “The logic engine 42 receives event information from the correlation engine 40 and the malware detection server 39 and uses the event information to determine appropriate corrective action for the particular computing device that is subject to each detected security event (Delatorre, Para [0029])”. However the combination of Nachenberg-Delatorre does not explicitly teach, but Achleitner from same or similar field of endeavor teaches: “a machine learning based model trained with data representing a type of attack (Achleitner, Para [0026-0029], FIG. 4: … An anomaly detection model 107 receives the feature vectors 122 and classifies the feature vectors 122 as anomalous or non-anomalous/normal. The anomaly detection model 107 communicates the classification with either an unclassified traffic indicator 108 or malicious traffic indicator 110 to a device/component 112. 
The anomaly detection model 107 is typically a one-class classifier that is trained on feature vectors for known malicious traffic samples to detect malicious traffic samples as “normal” or “non-anomalous” and everything else as “anomalous,” i.e. it is trained to detect the distribution of feature vectors for known malicious traffic and distinguish them from outlier samples that may or may not be malicious … A security product 201 communicates known malicious traffic samples 200, or possibly just the payloads thereof, and corresponding attack (malware type/family) identifiers 216 to a malicious traffic feature generator 203. The known malicious traffic samples 200 are session based (i.e., represent traffic from a network session). Each of the attack identifiers 216 identifies and possibly provides a descriptor for one or more of the per session malicious traffic samples 200 (e.g., an attack name and description of the attack, malware family type, etc.). After an anomaly detection model trainer 205 trains an anomaly detection model with the known malicious traffic samples 200 (or a subset thereof) and corresponding attack identifiers 216 to classify malicious traffic features as normal, the trained malicious traffic detection model can be deployed to a cloud service 207 …)”. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Achleitner into the combination of Nachenberg-Delatorre, because it discloses that, “the anomaly detection model trainer tests the retrained anomaly detection model on test feature vectors omitting the entry corresponding to the current malicious traffic feature. Thus, correctly classified software samples that were previously false positive software samples reflect a direct improvement in the false positive rate of the anomaly detection model due to removal of the current malicious traffic feature (Achleitner, Para [0086])”. 
Regarding Claims 8, 13 and 18: these claims contain the same or similar limitations as claim 3, hence they are rejected on the same grounds as claim 3.

Pertinent Prior Art

The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US 20150039513 A1; Adjaoute: Adjaoute discloses a real-time fraud prevention system that enables merchants and commercial organizations on-line to assess and protect themselves from high-risk users. A centralized database is configured to build and store dossiers of user devices and behaviors collected from subscriber websites in real-time. Real, low-risk users have webpage click navigation behaviors that are assumed to be very different from those of fraudsters. Individual user devices are distinguished from others by hundreds of points of user-device configuration data each independently maintains. A client agent provokes user devices to volunteer configuration data when a user visits respective webpages at independent websites. A collection of comprehensive dossiers of user devices is organized by their identifying information and used in calculating a fraud score in real-time. Each corresponding website is thereby assisted in deciding whether to allow a proposed transaction to be concluded with the particular user and their device.

US 20180191577 A1; HERCZOG: HERCZOG discloses a monitoring apparatus for a network of communications-enabled devices that includes a processing resource arranged to support a data selection module and a data analysis module. The apparatus also includes a communications interface operably coupled to the processing resource and arranged to receive a plurality of data fragments. The plurality of data fragments each bear a respective device identifier and associated observation data, the plurality of data fragments respectively including a plurality of unique device identifiers. The data selection module is arranged to read the respective identifiers of the plurality of data fragments and the associated observation data and to identify a set of the plurality of data fragments generated as a result of a common device characteristic. The data analysis module is arranged to analyse the set of the plurality of data fragments identified by the data selection module in order to detect anomalous device activity.

US 20180115563 A1; LUEKEN et al.: LUEKEN discloses measures for mitigation of malicious software in a mobile communications network. An example measure includes monitoring network traffic on at least one network interface of the mobile communications network, detecting a network traffic anomaly caused by the malicious software running on a communication endpoint, identifying the communication endpoint using a device identifier associated with the communication endpoint, and causing manipulation of a traffic handling of the network traffic of the communication endpoint based on the device identifier.

US 20190036954 A1; GARCIA et al.: GARCIA discloses a method, computer system, and computer program product that generates a whitelist for each subject device in a field area network (FAN). The whitelist includes one or more whitelist entries corresponding to one or more peer devices in the same FAN communicating with the subject device. Each whitelist entry includes one or more attribute values expected in respective traffic between the subject device and each peer device that is represented by a respective whitelist entry. The traffic in the FAN is monitored at one or more points of the FAN for anomalies by use of the whitelist.

US 20230254328 A1; COSTANTE: COSTANTE discloses a method of detecting anomalous behaviour in data traffic that includes parsing data traffic to extract protocol field values of a protocol message of data traffic and deriving attribute values of attributes of one of the first host, the second host, and the link. The method includes selecting a model relating to the one of the first host, the second host, and the link. The model includes at least one semantic attribute expressing a semantic meaning for the first host, the second host, or the link. The method further includes updating the selected model with the derived attribute values, assessing whether the updated model complies with a set of attribute-based policies defining a security constraint of the data communication network, and generating an alert signal in case the attribute-based policies indicate that the updated model violates at least one of the attribute-based policies.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHABUB S AHMED, whose telephone number is (571) 272-0364. The examiner can normally be reached 9AM-5PM EST, M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kambiz Zand, can be reached at (571) 272-3811. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MAHABUB S AHMED/
Examiner, Art Unit 2434

/NOURA ZOUBAIR/
Primary Examiner, Art Unit 2434

Prosecution Timeline

Jun 28, 2024
Application Filed
Jan 21, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591864
METHODS AND SYSTEMS FOR THE EFFICIENT TRANSFER OF ENTITIES ON A BLOCKCHAIN
2y 5m to grant Granted Mar 31, 2026
Patent 12574393
CYBER SECURITY SYSTEM UTILIZING INTERACTIONS BETWEEN DETECTED AND HYPOTHESIZE CYBER-INCIDENTS
2y 5m to grant Granted Mar 10, 2026
Patent 12574370
VERIFYING PARTY IDENTITIES FOR SECURE TRANSACTIONS
2y 5m to grant Granted Mar 10, 2026
Patent 12563053
METHODS AND SYSTEMS FOR FRAUD DETECTION USING RELATIVE MOVEMENT OF FACIAL FEATURES
2y 5m to grant Granted Feb 24, 2026
Patent 12542662
APPARATUS AND METHOD FOR FEDERATED LEARNING BASED ON GROUP KEY
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
93%
With Interview (+7.8%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 289 resolved cases by this examiner. Grant probability derived from career allow rate.
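As a sanity check on the figures above, here is a minimal recomputation in Python. It assumes the displayed grant probability is the career allow rate (247 granted of 289 resolved, which rounds up to the displayed 86%) and that the +7.8% interview lift is applied additively; both assumptions are mine, not documented by the dashboard.

```python
granted, resolved = 247, 289   # career totals shown in Examiner Intelligence
interview_lift = 0.078         # +7.8% lift observed in interviewed cases

career_allow_rate = granted / resolved
with_interview = career_allow_rate + interview_lift  # additive-lift assumption

print(f"career allow rate: {career_allow_rate:.1%}")  # 85.5%, displayed as 86%
print(f"with interview:    {with_interview:.1%}")     # 93.3%, displayed as 93%
```

The raw rate (85.5%) and lifted rate (93.3%) round to the dashboard's 86% and 93%, so the displayed numbers are consistent with this simple model.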
