Prosecution Insights
Last updated: April 19, 2026
Application No. 18/126,827

AUTOMATED EMAIL ACCOUNT COMPROMISE DETECTION AND REMEDIATION

Non-Final OA §103
Filed
Mar 27, 2023
Examiner
ABDULLAH, SAAD AHMAD
Art Unit
2431
Tech Center
2400 — Computer Networks
Assignee
Cisco Technology Inc.
OA Round
4 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 4-5
Time to Grant: 3y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (54 granted / 70 resolved; +19.1% vs TC avg), above average
Interview Lift: +35.1% across resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 42 applications currently pending
Career History: 112 total applications across all art units

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 61.6% (+21.6% vs TC avg)
§102: 19.6% (-20.4% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 70 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is responsive to the Remarks filed on 10/31/2025 in application 18/126,827. Claims 1-20 have been examined and are pending in this application. This is the application's second Non-Final Office Action.

Response to Arguments

Applicant's arguments, see pages 9-11, filed 10/31/2025, with respect to the rejection of claims 1, 9, and 17 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of newly found prior art references.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 7, 9, 15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Jeyakumar (US 20200344251 A1) in view of Syme (US 10528731 B1).
Regarding Claim 1, Jeyakumar discloses: A method comprising:

scanning, by an adaptive pre-filter, electronic mail messages (emails) within an organization, wherein the emails originate within the organization (Jeyakumar ¶0065 and ¶0095: teaches scanning electronic mail messages within an organization by disclosing that monitoring module 308 monitors incoming emails at a network maintained by the customer (enterprise), and expressly defines "internal emails" as emails sent from one employee to another employee within the enterprise);

analyzing, by the adaptive pre-filter, the emails with respect to known fraudulent email practices (Jeyakumar ¶019; ¶097: teaches analyzing emails with respect to known fraudulent email practices by disclosing analysis modules configured for specific known attack types (e.g., phishing, impersonation, payroll fraud, credential theft, ransom schemes), including hierarchical ML-based classification of attack vectors and goals, and applying probabilities based on models of known attack types and heuristics such as blacklisting and whitelisting), wherein the adaptive pre-filter is configured with fuzzy logic to detect suspicious traits of emails, and wherein the suspicious traits are related to one or more of phishing traits, scam traits, business email compromise (BEC) traits, or malware traits;

based at least in part on the analyzing, determining, by the adaptive pre-filter, whether an email is a questionable email or a safe email (Jeyakumar ¶0076; ¶0113; FIG. 4, steps 406-407: teaches determining whether an email is questionable or safe by analyzing incoming emails using attack detectors and ML models, flagging the email as a possible attack if the detectors indicate one, and otherwise allowing non-malicious emails to proceed, thereby determining whether the email represents a security threat or not);

based at least in part on determining an email is a questionable email, forwarding, by the adaptive pre-filter, the questionable email to a retrospective behavior engine for further analysis, wherein the retrospective behavior engine is separate from the adaptive pre-filter (Jeyakumar ¶0070; ¶0096-¶0097; FIG. 2-FIG. 3: teaches that a first model 204 determines whether an incoming email can be verified as non-malicious and, when it cannot, treats the email as "possibly malicious" and applies a second model 208 and analysis module 312 for further threat analysis, the analysis module being a distinct component of the threat detection platform; this discloses forwarding, based on determining the email is questionable, the email from an adaptive pre-filter to a separate retrospective analysis engine for further evaluation);

analyzing, by the retrospective behavior engine, the questionable email with respect to one or more historical traits to provide a feature set (Jeyakumar ¶0084; ¶0090-¶0091; ¶0096; FIG. 3: teaches analyzing incoming emails using historical traits derived from prior communications, including customer behavior norms, attack history, email usage patterns, and entity profiles generated from historical email data; the analysis module 312 applies these historical attributes to the email, thereby analyzing the questionable email with respect to one or more historical traits to generate attributes/features used as inputs to the machine learning models, which constitute a feature set);

providing the feature set to a verdict correlation engine, wherein the verdict correlation engine is separate from the adaptive pre-filter and the retrospective behavior engine; training, using the feature set, a machine learning model (Jeyakumar ¶0096-¶0097; ¶0105-¶0106: teaches that attributes and behavioral traits derived from historical email data by the analysis module 312 are used as inputs to machine learning models that classify emails into attack types, with the models being trained by a separate training module 306 using historical communications and labeled datasets);

forwarding, by the retrospective behavior engine, the questionable email to the verdict correlation engine for further analysis (Jeyakumar ¶0096-¶0097; FIG. 3; FIG. 5: teaches that the analysis module 312 analyzes incoming emails and derives attributes that are applied to one or more machine learning models that classify the email into specific attack types, thereby passing the email and/or its derived attributes from the retrospective analysis component to a separate ML-based classification engine for further analysis);

determining, by the verdict correlation engine using the machine learning model, that the questionable email belongs in a class of emails from multiple classes of emails (Jeyakumar ¶0097; ¶0106; FIG. 5: teaches that one or more machine learning models classify incoming emails into different attack types, including impersonation techniques, attack vectors, and specific fraud categories, with different ML models developed for different known types of attacks, thereby determining that the questionable email belongs to a class of emails from multiple possible classes);

and based at least in part on the class, performing, by the verdict correlation engine, a responsive action (Jeyakumar ¶0097; ¶0105-¶0106: teaches that machine learning models classify incoming emails into specific attack types, and that a remediation engine 314 performs responsive actions, including flagging, alerting, or providing threat detection results to customers based on the classification, thereby performing a responsive action based at least in part on the determined class).

Jeyakumar teaches determining whether an email is potentially malicious using machine learning models that detect various attack types including impersonation, attack vectors, payroll fraud, wire fraud, and credential theft, thereby detecting phishing- and BEC-related traits in incoming emails. However, Jeyakumar does not expressly disclose that the adaptive pre-filter is configured using fuzzy logic.

Syme teaches using fuzzy hashing techniques to detect phishing attacks and malicious code by comparing similar hashes and flagging suspicious files prior to dissemination (Column 3, Lines 26-36), thereby disclosing fuzzy logic-based detection of phishing and malware traits.
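The fuzzy-hashing technique attributed to Syme flags content whose hash is merely similar to a known-bad hash rather than identical to it. The sketch below illustrates that flagging logic in Python, using the standard library's difflib similarity ratio as a stand-in for a real context-triggered piecewise hash such as ssdeep or TLSH; the signature strings and the 0.6 threshold are illustrative assumptions, not values from either reference:

```python
from difflib import SequenceMatcher

# Hypothetical known-bad phishing signatures (illustrative only).
KNOWN_BAD_SIGNATURES = [
    "click here to verify your payroll account immediately",
    "your mailbox has exceeded its quota, re-login to restore",
]

def fuzzy_match_score(text: str, signature: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, text.lower(), signature.lower()).ratio()

def is_suspicious(body: str, threshold: float = 0.6) -> bool:
    """Flag the email if its body is 'close enough' to any known-bad signature,
    rather than requiring an exact match."""
    return any(fuzzy_match_score(body, sig) >= threshold
               for sig in KNOWN_BAD_SIGNATURES)

print(is_suspicious("Click here to verify your payroll account now!"))  # near-match
print(is_suspicious("Agenda for Tuesday's standup"))                    # unrelated
```

A real pre-filter would typically compare digests of bodies and attachments rather than raw text, but the decision rule is the same: a tunable similarity threshold in place of an exact match.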
It would have been obvious to one of ordinary skill in the art to modify Jeyakumar's adaptive pre-filter to incorporate the fuzzy hashing/fuzzy logic detection techniques of Syme in order to improve the detection of phishing and malware-related suspicious traits at an earlier filtering stage, because both references are directed to improving email security by identifying malicious or phishing content using probabilistic or similarity-based detection techniques. The combination merely substitutes one known fuzzy-based detection mechanism for another known detection mechanism to enhance threat identification accuracy, yielding predictable results.

Regarding Claim 7, Jeyakumar discloses: The method of claim 1, wherein analyzing, by the retrospective behavior engine, the questionable email with respect to the one or more historical traits to provide the feature set comprises one or more of: analyzing uniform resource locators (URLs) in the questionable email for one or more of (i) anomalies in security certificates, (ii) whether a URL belongs to a cloud service, or (iii) whether the URL contains URL- or base64-encoded components of a URL; analyzing Internet Protocol (IP) addresses in the questionable email and one or more of (i) comparing the IP addresses with historical IP addresses, (ii) checking whether an IP address is included on a list of blocked IP addresses, or (iii) checking whether the IP address is located in a suspicious country; comparing one or more recipients with historical recipients; or analyzing a historical email-sending behavior of a sender of the questionable email (Jeyakumar ¶0084; ¶0090-¶0091; FIG. 3: teaches generating entity profiles based on historical email data, including email usage patterns, communication frequency, identities of entities frequently communicated with, and prior communication behavior, and applying these historical behavioral traits when analyzing incoming emails).
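The historical traits enumerated in claim 7 are concrete, computable signals. The sketch below shows how a few of them (base64-encoded URL components, blocklisted IPs, recipients outside the sender's history) could be folded into a feature set; the field names, blocklist, and historical-recipient set are hypothetical illustrations, not data from the application:

```python
import base64
import binascii
import re
from urllib.parse import parse_qs, urlparse

BLOCKED_IPS = {"203.0.113.7"}  # illustrative blocklist (TEST-NET-3 address)
HISTORICAL_RECIPIENTS = {"alice@example.com", "bob@example.com"}

def has_base64_component(url: str) -> bool:
    """Heuristic: flag any query-string value that decodes cleanly as base64."""
    for values in parse_qs(urlparse(url).query).values():
        for v in values:
            if len(v) >= 8 and re.fullmatch(r"[A-Za-z0-9+/=]+", v):
                try:
                    base64.b64decode(v, validate=True)
                    return True
                except (binascii.Error, ValueError):
                    pass
    return False

def extract_features(urls, ips, recipients):
    """Build a flat feature dict of the claim-7-style historical-trait signals."""
    return {
        "url_has_base64": any(has_base64_component(u) for u in urls),
        "ip_blocklisted": any(ip in BLOCKED_IPS for ip in ips),
        "new_recipients": sorted(set(recipients) - HISTORICAL_RECIPIENTS),
    }

features = extract_features(
    urls=["https://evil.test/login?next=aHR0cHM6Ly9waGlzaC50ZXN0"],
    ips=["203.0.113.7"],
    recipients=["alice@example.com", "mallory@evil.test"],
)
```

A feature dict like this is the kind of input a downstream classifier would consume; certificate anomalies and geolocation checks would be additional keys built the same way.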
Regarding Claim 9: Claim 9 is directed to a system corresponding to the computer-implemented method in claim 1. Claim 9 is similar in scope to claim 1 and is therefore rejected under similar rationale.

Regarding Claim 15: Claim 15 is directed to a system corresponding to the computer-implemented method in claim 7. Claim 15 is similar in scope to claim 7 and is therefore rejected under similar rationale.

Regarding Claim 17: Claim 17 is directed to non-transitory computer-readable media corresponding to the computer-implemented method in claim 1. Claim 17 is similar in scope to claim 1 and is therefore rejected under similar rationale.

Regarding Claim 18: Claim 18 is directed to non-transitory computer-readable media corresponding to the computer-implemented method in claim 7. Claim 18 is similar in scope to claim 7 and is therefore rejected under similar rationale.

Claims 2-4 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Jeyakumar (US 20200344251 A1), in view of Syme (US 10528731 B1) as applied to claims 1 and 9 above, and in further view of Singh (US 20230164180 A1).

Regarding Claim 2: Jeyakumar and Syme combined teach using machine learning and fuzzy techniques to analyze email content and historical behavioral traits in order to detect phishing, fraud, and other malicious email attacks. Jeyakumar and Syme do not disclose the limitation "wherein the multiple classes comprise (i) benign, (ii) suspicious, or (iii) malicious." However, in an analogous art, Singh discloses: The method of claim 1, wherein the multiple classes comprise (i) benign, (ii) suspicious, or (iii) malicious (Singh ¶84: teaches a multi-class classification system where emails are evaluated and confirmed as being either benign, suspicious, or malicious).
Given the teachings of Singh, a person having ordinary skill in the art before the effective filing date would have found it obvious to modify the teachings of Jeyakumar and Syme by classifying emails into multiple categories such as benign, suspicious, or malicious. Adopting this classification would have been obvious to a POSITA because it improves email classification and can facilitate downstream remediation actions (Singh ¶84).

Regarding Claim 3: Jeyakumar discloses: The method of claim 2, wherein if the questionable email is benign, the responsive action comprises deeming an originating email address of the questionable email as safe (Jeyakumar ¶109, 146, 181: teaches that a rules engine includes whitelist logic used during threat detection to determine when emails meet benign criteria. Emails are "hydrated" with whitelist signals, and benign classifications can result in marking the originating sender as safe. Additionally, historical analysis of email attributes such as sender addresses enables the system to treat non-malicious entities as safe, aligning with deeming the originating email address safe.).

Regarding Claim 4: Jeyakumar discloses: The method of claim 2, wherein if the questionable email is deemed suspicious, the responsive action comprises forwarding an originating email address of the questionable email to a security platform for monitoring and rule enforcement (Jeyakumar ¶149, 151, 205: teaches that when an email is identified as suspicious, the originating email address is extracted as an indicator of compromise (IOC). These IOCs, including sender email addresses, are forwarded to a central threat intelligence system where they are stored as signatures and used for monitoring future threats. The system enables integration with external security products such as SOAR tools and firewalls, thus supporting rule enforcement based on the originating email address.).
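Claims 2 through 5 tie each class in the benign/suspicious/malicious taxonomy to its own responsive action. A minimal dispatch sketch; the returned action strings are illustrative placeholders for the safe-listing, monitoring, and suspension/CASB-blocking behaviors the claims recite:

```python
from enum import Enum

class Verdict(Enum):
    BENIGN = "benign"
    SUSPICIOUS = "suspicious"
    MALICIOUS = "malicious"

def respond(verdict: Verdict, sender: str) -> str:
    """Map a classification to a responsive action of the kind the claims describe."""
    if verdict is Verdict.BENIGN:
        # Claim 3: deem the originating address safe.
        return f"deem {sender} safe"
    if verdict is Verdict.SUSPICIOUS:
        # Claim 4: hand the address to a security platform for monitoring.
        return f"forward {sender} to security platform for monitoring"
    # Claim 5: suspend the account and block the address via a CASB.
    return f"suspend account and block {sender} via CASB"
```

Keeping the verdict-to-action mapping in one dispatch function makes it easy to extend with further remediation steps, such as the recipient-mailbox removal added by claim 6.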
Regarding Claim 10: Claim 10 is directed to a system corresponding to the computer-implemented method in claim 2. Claim 10 is similar in scope to claim 2 and is therefore rejected under similar rationale.

Regarding Claim 11: Claim 11 is directed to a system corresponding to the computer-implemented method in claim 3. Claim 11 is similar in scope to claim 3 and is therefore rejected under similar rationale.

Regarding Claim 12: Claim 12 is directed to a system corresponding to the computer-implemented method in claim 4. Claim 12 is similar in scope to claim 4 and is therefore rejected under similar rationale.

Claims 5, 6, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Jeyakumar (US 20200344251 A1), in view of Syme (US 10528731 B1) and Singh (US 20230164180 A1) as applied to claim 2 above, and in further view of Moeglein (US 2023/0025323 A1).

Regarding Claim 5: Jeyakumar, Syme, and Singh combined teach using machine learning and fuzzy techniques to analyze email content and historical behavioral traits in order to detect phishing, fraud, and other malicious email attacks.
Jeyakumar, Syme, and Singh do not disclose the following: "wherein if the questionable email is deemed malicious, the responsive action comprises forwarding an originating email address of the questionable email to a security platform that forwards the originating email address to (i) an account directory that suspends an account of the originating email address and (ii) a cloud access security broker (CASB) that blocks the originating email address."

However, in an analogous art, Moeglein discloses: The method of claim 2, wherein if the questionable email is deemed malicious, the responsive action comprises forwarding an originating email address of the questionable email to a security platform that forwards the originating email address to (i) an account directory that suspends an account of the originating email address and (ii) a cloud access security broker (CASB) that blocks the originating email address (Moeglein ¶268: describes reporting suspicious entities to a centralized server; the centralized server performs further action by suspending flagged accounts and blocking associated contact information.).

Given the teaching of Moeglein, a person having ordinary skill in the art before the effective filing date of the claimed invention would have recognized the desirability of modifying the teachings of Jeyakumar, Syme, and Singh by incorporating a method for taking responsive actions to mitigate security threats posed by malicious email accounts. Moeglein describes a system in which an app uses ratings to identify likely bots and report them to a centralized server. The server can flag accounts as suspected bots and take actions such as suspending or removing the bot's account. Additionally, the system can block contact information associated with these flagged accounts to prevent further malicious activity.
It would have been obvious to apply this approach to a method where a questionable email, once deemed malicious, triggers responsive actions to suspend the account associated with the originating email address and to block the email address (Moeglein ¶268).

Regarding Claim 6: Jeyakumar, Syme, Singh, and Moeglein combined teach using machine learning and fuzzy techniques to analyze email content and historical behavioral traits in order to detect phishing, fraud, and other malicious email attacks. Singh further teaches: The method of claim 5, wherein the responsive action further comprises removing the questionable email from any email accounts that received the questionable email (Singh ¶84: teaches a responsive action that includes removing a questionable email once it has been determined to be malicious.).

Given the teachings of Singh, a person having ordinary skill in the art before the effective filing date would have found it obvious to modify the teachings of Jeyakumar, Syme, and Moeglein by implementing a system that detects compromised email accounts to include a responsive action that removes questionable emails from recipient inboxes. Singh explicitly discloses that upon detecting an account compromise, the system enables deletion or quarantine of malicious emails from the mailboxes of their recipients. Removing such emails is a predictable and well-known remediation practice to limit the spread of harmful content and reduce risk to the organization. Therefore, implementing this responsive action as part of an automated threat response would have been an obvious improvement to enhance email system security (Singh ¶84).

Regarding Claim 13: Claim 13 is directed to a method corresponding to the computer-implemented method in claim 5. Claim 13 is similar in scope to claim 5 and is therefore rejected under similar rationale.

Regarding Claim 14: Claim 14 is directed to a method corresponding to the computer-implemented method in claim 6.
Claim 14 is similar in scope to claim 6 and is therefore rejected under similar rationale.

Claims 8, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Jeyakumar (US 20200344251 A1), in view of Syme (US 10528731 B1) as applied to claim 1 above, and in further view of Pratt (US 10673880 B1).

Regarding Claim 8: Jeyakumar and Syme combined teach using machine learning and fuzzy techniques to analyze email content and historical behavioral traits in order to detect phishing, fraud, and other malicious email attacks. Jeyakumar and Syme do not disclose that analyzing, by the retrospective behavior engine, the questionable email with respect to the one or more historical traits to provide the feature set comprises one or more of: analyzing operating system audit log events; or analyzing virtual private network (VPN) logs.

Pratt teaches ingesting and parsing VPN connection activity and authentication events, and applying behavioral baselining and anomaly detection models to those VPN-related events to detect suspicious activity, thereby analyzing VPN logs as part of behavioral threat detection (Column 25, Line 45 - Column 26, Line 36 and Column 29, Lines 38-50).

It would have been obvious to one of ordinary skill in the art to modify the behavioral threat detection systems of Jeyakumar and Syme to further analyze VPN logs as taught by Pratt, in order to enhance detection of suspicious activity using additional historical behavioral telemetry, because each reference is directed to improving cybersecurity threat detection through behavioral analysis and machine learning applied to security-relevant data sources. The combination merely incorporates a known log-based behavioral analysis technique into an existing ML-based threat detection framework to improve accuracy and coverage, yielding predictable results.

Regarding Claim 16: Claim 16 is directed to a method corresponding to the computer-implemented method in claim 8.
Claim 16 is similar in scope to claim 8 and is therefore rejected under similar rationale.

Regarding Claim 19: Claim 19 is directed to a method corresponding to the computer-implemented method in claim 8. Claim 19 is similar in scope to claim 8 and is therefore rejected under similar rationale.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Jeyakumar (US 20200344251 A1), in view of Syme (US 10528731 B1) as applied to claim 17 above, and in further view of Singh (US 20230164180 A1) and Moeglein (US 2023/0025323 A1).

Regarding Claim 20: Jeyakumar, Syme, Singh, and Moeglein combined teach using machine learning and fuzzy techniques to analyze email content and historical behavioral traits in order to detect phishing, fraud, and other malicious email attacks.

Singh further teaches: The one or more non-transitory computer-readable media of claim 17, wherein: the multiple classes comprise (i) benign, (ii) suspicious, or (iii) malicious (Singh ¶84: teaches a multi-class classification system where emails are evaluated and confirmed as being either benign, suspicious, or malicious) … and if the questionable email is deemed malicious, the responsive action comprises removing the questionable email from any email accounts that received the questionable email (Singh ¶84: teaches a responsive action that includes removing a questionable email once it has been determined to be malicious.).

Given the teachings of Singh, a person having ordinary skill in the art before the effective filing date would have found it obvious to modify the teachings of Jeyakumar, Syme, and Moeglein by implementing a system that detects compromised email accounts to include a responsive action that removes questionable emails from recipient inboxes. Singh explicitly discloses that upon detecting an account compromise, the system enables deletion or quarantine of malicious emails from the mailboxes of their recipients.
Removing such emails is a predictable and well-known remediation practice to limit the spread of harmful content and reduce risk to the organization. Singh further discloses classifying emails into multiple categories such as benign, suspicious, or malicious. Adopting this classification would have been obvious to a POSITA because it improves email classification and can facilitate downstream remediation actions (Singh ¶84).

Jeyakumar further discloses: if the questionable email is benign, the responsive action comprises deeming an originating email address of the questionable email as safe (Jeyakumar ¶109, 146, 181: teaches that a rules engine includes whitelist logic used during threat detection to determine when emails meet benign criteria. Emails are "hydrated" with whitelist signals, and benign classifications can result in marking the originating sender as safe. Additionally, historical analysis of email attributes such as sender addresses enables the system to treat non-malicious entities as safe, aligning with deeming the originating email address safe.); and if the questionable email is deemed suspicious, the responsive action comprises forwarding the originating email address to a security platform for monitoring and rule enforcement (Jeyakumar ¶149, 151, 205: teaches that when an email is identified as suspicious, the originating email address is extracted as an indicator of compromise (IOC). These IOCs, including sender email addresses, are forwarded to a central threat intelligence system where they are stored as signatures and used for monitoring future threats. The system enables integration with external security products such as SOAR tools and firewalls, thus supporting rule enforcement based on the originating email address.).
Moeglein discloses: and forwarding the originating email address to a security platform that forwards the originating email address to (i) an account directory that suspends an account of the originating email address and (ii) a cloud access security broker (CASB) that blocks the originating email address (Moeglein ¶268: describes reporting suspicious entities to a centralized server; the centralized server performs further action by suspending flagged accounts and blocking associated contact information.).

Given the teaching of Moeglein, a person having ordinary skill in the art before the effective filing date of the claimed invention would have recognized the desirability of modifying the teachings of Jeyakumar, Syme, and Singh by incorporating a method for taking responsive actions to mitigate security threats posed by malicious email accounts. Moeglein describes a system in which an app uses ratings to identify likely bots and report them to a centralized server. The server can flag accounts as suspected bots and take actions such as suspending or removing the bot's account. Additionally, the system can block contact information associated with these flagged accounts to prevent further malicious activity. It would have been obvious to apply this approach to a method where a questionable email, once deemed malicious, triggers responsive actions to suspend the account associated with the originating email address and to block the email address (Moeglein ¶268).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAAD ABDULLAH, whose telephone number is 571-272-1531. The examiner can normally be reached Monday-Friday, 9am-5pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, LYNN FIELD, can be reached at 571-272-2092.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAAD AHMAD ABDULLAH/
Examiner, Art Unit 2431

/SARAH SU/
Primary Examiner, Art Unit 2431

Prosecution Timeline

Mar 27, 2023
Application Filed
Dec 04, 2024
Non-Final Rejection — §103
Feb 05, 2025
Interview Requested
Mar 07, 2025
Response Filed
Mar 07, 2025
Applicant Interview (Telephonic)
Mar 07, 2025
Examiner Interview Summary
Apr 03, 2025
Final Rejection — §103
Jul 09, 2025
Request for Continued Examination
Jul 13, 2025
Response after Non-Final Action
Jul 26, 2025
Non-Final Rejection — §103
Oct 31, 2025
Response Filed
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603895
PACKET METADATA CAPTURE IN A SOFTWARE-DEFINED NETWORK
2y 5m to grant Granted Apr 14, 2026
Patent 12592961
QUANTUM-BASED ADAPTIVE DEEP LEARNING FRAMEWORK FOR SECURING NETWORK FILES
2y 5m to grant Granted Mar 31, 2026
Patent 12580886
Network security gateway onboard an aircraft to connect low and high trust domains of an avionics computing infrastructure
2y 5m to grant Granted Mar 17, 2026
Patent 12554871
SYSTEMS, METHODS, AND COMPUTER-READABLE MEDIA FOR SECURE AND PRIVATE DATA VALUATION AND TRANSFER
2y 5m to grant Granted Feb 17, 2026
Patent 12554832
AUTOMATED LEAST PRIVILEGE ASSIGNMENT
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 77%
With Interview: 99% (+35.1%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 70 resolved cases by this examiner. Grant probability derived from career allow rate.
