Prosecution Insights
Last updated: April 19, 2026
Application No. 18/632,209

COMPREHENSIBLE THREAT DETECTION

Status: Final Rejection (§102)
Filed: Apr 10, 2024
Examiner: DOAN, HUAN V
Art Unit: 2499
Tech Center: 2400 — Computer Networks
Assignee: Cisco Technology Inc.
OA Round: 2 (Final)
Grant Probability: 81% (Favorable)
Projected OA Rounds: 3-4
Est. Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 81% — above average
Career Allow Rate: 81% (228 granted / 283 resolved; +22.6% vs TC avg)
Interview Lift: +42.5% (strong; measured on resolved cases with interview)
Avg Prosecution: 2y 8m
Currently Pending: 7
Total Applications: 290 (across all art units)

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 18.0% (-22.0% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
TC averages are estimates. Based on career data from 283 resolved cases.

Office Action

§102
DETAILED ACTION

1. This office action is in response to the communication filed on 11/13/2025.

2. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

3. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

4. Applicant’s arguments filed on 11/13/2025 have been fully considered but they are not persuasive.

Applicant’s argument: Muddu does not disclose “determining a first mapping between a first endpoint identifier associated with the first modality and a user account associated with an entity; determining a second mapping between a second endpoint identifier associated with the second modality and the user account associated with the entity; determining, based at least in part on the first mapping and the second mapping, that the first abnormal event and the second abnormal event are each associated with a same entity; based at least in part on the first abnormal event and the second abnormal event being associated with the same entity.”

Examiner’s response: The examiner respectfully directs applicant’s attention to Muddu, paras. 214-215, where relationships between entities are discovered and recorded from data/event data associated with data sources; see para. 408, where security threats are identified by correlating the anomalies across the relationships; see paras. 424 and/or 602-606, where anomalous/security events/activities/threats associated with entities are detected based on the relationships, wherein entities include physical computing devices, users, user accounts, and identifiers/identifications associated with entities, and wherein two or more identifiers/identifications are associated with the same entity (same user account and/or same user).

In other words: determining relationships (i.e., a first relationship/mapping and a second relationship/mapping) between physical computing devices’ identifiers (i.e., a first endpoint identifier and a second endpoint identifier) associated with data sources (i.e., a first modality and a second modality) and a user account associated with a user (i.e., an entity); and determining, based on the relationships, that anomalous events/activities are associated with the same entity.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

5. Claim(s) 1-20 is/are rejected under 35 U.S.C. 102(a)(1)/102(a)(2) as being anticipated by Muddu et al. (US 2017/0134415 A1, hereafter Muddu).

Regarding claim(s) 1, 9, and 18: Muddu discloses a system comprising: one or more processors; and one or more non-transitory computer-readable media storing instructions that, when executed by the one or more processors (see fig. 1 and para. 139, where a security platform comprises one or more computers), cause the system to perform operations comprising: receiving telemetry data associated with at least a first modality and a second modality, the second modality being different from the first modality; detecting, in the telemetry data, a first abnormal event and a second abnormal event associated with security incidents, the first abnormal event associated with the first modality and the second abnormal event associated with the second modality (see fig. 4 and paras. 135-137, 147-148, 163, where data/event data/machine data (i.e., telemetry data) is received from various data sources (i.e., modalities, including a first modality and a second modality), and events/activities are derived from the data and analyzed to detect anomalous events/activities (i.e., a first abnormal event and a second abnormal event) associated with threats/attacks/malware (i.e., security incidents)); determining a first mapping between a first endpoint identifier associated with the first modality and a user account associated with an entity; determining a second mapping between a second endpoint identifier associated with the second modality and the user account associated with the entity; determining, based at least in part on the first mapping and the second mapping, that the first abnormal event and the second abnormal event are each associated with a same entity; based at least in part on the first abnormal event and the second abnormal event being associated with the same entity, determining that a correlation between the first abnormal event and the second abnormal event is indicative of a security incident (see paras. 214-215, where relationships between entities are discovered and recorded from data/event data associated with data sources; see para. 408, where security threats are identified by correlating the anomalies across the relationships; see paras. 424 and/or 602-606, where anomalous/security events/activities/threats associated with entities are detected based on the relationships, wherein entities include physical computing devices, users, user accounts, and identifiers/identifications associated with entities, and wherein two or more identifiers/identifications are associated with the same entity (same user account and/or same user); in other words, determining relationships (i.e., a first relationship/mapping and a second relationship/mapping) between physical computing devices’ identifiers (i.e., a first endpoint identifier and a second endpoint identifier) associated with data sources (i.e., a first modality and a second modality) and a user account associated with a user (i.e., an entity); determining, based on the relationships, that anomalous events/activities are associated with the same entity; and determining that a correlation between anomalous events/activities is indicative of a security threat); and based at least in part on the correlation, outputting an indication of the security incident (see para. 171).

Regarding claim(s) 2: Muddu discloses: wherein the first modality and the second modality are associated with at least one of: a web proxy log, a file execution log, a firewall log, a network connection log, an endpoint log, an email activity log, or an instant messaging log (see paras. 135, 163, 278, and/or 326).

Regarding claim(s) 3: Muddu discloses: wherein the indication of the security incident includes information associated with the first modality and the second modality (see fig. 4 and paras. 171, 173).

Regarding claim(s) 4: Muddu discloses: determining that the first abnormal event and the second abnormal event originated from the same entity, wherein determining that the first abnormal event and the second abnormal event are each associated with the same entity is based at least in part on the first abnormal event and the second abnormal event having originated from the same entity (see para. 408, where security threats are identified by correlating the anomalies across the relationships; see paras. 424 and/or 602-606, where anomalous/security events/activities/threats associated with entities are detected based on the relationships, wherein entities include physical computing devices, users, user accounts, and identifiers/identifications associated with entities, and wherein two or more identifiers/identifications are associated with the same entity (same user account and/or same user)).

Regarding claim(s) 5: Muddu discloses: wherein the first abnormal event is detected by a first unimodal detector that is specific to the first modality and the second abnormal event is detected by a second unimodal detector that is specific to the second modality (see paras. 158, 161, 163, and/or 193, where a plurality of components/applications/analyzers are used to detect anomalies/threats/attacks/malware from various data sources).

Regarding claim(s) 6: Muddu discloses: wherein determining that the first abnormal event and the second abnormal event are each associated with the same entity comprises determining that the first abnormal event and the second abnormal event are each associated with a same server (see para. 215, where a user is using a machine with an IP address to visit a certain website; see para. 244, where a machine is used as a server; see paras. 604-606, where detected anomalous events/activities are associated with entities, and wherein the identifications are associated with the same entity (i.e., same user account, same server having same IP address)).

Regarding claim(s) 7: Muddu discloses: wherein determining that the first abnormal event and the second abnormal event are each associated with the same entity comprises determining that the first abnormal event and the second abnormal event are each associated with a same user device (see para. 215, where a user is using a machine (i.e., a user device) with an IP address to visit a certain website; see paras. 604-606, where detected anomalous events/activities are associated with entities, and wherein the identifications are associated with the same entity (i.e., same user account, same user device having same IP address)).

Regarding claim(s) 8: Muddu discloses: assigning the first abnormal event and the second abnormal event to a same network address; and determining the correlation between the first abnormal event and the second abnormal event based at least in part on the assigning (see para. 408, where security threats are identified by correlating the anomalies across the relationships; see paras. 604-606, where detected anomalous events/activities are associated with entities, physical computing devices, users, user accounts, IP addresses, and identifiers/identifications associated with entities, and wherein the identifications are associated with the same entity (i.e., same IP address)).

Regarding claim(s) 10: Muddu discloses: determining that the telemetry data associated with the first modality indicates that the same entity is affected by the first abnormal event; and determining that the telemetry data associated with the second modality indicates that the same entity is affected by the second abnormal event, wherein the correlation is associated with determining that the same entity is affected by the first abnormal event and the second abnormal event (see paras. 604-606, where detected anomalous events/activities are associated with entities, wherein each of the entities includes identifications comprising a user account, an identifier, and IP addresses, and wherein the identifications are associated with the same entity (i.e., same user account); see para. 136, where a compromised account is used to conduct malicious activities; see para. 186, where anomalies/threats are detected based on time-series analysis (e.g., number of log-ins per hour); see para. 350, where an anomaly associated with an account is detected; see paras. 442-443, where anomalies/threats are identified from event data generated from user log-ins to an account).

Regarding claim(s) 11: Muddu discloses: wherein: the telemetry data associated with the first modality includes a first timestamp associated with the first abnormal event, the telemetry data associated with the second modality includes a second timestamp associated with the second abnormal event, and determining that the correlation is indicative of the security incident is further based at least in part on the first timestamp and the second timestamp (see fig. 4 and paras. 135-137, 147-148, 163, where data/event data/machine data is received from various data sources (i.e., modalities, including a first modality and a second modality), and events/activities are derived from the data and analyzed to detect anomalous events/activities (i.e., a first abnormal event and a second abnormal event) associated with threats/attacks/malware; see paras. 409, 412, 428, and/or 434, where anomalous events/activities are detected based on timestamps from the event data).

Regarding claim(s) 12: Muddu discloses: determining a length of a period of time between the first timestamp and the second timestamp, wherein determining that the correlation is indicative of the security incident is further based at least in part on the length of the period of time (see paras. 186, 217, 221, 224, and/or 317).

Regarding claim(s) 13: Muddu discloses: wherein the telemetry data associated with the first modality is different from the telemetry data associated with the second modality, the telemetry data associated with the first modality comprising at least one of: a web proxy log, a file execution log, a firewall log, a network connection log, an endpoint log, an email activity log, or an instant messaging log (see paras. 135, 163, 278, and/or 326).
Regarding claim(s) 14: Muddu discloses: inputting, into a machine-learned model, first telemetry data associated with the first abnormal event and second telemetry data associated with the second abnormal event; and receiving, from the machine-learned model, an output indicating that the first abnormal event and the second abnormal event are indicative of the security incident (see fig. 4 and paras. 170, 182).

Regarding claim(s) 15: Muddu discloses: wherein determining that the first abnormal event and the second abnormal event are each associated with the same entity is based at least in part on a mapping between endpoint identifiers associated with the first modality and the second modality and at least one network address associated with the same entity (see paras. 604-606, where detected anomalous events/activities are associated with entities, wherein each of the entities includes identifications comprising a user account, an identifier, and IP addresses, and wherein the identifications are associated with the same entity (i.e., same user account, same identifiers, same IP addresses)).

Regarding claim(s) 16: Muddu discloses: wherein detecting the first abnormal event comprises employing a first unimodal detector specifically configured for the first modality and wherein detecting the second abnormal event comprises employing a second unimodal detector specifically configured for the second modality (see fig. 4 and paras. 135, 158, 161, and/or 170 for various applications/analyzers/machine learning models used for detecting anomalous events/activities from various data sources).

Regarding claim(s) 17: See the rejection to claim 6 or 7.
Regarding claim(s) 19: Muddu discloses: wherein: the telemetry data associated with the first modality includes a first indication of the same entity affected by the first abnormal event, the telemetry data associated with the second modality includes a second indication of the same entity affected by the second abnormal event, and determining that the first abnormal event and the second abnormal event are each associated with the same entity is based at least in part on the first indication and the second indication (see paras. 604-606, where detected anomalous events/activities are associated with entities, wherein each of the entities includes identifications comprising a user account, an identifier, and IP addresses, and wherein the identifications are associated with the same entity; see para. 136, where a compromised account is used to conduct malicious activities; see para. 186, where anomalies/threats are detected based on time-series analysis (e.g., number of log-ins per hour); see para. 350, where an anomaly associated with an account is detected; see paras. 442-443, where anomalies/threats are identified from event data generated from user log-ins to an account).

Regarding claim(s) 20: See the rejection to claim 11.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUAN V. DOAN, whose telephone number is 571-272-3809. The examiner can normally be reached Monday – Thursday, 9:00am – 5:00pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PHILIP CHEA, can be reached at 571-272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUAN V DOAN/
Primary Examiner, Art Unit 2499
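As a plain-language aid (not part of the prosecution record), the correlation flow recited in claim 1 and mapped onto Muddu above can be sketched in a few lines of Python. All names, data values, and the one-hour window below are hypothetical illustrations, not details from the application or the reference.

```python
from collections import defaultdict

# Hypothetical mappings from (modality, endpoint identifier) to a user
# account, i.e., the claimed first and second mappings to the same entity.
endpoint_to_account = {
    ("web_proxy_log", "10.0.0.5"): "acct-jdoe",
    ("email_activity_log", "jdoe@corp.example"): "acct-jdoe",
}

# Abnormal events as reported by per-modality (unimodal) detectors.
abnormal_events = [
    {"modality": "web_proxy_log", "endpoint": "10.0.0.5", "ts": 1000},
    {"modality": "email_activity_log", "endpoint": "jdoe@corp.example", "ts": 1300},
]

def correlate(events, mapping, max_gap=3600):
    """Resolve each event to an entity via the mappings; flag an incident
    when events from two different modalities land on the same entity
    within max_gap seconds (cf. claims 11-12 on timestamps)."""
    by_entity = defaultdict(list)
    for ev in events:
        account = mapping.get((ev["modality"], ev["endpoint"]))
        if account is not None:
            by_entity[account].append(ev)
    incidents = []
    for account, evs in by_entity.items():
        modalities = {ev["modality"] for ev in evs}
        span = max(ev["ts"] for ev in evs) - min(ev["ts"] for ev in evs)
        if len(modalities) >= 2 and span <= max_gap:
            # Output an indication of the security incident.
            incidents.append({"entity": account, "modalities": sorted(modalities)})
    return incidents

print(correlate(abnormal_events, endpoint_to_account))
```

Here both events resolve to the same hypothetical account, so a single cross-modality incident is emitted; an event whose endpoint has no mapping, or that falls outside the window, would simply not correlate.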

Prosecution Timeline

Apr 10, 2024
Application Filed
Aug 10, 2025
Non-Final Rejection — §102
Nov 03, 2025
Interview Requested
Nov 12, 2025
Applicant Interview (Telephonic)
Nov 12, 2025
Examiner Interview Summary
Nov 13, 2025
Response Filed
Jan 12, 2026
Final Rejection — §102
Apr 10, 2026
Applicant Interview (Telephonic)
Apr 10, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592959: DETECTING MALICIOUS COMMAND AND CONTROL CLOUD TRAFFIC
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12593207: SYSTEMS AND METHODS FOR VERIFYING CANDIDATE COMMUNICATIONS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12580913: MANAGEMENT SYSTEM, MANAGEMENT METHOD, AND STORAGE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12574361: ELIMINATING A REDUNDANT LOGIN BY LEVERAGING A SECURE POSIX ENVIRONMENT SESSION
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12568088: ENTERTAINMENT INTERACTION BASED ON ACCESSING A SEPARATE SYSTEM TO POPULATE A HIDDEN FIELD
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 99% (+42.5%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 283 resolved cases by this examiner. Grant probability derived from career allow rate.
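As a quick sanity check (not part of the tool's methodology), the headline grant probability follows directly from the career counts reported above:

```python
# Career allow rate from the examiner's resolved-case counts.
granted, resolved = 228, 283
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 80.6%, displayed rounded as 81%
```

The with-interview figure is a modeled projection rather than simple arithmetic on these counts, so it is not reproduced here.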
