Prosecution Insights
Last updated: April 19, 2026
Application No. 18/648,201

IDENTIFYING THREATS USING AGGREGATED SECURITY DATA AND TRAINED MODELS

Status: Final Rejection (§102, §103)
Filed: Apr 26, 2024
Examiner: LEE, MICHAEL M
Art Unit: 2436
Tech Center: 2400 — Computer Networks
Assignee: Mastercard International Incorporated
OA Round: 2 (Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (above average; 217 granted / 259 resolved; +25.8% vs TC avg)
Interview Lift: +44.1% (strong; allowance rate among resolved cases with vs. without an interview)
Avg Prosecution: 3y 0m (27 applications currently pending)
Total Applications: 286 across all art units

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 22.6% (-17.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 259 resolved cases

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

This is a Final Office action in response to applicant's amendment filed on 1/2/2026. Claims 1, 5-8, 12, and 14-20 are amended. Claims 1-20 are pending and considered. Applicant's Oath or Declaration filed on 11/7/2025 has been acknowledged.

The objection to the Drawings has been withdrawn in light of applicant's Replacement Sheet submitted on 1/2/2026. The objections to claims 1 and 5 due to informalities have been withdrawn in light of applicant's amendment to the claims. See the updated Claim Objections below. The rejections of claims 6-7 and 14 under 35 USC 112(b) as being indefinite have been withdrawn in light of applicant's amendment to the claims. The rejection of claims 15-20 under 35 USC 101 as being directed to non-statutory subject matter has been withdrawn in light of applicant's amendment to the claims.

Response to Arguments

Applicant's arguments, see pages 9-10 of the Remarks filed 1/2/2026, with respect to the claims rejected under 35 USC 102 over the prior art of record have been fully considered and are persuasive in view of applicant's amendments to claims 8 and 15, respectively. Therefore, the rejection of the claims under 35 USC 102 has been withdrawn. However, upon an updated search, prior art, e.g., Muddu, was found to teach the amended limitation(s). The examiner asserts that the combination of Murphy and Muddu teaches all limitations recited in the amended independent claims. See the updated Claim Rejections under 35 USC 103 below. Applicant is encouraged to incorporate innovative features into the independent claims to advance the case.

Claim Objections

Claims 12 and 19 are objected to because of the following informalities: in claim 12, line 8, "and the indicators of occurrences …" may read "and the associated indicators of occurrences …". Similarly for claim 19, line 9. Appropriate correction is suggested.

Examiner Notes

The examiner cites particular paragraphs, columns, and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 8-9, 11, 14-16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Murphy et al. (US20230018895A1, hereinafter "Murphy"), in view of Muddu et al. (US20170063889A1, hereinafter "Muddu").

Regarding claim 8, Murphy teaches:

A computerized method (Murphy discloses a system and method for threat mitigation that utilize artificial intelligence and machine learning based on a security profile generated from consolidated platform information, see [Abstract]) comprising:

receiving a first group of security data from a first security data source (Fig. 8 at 400, and [0110]: Threat mitigation process 10 may obtain 400 system-defined consolidated platform information 236 for computing platform 60 from an independent information source), [wherein the first security data source is a tool that monitors data traffic into and/or out of a monitored system]; (see Muddu below for teachings of the limitations in brackets above and below)

receiving a second group of security data from a second security data source (Fig. 8 at 312, and [0111]: Further and as discussed above, threat mitigation process 10 may obtain 312 client-defined consolidated platform information 238 for computing platform 60 from a client information source), [wherein the second security data source is a platform that captures user details and other associated data during user interactions with the monitored system];

normalizing the first group of security data and the second group of security data such that the normalized first group of security data and the normalized second group of security data are compatible with a model trained for a use case (Fig. 8 at 406/408, homogenizing/normalizing the system-defined consolidated platform information prior to comparing the system-defined consolidated platform information to the client-defined consolidated platform information. Further refer to Fig. 17 at 908/910, and [0169]: When assigning 908 a threat level to the above-described security event, threat mitigation process 10 may assign 910 a threat level using artificial intelligence/machine learning. As discussed above and with respect to artificial intelligence/machine learning being utilized to process data sets, an initial probabilistic model may be defined, wherein this initial probabilistic model may be subsequently (e.g., iteratively or continuously) modified and revised, thus allowing the probabilistic models and the artificial intelligence systems (e.g., probabilistic process 56) to "learn" so that future probabilistic models may be more precise and may explain more complex data sets. And [0234]: Accordingly and through the use of probabilistic process 56, information may be processed so that a probabilistic model may be defined (and subsequently revised) to define training routine 272 for a specific attack);

predicting a future security event associated with the use case using the model, the normalized first group of security data, and the normalized second group of security data (e.g., [0170]: Once assigned 910 a threat level, threat mitigation process 10 may execute 912 a remedial action plan (e.g., remedial action plan 252) based, at least in part, upon the assigned threat level); and

presenting data associated with the predicted security event using a visualization layer (e.g., Fig. 14, and [0144]: Referring also to FIG. 14 and as will be discussed below, threat mitigation process 10 may generate 702 comparison information 750 that compares the current security-relevant capabilities of computing platform 60 to the comparative platform information determined 700 for the comparative platform to identify a threat context indicator for computing platform 60, wherein comparison information 750 may include graphical comparison information 752 (i.e., visualization layer)).

While Murphy teaches threat mitigation based on a first security data source and a second security data source, it does not specifically teach the following; in the same field of endeavor, Muddu teaches:

wherein the first security data source is a tool that monitors data traffic into and/or out of a monitored system (Muddu discloses a system and method of detecting security-related anomalies and threats in a computer network environment, see [Abstract]. Refer to Fig. 4, and [0163]: Data source 308 is a source of network management or analyzer data (e.g., event data related to traffic on a node, a link, a set of nodes, or a set of links) (i.e., the first security data source). The network management or analyzer data may be obtained from various network operating systems and protocols, such as Cisco Netflow™);

wherein the second security data source is a platform that captures user details and other associated data during user interactions with the monitored system ([0163]: data source 304 is a source of data pertaining to logs including, for example, user log-ins and other access events (i.e., the second security data source). These records may be generated from operational (e.g., network routers) and security systems (e.g., firewalls or security software products)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Muddu in the threat mitigation of Murphy by providing various data sources to be analyzed for anomalies and threats. This would have been obvious because the person having ordinary skill in the art would have been motivated to implement a security platform that analyzes various data sources to detect security-related anomalies and threats (Muddu, [Abstract], [0163]).
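For orientation, here is a minimal sketch of the claim 8 data flow as characterized in the rejection above: two security feeds are received, normalized onto a shared scale, scored by a trained model, and surfaced through a visualization layer. All names and the trivial scoring function are illustrative assumptions; nothing below is taken from the Murphy or Muddu disclosures.

```python
# Hypothetical sketch of the claim 8 data flow. The "model" here is a trivial
# mean-severity score; a real system would use a trained classifier.
from dataclasses import dataclass


@dataclass
class SecurityEvent:
    source: str      # e.g., "traffic_monitor" or "user_activity_platform"
    severity: float  # raw severity on the source's own scale
    user: str


def normalize(events: list[SecurityEvent], scale_max: float) -> list[float]:
    """Rescale one source's severities onto [0, 1] so both feeds are model-compatible."""
    return [e.severity / scale_max for e in events]


def predict_future_event(features: list[float]) -> float:
    """Stand-in for the trained model: a trivial mean-severity risk score."""
    return sum(features) / len(features) if features else 0.0


# First source: a tool monitoring data traffic into/out of the monitored system.
traffic_feed = [SecurityEvent("traffic_monitor", 7.0, "alice"),
                SecurityEvent("traffic_monitor", 9.5, "bob")]
# Second source: a platform capturing user details during interactions.
user_feed = [SecurityEvent("user_activity_platform", 3.0, "alice")]

features = normalize(traffic_feed, scale_max=10.0) + normalize(user_feed, scale_max=5.0)
risk = predict_future_event(features)
print(f"Predicted security-event risk: {risk:.2f}")  # stand-in for the visualization layer
```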
Regarding claim 15: claim 15 is a computer storage media claim that encompasses limitations similar to those of method claim 8. Therefore, claim 15 is rejected with the same rationale as applied against claim 8. In addition, Murphy teaches one or more non-transitory computer readable storage media having computer-executable instructions that, upon execution by a processor (Murphy discloses a system, computer program product, and method for threat mitigation that utilize artificial intelligence and machine learning based on a security profile generated from consolidated platform information, see [Abstract]; and see processor in, e.g., [0270], and [Claim 28]: computer program product residing on a non-transitory computer readable medium), perform a remedial operation in response to the detected anomalous event (Fig. 17 at 912, and [0170]: threat mitigation process 10 may execute 912 a remedial action plan).

Regarding claim 9, and similarly claim 16: the Murphy-Muddu combination teaches the computerized method of claim 8 and the one or more non-transitory computer readable storage media of claim 15. Murphy further teaches: wherein the first group of security data and the second group of security data are received using data interfaces including one or more of the following: application programming interfaces (APIs) associated with security data sources, connector interfaces, and data streaming interfaces (e.g., [0179]: When establishing 950 connectivity with a plurality of security-relevant subsystems, threat mitigation process 10 may utilize 952 at least one application program interface (e.g., API Gateway 224) to access at least one of the plurality of security-relevant subsystems).

Regarding claim 11, and similarly claim 18: the Murphy-Muddu combination teaches the computerized method of claim 8 and the one or more non-transitory computer readable storage media of claim 15. Murphy further teaches: wherein the predicted future security event includes one or more of: abnormal behavior associated with a user profile, a detected intrusion over an external network connection, and abnormal behavior by a running process (e.g., [0153]: When obtaining 802 platform performance information concerning the operation of computing platform 60, threat mitigation process 10 may (as discussed above): obtain 500 consolidated platform information for computing platform 60 to identify one or more … UBA (i.e., User Behavior Analytics) systems).

Regarding claim 14: the Murphy-Muddu combination teaches the computerized method of claim 8. Murphy further teaches: wherein presenting the data associated with the predicted future security event using the visualization layer includes displaying the data associated with the predicted future security event in a statistical data dashboard interface, wherein the presented data is included in statistical data associated with a security state of an associated system (e.g., Fig. 14, and [0144]: comparison information 750 that compares the current security-relevant capabilities of computing platform 60 to the comparative platform information determined 700 for the comparative platform to identify a threat context indicator for computing platform 60, wherein comparison information 750 may include graphical comparison information 752. And [0145]: Graphical comparison information 752 (which in this particular example is a bar chart) may identify one or more of: a current threat context score 754 for a client … a threat context score 758 for one or more vendor customers in a specific industry …).
Claims 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Murphy-Muddu as applied above to claims 8 and 15, respectively, further in view of Johnson et al. (US20200280573A1, hereinafter "Johnson"), and further in view of Meehan et al. (US20230153740A1, hereinafter "Meehan").

Regarding claim 10, and similarly claim 17: the Murphy-Muddu combination teaches the computerized method of claim 8 and the one or more non-transitory computer readable storage media of claim 15. The combination of Murphy-Muddu does not teach the following; in the same field of endeavor, Johnson teaches:

wherein normalizing the first group of security data and the second group of security data includes one or more of the following: scaling numerical data values of the first and second groups of security data such that the scaled data values are of similar scales (Johnson discloses a system and method for cybersecurity detection and mitigation using machine learning and advanced data correlation, see [Abstract]/[Title]. And [0026]: Data ingestion queue 204 may store incoming data as a buffer, and the normalization module may process data into a uniform style (e.g., data may be harmonized between disparate sources; if one security software vendor reports qualitative data on a scale of 1-10 while a second security software vendor reports similar qualitative data on a scale of 1-5, the second set of data from the second vendor may have its values doubled (e.g., to normalize the values))).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Johnson in the threat mitigation of Murphy-Muddu by normalizing data into a uniform style by scaling qualitative data from disparate sources. This would have been obvious because the person having ordinary skill in the art would have been motivated to have data from disparate sources harmonized to build adaptive models using machine learning techniques that integrate data from multiple different domains (Johnson, [Abstract], [0026]).

The combination of Murphy-Muddu-Johnson does not specifically teach the following; in a similar field of endeavor, Meehan teaches:

and generating new data features based on categorical data values of the first and second groups of security data such that the new data features can be used with the model (Meehan discloses methods and systems for using machine learning to categorize and select suggested source entities, see [Abstract]. And [0110]: Further, the categorization machine learning model may also output a previously identified category to be associated with the new normalized supplier entity name …, or the categorization machine learning model may output a new categorization for the new normalized supplier entity name (e.g., where the resource transaction data and resource line item(s) do not match previously identified resource transaction data and resource line item(s) for previously identified categories). Thus, the categorization machine learning model may also be trained to generate new categories for normalized supplier entity names).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Meehan in the threat mitigation of Murphy-Muddu-Johnson by outputting a new categorization for the new normalized supplier entity name. This would have been obvious because the person having ordinary skill in the art would have been motivated to use machine learning to categorize and select suggested source entities (Meehan, [Abstract]).
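To make the cited normalization concrete, below is an illustrative sketch, with hypothetical names, of the scale harmonization described in Johnson's [0026] (a 1-5 vendor scale doubled onto a 1-10 scale), plus a simple one-hot expansion as one plausible way new features could be generated from categorical values per claim 10. Neither snippet reproduces an implementation from the references.

```python
# Illustrative scale harmonization and categorical feature generation.
def harmonize(scores: list[float], source_max: float, target_max: float = 10.0) -> list[float]:
    """Rescale vendor scores so disparate sources share one scale."""
    factor = target_max / source_max  # a 1-5 source gets a factor of 2.0
    return [s * factor for s in scores]


def one_hot(values: list[str]) -> dict[str, list[int]]:
    """Turn categorical values into new binary feature columns."""
    return {c: [1 if v == c else 0 for v in values] for c in sorted(set(values))}


vendor_a = [8.0, 6.0]                               # already on the 1-10 scale
vendor_b = harmonize([4.0, 3.0], source_max=5.0)    # doubled: [8.0, 6.0]
print(vendor_a + vendor_b)                          # [8.0, 6.0, 8.0, 6.0]
print(one_hot(["malware", "phishing", "malware"]))  # two new binary features
```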
Claims 12 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Murphy-Muddu as applied above to claims 8 and 15, respectively, further in view of Karta et al. (US20240114052A1, hereinafter "Karta").

Regarding claim 12, and similarly claim 19: the Murphy-Muddu combination teaches the computerized method of claim 8 and the one or more non-transitory computer readable storage media of claim 15. The combination of Murphy-Muddu does not teach the following; in the same field of endeavor, Karta teaches:

further comprising: obtaining training data including training data instances with past data from the first and the second security data sources and associated indicators of occurrences of security events; providing the past data from the first and the second security data sources to the model as input; determining feedback data indicating accuracy of the model using output of the model and the indicators of occurrences of security events; and adjusting parameters of the model using the determined feedback data, thereby improving accuracy of the model at predicting future security events (Karta discloses a system and method for detecting and preventing network attacks in real-time using machine learning, see [Abstract]. And [0028]: The malicious attack classifier 305 determines a type of the corresponding malicious network attack. The network security system 140 trains the preprocessing classifier 300 and the malicious attack classifier 305 using a training data set 325. The training data set 325 includes historical data packets 330, which are data packets (e.g., similar to the data packet 200) that were transmitted during past malicious network attacks. And [0036]: The machine-learned models 300 and 305, as well as the postprocessing classifier, can be retrained based on the classified malicious signal noise 370 … In some embodiments, users and/or administrators of the network 105 may provide feedback to the network security system 140 on the accuracy of the machine-learned models).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Karta in the threat mitigation of Murphy-Muddu by using past historical attack data to retrain the machine learning model. This would have been obvious because the person having ordinary skill in the art would have been motivated to improve the performance of network security features and/or the determination of risk associated with various malicious network attacks to mitigate the effects of the attacks (Karta, [Abstract]).
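As a rough illustration of the feedback loop claim 12 recites (train on past data paired with outcome indicators, measure accuracy, adjust parameters), here is a minimal sketch under assumed names. The toy threshold update is for exposition only; it is not Karta's disclosed training method.

```python
# Toy feedback loop: compare model output against known occurrence indicators
# and nudge a single decision parameter to reduce error.
def train_threshold(past_scores: list[float], occurred: list[bool],
                    threshold: float = 0.5, step: float = 0.05, epochs: int = 50) -> float:
    """Adjust a decision threshold using feedback on prediction accuracy."""
    for _ in range(epochs):
        for score, label in zip(past_scores, occurred):
            predicted = score >= threshold
            if predicted and not label:
                threshold += step  # false positive: raise the bar
            elif not predicted and label:
                threshold -= step  # missed event: lower the bar
    return threshold


# Training instances: past model scores paired with indicators of whether a
# security event actually occurred.
scores = [0.9, 0.8, 0.3, 0.2, 0.7]
occurred = [True, False, False, False, True]
print(f"Adjusted threshold: {train_threshold(scores, occurred):.2f}")
```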
Regarding claim 20: the Murphy-Muddu combination teaches the one or more non-transitory computer readable storage media of claim 15. The combination of Murphy-Muddu does not teach the following; in the same field of endeavor, Karta teaches:

wherein performing a remedial operation in response to the detected anomalous event includes one or more of the following: blocking network traffic associated with the anomalous event, revoking access privileges of a user profile associated with the anomalous event, and halting a running process associated with the anomalous event (Karta discloses a system and method for detecting and preventing network attacks in real-time using machine learning, see [Abstract]. And [0022]: The network security system 140 implements security measures to detect, prevent, and mitigate attacks on the network 105 by malicious actors (e.g., including the malicious actor 130) … Examples of security operations include identifying an IP address of the malicious actor 130, notifying users of the network 105 of the attack, and blocking incoming data traffic from the malicious actor 130).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Karta in the threat mitigation of Murphy by blocking incoming data traffic from the malicious actor. This would have been obvious because the person having ordinary skill in the art would have been motivated to improve the performance of network security features and prevent spoofed IP attacks (Karta, [Abstract]).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Murphy-Muddu as applied above to claim 8, further in view of Tcherchian et al. (US20170118245A1, hereinafter "Tcherchian").

Regarding claim 13: the Murphy-Muddu combination teaches the computerized method of claim 8. The combination of Murphy-Muddu does not teach the following; in the same field of endeavor, Tcherchian teaches:

wherein presenting the data associated with the predicted future security event using the visualization layer includes displaying the data associated with the predicted future security event in a notification interface, wherein the presented data includes one or more of the following: a predicted datetime of the predicted future security event, a predicted user profile associated with the predicted future security event, a network vulnerability associated with the predicted future security event, and a running process associated with the predicted future security event (Tcherchian discloses a system and method for aggregating and correlating disparate and unrelated events to enable faster security event detection, see [Abstract]. And [0042]: the middleware component 208 may format the results of the evaluation into a visual format for display to system administrators on a computer display … The middleware component 208, in some embodiments, may identify the method of delivering notifications to system administrators, which may be delivered to any of the devices listed above. And [0053]: In addition, a security context generator 690 is shown, which is designed to analyze the unassociated events data aggregated by the enterprise service bus and information previously stored in the database 650 to identify relationships between disparate events and platform-specific data. The security context generator 690 leverages predictive analytics to understand the impact certain events and actions have on others from different sources …).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Tcherchian in the threat mitigation of Murphy-Muddu by aggregating and correlating disparate and unrelated events. This would have been obvious because the person having ordinary skill in the art would have been motivated to identify security incidents indicative of security threats (Tcherchian, [Abstract]).
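Purely for illustration, one shape a claim 13-style notification payload could take, carrying the fields the claim enumerates (predicted datetime, user profile, vulnerability, running process). Every name and value below is a hypothetical placeholder, not data from the application or references.

```python
# Hypothetical notification payload for a predicted future security event.
import json
from datetime import datetime, timedelta, timezone

notification = {
    "predicted_datetime": (datetime.now(timezone.utc) + timedelta(hours=6)).isoformat(),
    "predicted_user_profile": "svc-batch-01",          # profile flagged by the model
    "network_vulnerability": "VULN-PLACEHOLDER-0001",  # associated vulnerability ID
    "running_process": "cron",                         # associated running process
}
print(json.dumps(notification, indent=2))  # delivered via the notification interface
```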
Allowable Subject Matter

Claims 1-7 are allowable subject matter. As allowable subject matter has been indicated, applicant's reply must either comply with all formal requirements or specifically traverse each requirement not complied with. See 37 CFR 1.111(b) and MPEP § 707.07(a).

The following is a statement of reasons for the indication of allowable subject matter: Claim 1 recites the unique features of "receive a first group of security data from a first security data source, wherein the first security data source is an intrusion prevention system (IPS), and the first group of security data includes authentication data; receive a second group of security data from a second security data source, wherein the second security data source is a firewall, and the second group of security data includes network flow data; normalize the authentication data and the network flow data such that the normalized authentication data and the normalized network flow data are compatible with a model trained for detection of suspicious login events, wherein the normalizing includes: adjusting a scale of event data of the authentication data such that the event data of the authentication data and event data of the network flow data are of a same scale; and synchronizing the event data of the authentication data and the event data of the network flow data with respect to time; detect a suspicious login event using the model, the normalized authentication data, and the normalized network flow data; and automatically present data associated with the detected suspicious login event using a visualization layer, wherein the automatically presented data includes a portion of the authentication data associated with an identifier of the IPS and a portion of the network flow data associated with an identifier of the firewall", as defined by applicant.

The prior art identified (Murphy, Muddu, Karta, Johnson, Meehan, Tcherchian), either singly or in combination, fails to anticipate or render obvious the claimed limitations of claim 1. Dependent claims 2-7 depend on claim 1 and further limit the claims.
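To visualize the two normalization steps the reasons for allowance emphasize (same-scale event data plus time synchronization across the IPS and firewall feeds), here is a hedged sketch. The windowed bucketing and all names are assumptions, not the applicant's disclosed implementation.

```python
# Hedged sketch: rescale one feed, then align both feeds into shared time windows.
from collections import defaultdict


def rescale(values: list[float], new_max: float = 1.0) -> list[float]:
    """Adjust the scale of event data so both feeds share the same range."""
    peak = max(values) if values else 1.0
    return [v * new_max / peak for v in values] if peak else values


def synchronize(auth: list[tuple[int, float]], flow: list[tuple[int, float]],
                window: int = 60) -> dict[int, tuple[list[float], list[float]]]:
    """Align (timestamp, value) events from both sources into shared time windows."""
    buckets: dict[int, tuple[list[float], list[float]]] = defaultdict(lambda: ([], []))
    for ts, v in auth:
        buckets[ts // window][0].append(v)
    for ts, v in flow:
        buckets[ts // window][1].append(v)
    return dict(buckets)


auth_scores = rescale([2.0, 9.0])                  # IPS severities, now on [0, 1]
auth_events = [(5, auth_scores[0]), (65, auth_scores[1])]
flow_events = [(10, 0.4), (70, 0.8)]               # firewall flow scores (epoch seconds)
print(synchronize(auth_events, flow_events))       # windows 0 and 1 pair both feeds
```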
Citation of References

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are cited but not relied upon in this Office action:

Nair et al. (US20230109926A1) discloses a threat management facility for an enterprise network integrating native threat management capabilities with threat data from a cloud service provider.

Sweeney et al. (US20180124091A1) discloses a method for assessing a cyber security risk based on cyber security precursor information from a plurality of sources.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL M LEE, whose telephone number is (571) 272-1975. The examiner can normally be reached M-F, 8:30 AM - 5:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shewaye Gelagay, can be reached at (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL M LEE/
Primary Examiner, Art Unit 2436

Prosecution Timeline

Apr 26, 2024: Application Filed
Oct 14, 2025: Non-Final Rejection — §102, §103
Nov 05, 2025: Interview Requested
Nov 14, 2025: Examiner Interview Summary
Nov 14, 2025: Applicant Interview (Telephonic)
Jan 02, 2026: Response Filed
Feb 26, 2026: Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596786: ANOMALOUS EVENT AGGREGATION FOR ANALYSIS AND SYSTEM RESPONSE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579301: Data Plane Management Systems and Methods (granted Mar 17, 2026; 2y 5m to grant)
Patent 12580927: DETECTING AND PROTECTING CLAIMABLE NON-EXISTENT DOMAINS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579279: System and Method for Summarization of Complex Cybersecurity Behavioral Ontological Graph (granted Mar 17, 2026; 2y 5m to grant)
Patent 12580938: CONDITIONAL HYPOTHESIS GENERATION FOR ENTERPRISE PROCESS TREES (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 99% (+44.1%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 259 resolved cases by this examiner. Grant probability derived from career allow rate.
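One hedged reading of how these figures relate (the dashboard does not state its formula): if the +44.1% interview lift is measured in percentage points, subtracting it from the with-interview figure implies the allowance rate for resolved cases without an interview.

```python
# Assumption, not the dashboard's stated formula: treat the interview lift as
# percentage points and back out the implied without-interview allowance rate.
with_interview = 0.99
lift_points = 0.441
print(f"Implied without-interview rate: {with_interview - lift_points:.1%}")  # 54.9%
```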
