Prosecution Insights
Last updated: April 19, 2026
Application No. 18/309,693

RISK BASED ALERTING AND ENTITY PRIORITIZATION DETECTION FRAMEWORK

Non-Final OA (§102, §103)
Filed: Apr 28, 2023
Examiner: GILLESPIE, KAMRYN JORDAN
Art Unit: 2408
Tech Center: 2400 (Computer Networks)
Assignee: Snowflake Inc.
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 3-4
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (16 granted / 22 resolved; +14.7% vs TC avg; above average)
Interview Lift: +50.0% (strong; resolved cases with interview)
Avg Prosecution: 2y 8m (typical timeline); 17 currently pending
Total Applications: 39 (career history, across all art units)
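The headline figures above are internally consistent; as a quick illustrative check (this is not the analytics platform's actual formula, which is not disclosed here):

```python
# Illustrative arithmetic check of the examiner statistics shown above.
granted, resolved = 16, 22

allow_rate = granted / resolved          # career allowance rate
print(f"allow rate: {allow_rate:.1%}")   # 72.7%, displayed as 73%

# "+14.7% vs TC avg" implies a Tech Center average near 58%.
implied_tc_avg = allow_rate - 0.147
print(f"implied TC avg: {implied_tc_avg:.1%}")
```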

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 26.4% (-13.6% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 22 resolved cases

Office Action

§102 §103
Detailed Action

This communication is in response to the Request for Continued Examination (RCE) filed 10/15/2025. Claims 1-11, 13-18, and 20-22 are pending. Claims 12 and 19 remain cancelled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

This communication is in response to applicant's RCE filed on 10/15/2025. Claims 1-11, 13-18, and 20-22 are pending. Applicant's arguments filed on 09/22/2025 have been fully considered but they are not persuasive for the following reasons:

Applicant's Argument: "At least a portion of the recitations of limitations from amended claim 1 were not explicitly considered by the Examiner in setting forth the rejections of the pending claims in the Office Action. However, based on a review of Elgressy, Applicant respectfully submits that Elgressy fails to disclose at least a portion of the above referenced recitations of claim 1. For example, Elgressy focuses on static risk scoring rather than accumulation of risk scores over time. That is, Elgressy fails to describe a process in which a risk score is determined "by combining [a] risk score corresponding to the first time period with multiple risk scores of the one or more security activities accumulated over multiple time periods." Thus, Elgressy fails to disclose "generating a cumulative risk score," "determining the cumulative risk score exceeds a predetermined risk threshold," and "in response to determining the cumulative risk score exceeds the predetermined risk threshold, generating an alert output based on the cumulative risk score," as recited by claim 1. Accordingly, because each and every element of independent claim 1 is not disclosed in Elgressy, as arranged in the claim and in as complete detail as in the claim, claim 1 is not anticipated by Elgressy. Thus, it is submitted that claim 1 is allowable.
In view of the remarks submitted above, it is also submitted that independent claims 13 and 20 are also allowable. Dependent claims are allowable at least by virtue of their dependence upon claims 1, 13, or 20. Moreover, the dependent claims are each patentable based on elements recited therein. Thus, it is respectfully requested that these rejections be reconsidered and withdrawn and that the claims be allowed."

Examiner's Response: Applicant's arguments have been fully considered, but are rendered moot upon further consideration. The arguments are directed to amended limitations that were not present for consideration within the previous Office Action, mailed 07/21/2025. Further, Elgressy describes in column 9, lines 13-16, "the group or risk profile for a specific user is a dynamic aspect of that person, including their behavior and role, and events that occur over a relevant timeframe." and in column 9, lines 17-26, "As used herein, the term "dynamic" as used with reference to the membership of a person, group, sub-group, or target type refers to the characteristic that the members of a group, sub-group, or target type, or the category a person is placed into, are not fixed and may change over time. Such changes can be due to… a time period over which certain events are counted, a change in a person's behavior, etc." to teach a process in which a risk score is determined "by combining [a] risk score corresponding to the first time period with multiple risk scores of the one or more security activities accumulated over multiple time periods." With this, Elgressy discloses "generating a cumulative risk score," "determining the cumulative risk score exceeds a predetermined risk threshold," and "in response to determining the cumulative risk score exceeds the predetermined risk threshold, generating an alert output based on the cumulative risk score," as recited by claim 1.

Applicant's Argument: "Claim 6 was rejected under 35 U.S.C.
§ 103 over Elgressy in view of Jones (U.S. 2022/0038481). Claims 7-10, 16-17 and 22 were rejected under 35 U.S.C. § 103 over Elgressy in view of Marsenic (U.S. 2023/0132703). As noted above, Elgressy fails to disclose each and every element of independent claims 1, 13, and 20. Jones and Marsenic fail to remedy the deficiencies of Elgressy set forth above. Claims 6-10, 16, 17, and 22 depend on claims 1, 13, and 20, respectively, and thus include the limitations of claims 1, 13, and 20, respectively. Thus, claims 6-10, 12, 16, 17, and 22 are allowable over any combination of Elgressy, Jones, and Marsenic. Accordingly, Applicant respectfully requests that the rejections of claims 6-10, 16, 17, and 22 be reconsidered and withdrawn and that the claims be allowed."

Examiner's Response: Applicant's arguments have been fully considered, but are rendered moot, as the argument is directed to alleged deficiencies of the rejection of amended claim 1, whose limitations have been demonstrated to be taught by the prior art. See the full rejection in the response below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 11, 13-15, 18, and 20-21 are rejected under 35 U.S.C. 102 as being anticipated by Elgressy (US 11636213 B1), hereafter Elgressy.

Regarding Claim 1, Elgressy teaches:

A computer-implemented method comprising: accessing, by one or more processors of an alerting system, security event data generated by one or more computing devices, the security event data comprising network traffic data that identifies a combination of any one or more of: source IP addresses, destination IP addresses, ports, protocols, payloads, timestamps, and intervals ((col. 20, lines 49-56) "The events are detected by Security Products 240 and in response, Security Products 240 generate signals and/or data. The signals and/or data are representative of the attack and/or information characterizing some aspect of the attack. This may include but is not limited to an alert, time stamp, counter value, indication of an attack mechanism, attack payload, etc.");

computing, by the one or more processors, an identity prioritization score by applying an identity prioritization algorithm to the security event data ((col. 22, lines 31-61) "obtain data regarding the person's title, role and responsibilities from a directory of employees and information about access privileges to sensitive systems and data from a privilege management system…the processing may include application of experience to convert a signal or data into a risk score or measure (in absolute or relative terms)", (col. 12, lines 43-47) "As noted, this segmentation can be viewed as sorting a set of people based on their likelihood of being a target of an attack and the likelihood of an attack being successful…");

computing, by the one or more processors, an asset prioritization score by applying an asset prioritization algorithm to the security event data ((col. 10, lines 1-9) "In some examples, the attack may be intended to obtain unauthorized access to data or information, to devices, to networks, or to systems. In one example, an attack may be in the form of an attempt to obtain a person's credentials, such as username and password. The cybersecurity risk or risk may be expressed in any suitable manner, including, but not limited to a score, a range of scores, a descriptive level or degree, an indication of inclusion in a specific risk category or group, etc." One of ordinary skill in the art would appreciate that data, information, devices, networks, systems, and credentials are all recognized as assets within the art.);

determining, by the one or more processors, a detection likelihood score of one or more security activities identified in the security event data, by applying a detection likelihood algorithm to the security event data ((col. 27, lines 40-49) "a machine learning (ML) model may be trained to assist in determining a cybersecurity risk score, metric, or level for a specific risk contribution for a person... As shown in the figure, in one example, a machine learning model 302 may be trained using a set of training data. The training data may include information relevant to each of a set of people's likelihood of being attacked and/or the likelihood of an attack being successful");

computing, by the one or more processors, a risk score of the one or more security activities by applying a risk-based algorithm that is based on the identity prioritization score, the asset prioritization score, and the detection likelihood score of the one or more security activities ((col. 29, lines 28-30) "Next, at step or stage 332 the person or employee's overall cybersecurity risk score, metric, or profile may be determined. This may be the result of combining the risk scores or metrics for a plurality of risk contributions.", (col. 14, lines 39-46) "The data for each person may include signals and information describing the person with regards to one or more features (such as role, seniority, behaviors, previous levels or types of cybersecurity attacks, etc.). Associated with the data for each person is a label or annotation indicating the "correct" risk segment, classification, metric, cybersecurity risk category, score, etc. for that person.", (col. 14, lines 54-63) "After training, the model operates to receive as an input a new person's characterizing data (which may be tokenized into words, but this is not required) and in response generate an output representing the person's expected or predicted cybersecurity risk (e.g., expressed as a score, metric, classification, relative level, etc.) as a result of those characteristics. As mentioned, this score or metric may represent one risk aspect or factor, such as the contribution of a leaf in the risk modeling tree (or a similar representation of risk contributions) to the overall risk.", (col. 7, lines 60-67) "The scores and assignment to a higher-level group can be used to sort a set of people, where the factors may suggest a person's likelihood of being a target of an attack (e.g., their previous history of being attacked and/or their access to potentially valuable information) and/or the likelihood of an attack being successful (e.g., based on their behavior or cybersecurity training experience)."), the risk score corresponding to a first time period ((col. 9, lines 13-16) "the group or risk profile for a specific user is a dynamic aspect of that person, including their behavior and role, and events that occur over a relevant timeframe.");

generating a cumulative risk score by combining the risk score corresponding to the first time period with multiple risk scores of the one or more security activities accumulated over multiple time periods ((col. 9, lines 17-26) "As used herein, the term "dynamic" as used with reference to the membership of a person, group, sub-group, or target type refers to the characteristic that the members of a group, sub-group, or target type, or the category a person is placed into, are not fixed and may change over time. Such changes can be due to… a task they are assigned or have completed, a change to a threshold for "membership" in a group or sub-group, a time period over which certain events are counted, a change in a person's behavior, etc.");

determining the cumulative risk score exceeds a predetermined risk threshold ((col. 16, lines 15-19) "For example, a process may convert specific types of data into static risk scores, may use ranges of counters, apply thresholds, perform a comparison to company and industry averages, or may use specific algorithms which add additional factors to a risk score calculation.");

in response to determining the cumulative risk score exceeds the predetermined risk threshold, generating an alert output based on the cumulative risk score ((col. 20, lines 49-56) "The events are detected by Security Products 240 and in response, Security Products 240 generate signals and/or data. The signals and/or data are representative of the attack and/or information characterizing some aspect of the attack. This may include but is not limited to an alert, time stamp, counter value, indication of an attack mechanism, attack payload, etc."); and

communicating the alert output to at least one device configured to adjust an operation of the one or more computing devices based on the cumulative risk score ((col. 8, lines 43-51) "Segmenting the people in an organization into the groups, sub-groups and intersections of sub-groups (and hence into target types or profiles) described herein may provide a cybersecurity team with one or more of the following benefits: The ability to associate people in an organization with their relative degree of risk, and in response to prioritize the application of cybersecurity prevention and remediation services,", (col. 34, lines 24-29) "For example, in the context of the present application, such an architecture may be used to provide email analysis and filtering services, network cybersecurity services, risk evaluation services, employee segmentation services, risk remediation services, etc. through access to one or more applications or models.", (col. 24, lines 20-24) "As a result of the segmentation, embodiments enable a cybersecurity analyst to perform a set of functions, including but not limited to: Set or modify a security policy/protocol applicable to a specific group, sub-group, or target group", (cols. 29-30, lines 58-2) "In some examples, the security process or protocol may involve one or more of generating alerts, blocking access to sites, filtering of posted information, preventing access to systems, websites, or data during certain times, review of on-line social network or other postings to remove company specific information, development of rules for employee use of social network sites, increased training, restrictions on the use of specific vendors or services, etc. Other possible security processes or protocols that may be applied include, but are not limited to or required to include: Denying access from externally controlled networks, such as login from an internet café,").

Regarding claim 13, claim 13 is rejected for similar reasoning and rationale as claim 1. Elgressy also teaches: A computing apparatus ((col. 3, lines 35-38) "Embodiments of the disclosure are directed to systems, apparatuses, and methods for more effectively preparing for and responding to cybersecurity threats directed at people or at groups of people.").

Regarding claim 20, claim 20 is rejected for similar reasoning and rationale as claim 1. Elgressy also teaches: A non-transitory computer-readable storage medium ((col. 6, lines 15-18) "The processing element or elements may be programmed with a set of executable instructions (e.g., software instructions), where the instructions may be stored in a suitable non-transitory data storage element.").

Regarding claim 2, Elgressy teaches: The computer-implemented method of claim 1, wherein the at least one device corresponds to a Security Operation Centers (SOC) and an Incident Response (IR) team ((col. 8, lines 43-51) "Segmenting the people in an organization into the groups, sub-groups and intersections of sub-groups (and hence into target types or profiles) described herein may provide a cybersecurity team with one or more of the following benefits: The ability to associate people in an organization with their relative degree of risk, and in response to prioritize the application of cybersecurity prevention and remediation services,", (col. 34, lines 24-29) "For example, in the context of the present application, such an architecture may be used to provide email analysis and filtering services, network cybersecurity services, risk evaluation services, employee segmentation services, risk remediation services, etc. through access to one or more applications or models.", (col. 24, lines 20-24) "As a result of the segmentation, embodiments enable a cybersecurity analyst to perform a set of functions,").

Regarding claim 14, claim 14 is rejected for similar reasoning and rationale as claim 2. Elgressy also teaches: A computing apparatus ((col. 3, lines 35-38) "Embodiments of the disclosure are directed to systems, apparatuses, and methods for more effectively preparing for and responding to cybersecurity threats directed at people or at groups of people.").

Regarding claim 3, Elgressy teaches: The computer-implemented method of claim 1, wherein the identity prioritization algorithm is configured to identify identity prioritization parameters and compute an identity priority scalar score based on values of the identity prioritization parameters and an identity scalar map ((col. 4, lines 56-62) "FIG. 3(a) is flowchart or flow diagram illustrating a method, process, operation or function for obtaining data from security events or products, processing that data to convert it into cybersecurity risk measures, mapping the risk measures to a risk modeling "tree", and using that mapping to segment a set of people into risk groups, sub-groups, and target groups, in accordance with embodiment,", (col. 7, lines 45-57) "At a high level, embodiments of the system and methods described herein provide a cybersecurity team with techniques to segment people into different groups corresponding to different levels and types of risk—this process is termed "People Risk Segmentation (PRS)" herein. In one embodiment, these groups may include people that belong to one or more of a group of Attacked People (AP), Vulnerable People (VP), and Privileged People (PP). A risk score, metric, or level may be associated with each member of each group. The risk score, metric, or level may be a result of combining other scores, metrics, or levels obtained from an evaluation of factors that impact a person's likelihood of being attacked or of an attack being successful.").

Regarding claim 15, claim 15 is rejected for similar reasoning and rationale as claim 3. Elgressy also teaches: A computing apparatus ((col. 3, lines 35-38) "Embodiments of the disclosure are directed to systems, apparatuses, and methods for more effectively preparing for and responding to cybersecurity threats directed at people or at groups of people.").

Regarding claim 21, claim 21 is rejected for similar reasoning and rationale as claim 3. Elgressy also teaches: A non-transitory computer-readable storage medium ((col. 6, lines 15-18) "The processing element or elements may be programmed with a set of executable instructions (e.g., software instructions), where the instructions may be stored in a suitable non-transitory data storage element.").
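For readers following the dispute, the claim 1 flow mapped above (per-period risk scoring, accumulation across periods, threshold comparison, alert) can be sketched minimally as below. This is a hypothetical illustration only: the names, weights, and threshold are invented, and nothing here is code from the application or from Elgressy.

```python
# Minimal sketch of the claim-1 flow at issue: a per-period risk score
# (claim 11 frames it as a product of the identity, asset, and detection
# likelihood scores) is accumulated across periods, and an alert fires
# only when the cumulative value crosses a preset threshold.
# All names and values are hypothetical.
RISK_THRESHOLD = 10.0

def period_risk(identity, asset, detection):
    # one period's risk score from the three prioritization scores
    return identity * asset * detection

def maybe_alert(history, current):
    # "combining [the] risk score corresponding to the first time period
    # with multiple risk scores ... accumulated over multiple time periods"
    cumulative = current + sum(history)
    if cumulative > RISK_THRESHOLD:
        return f"ALERT: cumulative risk {cumulative:.1f} > {RISK_THRESHOLD}"
    return None  # no single period need exceed the threshold on its own

history = [period_risk(1.0, 2.0, 0.9), period_risk(2.0, 2.0, 1.0)]
print(maybe_alert(history, current=period_risk(3.0, 2.0, 0.8)))
```

The point of contention is visible in the sketch: the alert condition depends on the accumulated total, not on any single period's static score.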
Regarding claim 4, Elgressy teaches: The computer-implemented method of claim 3, wherein the identity prioritization parameters comprise at least one of: a computing environment parameter, a privilege within the computing environment parameter, an employment status parameter, an employment type parameter, or an employee profile parameter ((col. 13, lines 15-44) "The data or information used to determine a risk profile, metric, score, classification, segmentation, category, or other indication of a degree of risk for a specific person may include, but is not limited to or required to include: ... information about personal privileges and access; for example, a person's access to sensitive data and applications,").

Regarding claim 5, Elgressy teaches: The computer-implemented method of claim 4, wherein the identity prioritization algorithm is configured to calculate a total identity risk score based on a sum of multiple identity risk scores ((col. 45, lines 40-52) "Imminent Targets The imminent targets group contains users who are VAP, VPP & VVP together. Major Targets The Major targets group contains users who are both VAP & VPP & not VVP. Latent Targets The Latent targets group contains users who are both VPP & VVP Soft Targets The Soft targets group contains users who are both VAP & VVP."); a probability of being a first type of employee ((col. 45, lines 21-36) "VAP—Very Attacked People The VAP group contains all the users whose Attacked risk score is above the threshold. The threshold is the Attacked risk 90th percentile values and all users who have an attacked risk score above it, will be part of this group."), a probability of being a second type of employee ("VVP—Very Vulnerable People The VVP group contains all the users whose vulnerable risk score is above the threshold."), a probability of being a third type of employee ("VPP—Very Privileged People The VPP group contains all the users whose privileged risk score is above the threshold.").
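Claim 5's "total identity risk score based on a sum of multiple identity risk scores" reduces, on its simplest reading, to an additive combination of per-type scores (here framed as probabilities of being each employee type, mirroring Elgressy's VAP/VVP/VPP groupings). A hedged sketch, with invented names and numbers:

```python
# Hypothetical sketch of claim 5: the total identity risk score is the sum
# of several per-type identity risk scores. All names and values are
# illustrative, not taken from the claim or from Elgressy.
def total_identity_risk(p_attacked, p_vulnerable, p_privileged):
    return p_attacked + p_vulnerable + p_privileged

total = total_identity_risk(0.5, 0.25, 0.75)
print(total)  # 1.5
```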
Regarding claim 11, Elgressy teaches: The computer-implemented method of claim 1, wherein the risk score of the one or more security activities is a product of the identity prioritization score, the asset prioritization score and the detection likelihood score ((col. 7, lines 54-57) "The risk score, metric, or level may be a result of combining other scores, metrics, or levels obtained from an evaluation of factors that impact a person's likelihood of being attacked or of an attack being successful.").

Regarding claim 18, claim 18 is rejected for similar reasoning and rationale as claim 11. Elgressy also teaches: A computing apparatus ((col. 3, lines 35-38) "Embodiments of the disclosure are directed to systems, apparatuses, and methods for more effectively preparing for and responding to cybersecurity threats directed at people or at groups of people.").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Elgressy in view of Jones (US 20220038481 A1), hereafter Jones.

Regarding claim 6, Elgressy teaches the computer-implemented method of claim 5. Elgressy does not teach, but in a related art, Jones teaches: wherein the identity prioritization algorithm is configured to calculate interquartile ranges for the total identity risk score to dynamically set a threshold for identity priority within a population, and to identity the identity priority scalar score based on the total identity risk score relative to the interquartile ranges ([0055] "In some examples, additionally or alternatively to comparing individual risk values associated with individual attributes to respective threshold risk values, the network security system 100 computes a combined risk value and compares the combined risk value to a combined threshold. The network security system 100 may statistically aggregate multiple attribute risk values to obtain the combined risk value for recent transactions. The multiple attribute risk values may be aggregated by using the statistical mean of the attribute risk values,").

Since both Elgressy and Jones are directed to cybersecurity prioritization, the same field of endeavor as the claimed invention, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify and combine the teachings of Elgressy and Jones by incorporating the algorithm configured to calculate interquartile ranges for the total identity risk score to dynamically set a threshold for identity priority within a population, and to identify the identity priority scalar score based on the total identity risk score relative to the interquartile ranges, of Jones into Elgressy for providing cybersecurity measures and prioritization as claimed.
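The interquartile-range mechanism recited in claim 6 (quartiles of a population's total identity risk scores set a dynamic priority threshold) can be sketched as below. Function and variable names are illustrative, not taken from the claim or from Jones.

```python
import statistics

# Hedged sketch of claim 6's idea: compute the quartiles of a population's
# total identity risk scores, then bucket each score relative to those
# quartiles, so the priority threshold adapts to the population.
def identity_priority(score, population):
    q1, q2, q3 = statistics.quantiles(population, n=4)  # quartile cut points
    if score >= q3:
        return 3  # high priority
    if score >= q2:
        return 2
    if score >= q1:
        return 1
    return 0

population = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(identity_priority(7.5, population))  # falls above the third quartile
```

Because the quartiles are recomputed from the population, the effective threshold shifts as the population's scores shift, which is the "dynamically set" aspect of the limitation.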
The motivation to combine is to improve detection and classification of cybersecurity aspects, since this implementation offers improved analysis and performance due to the inherent data processing.

Claims 7-10, 16-17, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Elgressy in view of Marsenic (US 20230132703 A1), hereafter Marsenic.

Regarding claim 7, Elgressy teaches the computer-implemented method of claim 1. Elgressy does not teach, but in a related art, Marsenic teaches: wherein the asset prioritization algorithm is configured to identify asset prioritization parameters and compute an asset priority scalar score based on values of the asset prioritization parameters and an asset scalar map ([0037] "A first graph is drawn with directed edge weights representing the estimated probability of rapid lateral movement from the source to the destination entity.", [0139] "If the pattern of behaviours under analysis is believed to be indicative of a malicious actor, then a score of how confident is the system in this assessment of identifying whether the unusual pattern was caused by a malicious actor is created. Thereafter, the AI cyber security system 100 may also have a scoring module (or the analyser module itself) configured to assign a threat level score or probability indicative of what level of threat does this malicious actor pose (e.g., as shown with the scores depicted in the graph 600 of FIG. 12).", [0055] "In additional embodiments, users can also seed the graph with the most institutionally important entities (e.g., those relating to high level managers, CTOs, COOs, etc.). Subsequently, an importance score can be computed for each node in the graph.").
Since both Elgressy and Marsenic are directed to cybersecurity prioritization, the same field of endeavor as the claimed invention, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify and combine the teachings of Elgressy and Marsenic by incorporating the asset prioritization algorithm configured to identify asset prioritization parameters and compute an asset priority scalar score based on values of the asset prioritization parameters and an asset scalar map of Marsenic into Elgressy for providing cybersecurity measures and prioritization as claimed. The motivation to combine is to improve detection and classification of cybersecurity aspects, since this implementation offers improved analysis and performance due to the inherent data processing.

Regarding claim 16, claim 16 is rejected for similar reasoning and rationale as claim 7. Elgressy also teaches: A computing apparatus ((col. 3, lines 35-38) "Embodiments of the disclosure are directed to systems, apparatuses, and methods for more effectively preparing for and responding to cybersecurity threats directed at people or at groups of people.").

Regarding claim 22, claim 22 is rejected for similar reasoning and rationale as claim 7. Elgressy also teaches: A non-transitory computer-readable storage medium ((col. 6, lines 15-18) "The processing element or elements may be programmed with a set of executable instructions (e.g., software instructions), where the instructions may be stored in a suitable non-transitory data storage element.").

Regarding claim 8, Elgressy-Marsenic teaches: The computer-implemented method of claim 7, wherein the asset prioritization parameters comprise at least one of: an asset environment parameter, a workload priority parameter, a data type parameter, an expected parameter, a security agent enabled parameter, vulnerabilities parameter, maintenance parameter (Elgressy, (col. 13, lines 16-35) "The data or information used to determine a risk profile, metric, score, classification, segmentation, category, or other indication of a degree of risk for a specific person may include, but is not limited to or required to include…information about behavioral vulnerabilities of the person obtained from the security products…").

Regarding claim 9, Elgressy-Marsenic teaches: The computer-implemented method of claim 8, wherein the asset priority scalar score is based on a product of quantitative values of the asset prioritization parameters (Marsenic, [0071] "The AI based cyber security system can factor how important nodes are based on what is discussed in where users and their devices are ranked based on their importance in the organization. Resources (on premises via SMB, through SaaS logs, etc.) observed in user activity are recorded. Resources can be ranked for their impact and ability to propagate. Resources with more than one user interacting, or users interacting who have a high impact score as derived based on user importance, can be considered high impact", [0132] "Similarly, the AI based cyber security system can determine vulnerable groups of devices and prioritize their protection based on the fact that similar devices to one device of each of the vulnerable groups has already been attacked and therefore is at a higher risk, i.e., has a high task score." The reasons of obviousness have been noted in the rejection of claim 7 above and are applicable herein).
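Claim 9's limitation (an asset priority scalar score "based on a product of quantitative values of the asset prioritization parameters") amounts to a multiplicative combination of parameter values. A minimal sketch with hypothetical parameter names, not code from the application or the cited references:

```python
import math

# Illustrative sketch of claim 9: the asset priority scalar score is the
# product of the quantitative values assigned to each asset prioritization
# parameter. Parameter names and weights are invented for illustration.
def asset_priority(params):
    return math.prod(params.values())

score = asset_priority({
    "environment_weight": 2.0,   # e.g. production vs. staging
    "workload_priority": 1.5,
    "data_sensitivity": 3.0,
})
print(score)  # 9.0
```

A multiplicative combination has the property that a zero or near-zero value on any one parameter suppresses the whole score, which distinguishes it from the additive combination of claim 5.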
Regarding claim 10, Elgressy- Marsenic teaches: The computer-implemented method of claim 1, wherein the detection likelihood score is based on a likelihood score, wherein the likelihood score includes a calculation of likelihood parameters scores (Marsenic, [0194] “For example, the analyzer module may rank supported candidate cyber threat hypotheses by a combo of likelihood that this candidate cyber threat hypothesis is supported and a severity threat level of this incident type.” The reasons of obviousness have been noted in the rejection of claim 7 above and applicable herein). Regarding claim 17, claim 17 is rejected for similar reasoning and rationale as claim 10, Elgressy also teaches: A computing apparatus ((col. 3, lines 35-38) “Embodiments of the disclosure are directed to systems, apparatuses, and methods for more effectively preparing for and responding to cybersecurity threats directed at people or at groups of people.”). Conclusion The prior art made of record and not relied upon is considered pertinent to the applicant’s disclosure: ANDRIUKHIN; Evgenii US-20250021657-A1 SYSTEMS AND METHODOLOGIES FOR AUTO LABELING VULNERABILITIES ([AB] “A system and methodology for automated security assessment of microservices includes a microservice composition model, data gathering, security assessment, and labeling components. It treats microservices as separate projects, collecting source code, dependencies, and runtime information. The security assessment employs tools to analyze code, track vulnerabilities, and identify risks. Predefined rules categorize microservices and assign security state labels. 
A hidden Markov model predicts security states based on historical data, enabling proactive security management and risk mitigation.”)

Brown; Joseph US-12149558-B1 Cybersecurity Architectures For Multi-contextual Risk Quantification ([AB] “The present disclosure relates to cybersecurity architectures and systems for assessing and quantifying security threats and risks associated with machine-readable codes, such as quick response codes, barcodes, data matrix codes, and other types of codes. A security application comprises a multi-context threat assessment system configured to analyze a broad spectrum of risk assessment attributes across multiple contexts. These contexts relate to the machine-readable code itself, target network resources identified by the code, entities affiliated with the code, end-users interacting with the code, and enterprise systems policies. The system can evaluate various risk assessment attributes for each of these contexts to more accurately quantify potential security risks associated with the machine-readable codes. The security application further includes an API for extending its threat assessment capabilities to various digital ecosystems and an AI-powered learning network comprising language models and computer vision systems to enhance threat detection and risk quantification capabilities.”)

Meshi; Yinnon US-20230224311-A1 NETWORK ADAPTIVE ALERT PRIORITIZATION SYSTEM ([AB] “A method, including receiving, from multiple sources, respective sets of incidents, and respective suspiciousness labels for the incidents. A set of rules are applied so as to assign training labels to respective incidents in a subset of the incidents in the received sets. For each given incident in the subset, the respective training label is compared to the respective suspiciousness label so as to compute a respective quality score for each given source.
Any sources having respective label quality scores meeting a predefined criterion are identified, and a model for computing predicted labels is fit to the incidents received from the identified sources and the respective suspiciousness labels of the incidents. The model is applied to an additional incident received from one of the sources to compute a predicted label for the additional incident, and a notification of the additional incident is prioritized in response to the predicted label.”)

Amar; Shmuel US-11640470-B1 System And Methods For Reducing An Organization's Cybersecurity Risk By Determining The Function And Seniority Of Employees ([AB] “Systems, methods, and apparatuses directed to implementations of an approach and techniques for more effectively preparing for, detecting, and responding to cybersecurity threats directed at people or at groups of people. Embodiments are directed to classifying or segmenting employees by “predicting” what are believed to be two attributes of an employee that contribute to making them at a higher risk of being a target of a cybersecurity attack. These attributes are the employee's seniority level (e.g., employee, contractor, manager, executive, board member) and the employee's primary function or role in an organization (e.g., HR, Legal, Operations, Finance, Marketing, Sales, R&D, etc.).”)

Hicks; Raymond US-20220035929-A1 EVALUATING A SYSTEM ASPECT OF A SYSTEM ([AB] “Operating an analysis system that includes one or more computing entities by installing a system user interface module on a system to be evaluated, installing a data extraction module on the system to be evaluated and establishing at least one secure data pipeline between the analysis system and the system to be evaluated to service communications between the analysis system and the system user interface module and the analysis system and the data extraction module.
The method further includes determining analysis parameters for the system to be evaluated, identifying assets of the system to be evaluated based upon the analysis parameters, receiving data from the data extraction module, via the at least one secure data pipeline and based upon the analysis parameters, the data regarding the assets of the system to be evaluated, and processing the data based upon the analysis parameters to produce one or more evaluation outputs.”)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kamryn Gillespie, whose telephone number is 703-756-5498. The examiner can normally be reached Monday through Thursday from 9am to 6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Linglan Edwards, can be reached at (571) 270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.J.G./Examiner, Art Unit 2408 /LINGLAN EDWARDS/Supervisory Patent Examiner, Art Unit 2408
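Among the cited-but-not-relied-upon references, Meshi's adaptive alert prioritization turns on a per-source label-quality step: rule-assigned training labels are compared against each source's own suspiciousness labels, and only sources meeting a quality criterion feed the prediction model. A minimal sketch of that filtering step, assuming a simple agreement-rate metric and a hypothetical 0.75 criterion (field names and sources are illustrative, not from the reference):

```python
def source_quality(incidents: list) -> float:
    """Fraction of a source's incidents whose rule-assigned training
    label agrees with the suspiciousness label the source reported."""
    matches = sum(1 for inc in incidents
                  if inc["training_label"] == inc["suspicious_label"])
    return matches / len(incidents)

# Hypothetical per-source incident subsets with both label types.
sources = {
    "edr":   [{"training_label": 1, "suspicious_label": 1},
              {"training_label": 0, "suspicious_label": 0}],
    "proxy": [{"training_label": 1, "suspicious_label": 0},
              {"training_label": 0, "suspicious_label": 0}],
}

QUALITY_THRESHOLD = 0.75  # stand-in for the "predefined criterion"
trusted = [name for name, incs in sources.items()
           if source_quality(incs) >= QUALITY_THRESHOLD]
print(trusted)  # ['edr'] — only high-agreement sources train the model
```

In Meshi's scheme, the label-prediction model would then be fit only to incidents from `trusted` sources, so low-quality feeds cannot skew alert prioritization.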

Prosecution Timeline

Apr 28, 2023
Application Filed
Feb 19, 2025
Non-Final Rejection — §102, §103
Apr 10, 2025
Interview Requested
Apr 23, 2025
Examiner Interview Summary
Apr 23, 2025
Applicant Interview (Telephonic)
May 21, 2025
Response Filed
Jul 15, 2025
Final Rejection — §102, §103
Sep 22, 2025
Response after Non-Final Action
Oct 15, 2025
Request for Continued Examination
Oct 23, 2025
Response after Non-Final Action
Oct 28, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596795
DETECTING A CURRENT ATTACK BASED ON SIGNATURE GENERATION TECHNIQUE IN A COMPUTERIZED ENVIRONMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12596796
Self-synchronous Side-Channel Attack Countermeasure
2y 5m to grant Granted Apr 07, 2026
Patent 12554859
GENERATING 3-DIMENSIONAL MODELS AND CONNECTIONS TO PROVIDE VULNERABILITY CONTEXT
2y 5m to grant Granted Feb 17, 2026
Patent 12518004
MITIGATING POINTER AUTHENTICATION CODE (PAC) ATTACKS IN PROCESSOR-BASED DEVICES
2y 5m to grant Granted Jan 06, 2026
Patent 12511376
METHOD, SYSTEM, AND TECHNIQUES FOR PREVENTING ANALOG DATA LOSS
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+50.0%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
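The projections above are stated to derive from the examiner's career allow rate. Assuming the 16-granted-of-22-resolved figure shown in the examiner intelligence panel, and assuming (this is a guess about the dashboard's method, not documented) that the interview lift is applied multiplicatively and capped at 99%, the headline numbers reduce to simple ratios:

```python
granted, resolved = 16, 22            # from the examiner intelligence panel
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")            # 73% — the Grant Probability shown

# Hypothetical combination of the stated +50.0% interview lift:
with_interview = min(allow_rate * 1.5, 0.99)
print(f"{with_interview:.0%}")        # 99% — matches "With Interview"
```

With a sample of only 22 resolved cases, the 73% point estimate carries a wide confidence interval, which is worth keeping in mind when weighing these projections.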
