Prosecution Insights
Last updated: April 19, 2026
Application No. 18/618,962

MULTI-STAGE ANOMALY DETECTION FOR PROCESS CHAINS IN MULTI-HOST ENVIRONMENTS

Non-Final OA — §102, §103
Filed
Mar 27, 2024
Examiner
ABDULLAH, SAAD AHMAD
Art Unit
2431
Tech Center
2400 — Computer Networks
Assignee
Darktrace Holdings Limited
OA Round
1 (Non-Final)
77%
Grant Probability
Favorable
1-2
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 77% — above average
77%
Career Allow Rate
54 granted / 70 resolved
+19.1% vs TC avg
Strong +35% interview lift
+35.1%
Interview Lift
resolved cases with interview
Typical timeline
3y 1m
Avg Prosecution
42 currently pending
Career history
112
Total Applications
across all art units

Statute-Specific Performance

§101
4.9%
-35.1% vs TC avg
§103
61.6%
+21.6% vs TC avg
§102
19.6%
-20.4% vs TC avg
§112
6.6%
-33.4% vs TC avg
Black line = Tech Center average estimate • Based on career data from 70 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to the application 18/618,962 filed on 03/27/2024. Claims 20-36 have been examined and are pending in this application.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 28 is rejected under 35 U.S.C. 
102(a)(2) as being anticipated by Chen (US 2016/0330226 A1).

Regarding Claim 28

Chen discloses: A non-transitory computer readable medium including software modules that comprise one or more instructions executable by one or more processors operating as a host endpoint agent configured to detect one or more potential cyber threats on an endpoint computing device (Chen ¶20–23: teaches an Automatic Security Intelligence system with an agent installed on each endpoint that collects process and network data for detecting cyber threats.), comprising: an analyzer module configured to generate one or more anomaly scores for a potential cyber threat detected on the endpoint computing device, wherein the analyzer module includes a multi-stage anomaly detector that comprises a plurality of anomaly detectors including at least a first stage of anomaly detectors to generate a first anomaly score and a second stage of anomaly detectors to generate a second anomaly score (Chen ¶24, 42–45: discloses multiple anomaly detection modules that analyze different host-level events to generate initial anomaly scores (first stage) and a malicious process path discovery module that aggregates and refines those results using a random-walk model to produce a final normalized anomaly score (second stage).); a collections module configured to monitor and collect pattern of life data of one or more software processes executing on the endpoint computing device and one or more users of the endpoint computing device, wherein the pattern of life data, including at least one or more of metadata, events, and alerts, is provided to a cyber security appliance that is installed on a network communicatively coupled to the endpoint computing device and comprises at least one or more machine-learning models to analyze the pattern of life data for the host endpoint agent communicatively coupled to the cyber security appliance (Chen ¶20–28, 42–44: disclose that each endpoint includes an agent installed on 
the host to collect process, file and user-behavior telemetry, representing pattern-of-life information for the endpoint. The agent transmits this operational data to the backend servers and analysis server that perform intrusion-detection analysis using learning-based statistical models within the host-level analysis module.), wherein a cyber threat module is configured in the host endpoint agent or the cyber security appliance to reference one or more machine-learning models to analyze the collected pattern of life data to detect a potential cyber threat of the one or more potential cyber threats upon the collected pattern of life data deviating from normal pattern of life data for that endpoint computing device (Chen ¶20–22, 24, 27–28, 42–46: discloses that a cyber-threat detection module within the host-level analysis system analyzes the operational data collected by the endpoint agent using ML-based statistical models implemented in the host-level analysis module or the analysis server. The module learns behavioral profiles of processes, files, sockets, and users derived from normal system activity and applies a random-walk-based anomaly scoring model with Box-Cox normalization to determine deviation from the patterns. When the observed behaviors deviate from the expected baseline, the system detects and reports potential cyber threats.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. 
Patentability shall not be negated by the manner in which the invention was made.

Claims 20, 24-25, 29-31 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 2016/0330226 A1) in view of Sadrieh (US 11,294,756 B1).

Regarding Claim 20

Chen discloses: A multi-stage anomaly detector deployed within a host endpoint agent configured to detect at least a potential cyber threat on an endpoint computing device (Chen ¶20–23: teaches an Automatic Security Intelligence system with an agent installed on each endpoint that collects process and network data for detecting cyber threats.), the multi-stage anomaly detector comprising: a first stage of anomaly detectors including a symbol frequency anomaly detector configured to (i) analyze a first process chain of parameters to estimate how often a process associated with the first process chain has been executed on the endpoint computing device and (ii) generate a first anomaly score (Chen ¶23–24, 30, 36, 42: discloses that the host-level analysis module 43 includes process-to-file and user-to-process anomaly detection components that model each process’s normal execution behavior by analyzing operational data collected from the endpoint agent. The system constructs a directed process graph, where each edge stores timestamps representing the number of times a given process event or chain of processes has executed, thereby quantifying execution frequency on the endpoint. 
Using this historical process-chain data, a random-walk algorithm computes sender and receiver probabilities to evaluate behavioral deviation, generating an anomaly score that reflects how abnormal a process's observed frequency is compared to its expected baseline.); and a second stage of anomaly detectors including a jump frequency anomaly detector configured to (i) analyze a second process chain of parameters to estimate how often a particular process launches and then accesses or launches another process and (ii) generate a second anomaly score (Chen ¶24, 27–38, 42–44: discloses a malicious process path discovery module that analyzes sequences of process interactions across a time window by combining prior and current event data to identify abnormal process chains. The system constructs a multipartite graph in which each edge corresponds to an information flow between processes, files, or sockets, and calculates a transition probability matrix to estimate how often one process launches or accesses another. Using these transition probabilities, Chen applies a random-walk computation and normalization via a Box-Cox power transformation to generate an anomaly score for each process path, showing the abnormality of process-to-process jump frequencies relative to normal system behavior.), (Chen ¶46: discloses that once a suspicious path has been detected, the host-level analysis module provides information regarding the anomaly, generating a report that may include one or more alerts, and that the visualization module outputs these detection results to end users.). 
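The transition-matrix and Box-Cox scoring the rejection attributes to Chen can be illustrated with a small, self-contained sketch. The event counts, chain choices, and lambda value below are hypothetical illustrations, not values from the reference:

```python
import numpy as np

# Hypothetical process-event counts: counts[i, j] = number of times
# process i launched or accessed process j, as stored on the edges
# of an endpoint process graph (values invented for illustration).
counts = np.array([
    [0, 40, 2, 0],
    [5, 0, 30, 1],
    [0, 3, 0, 25],
    [1, 0, 4, 0],
], dtype=float)

# Row-normalize into a transition probability matrix: one random-walk
# step moves from process i to process j with probability P[i, j].
P = counts / counts.sum(axis=1, keepdims=True)

def chain_probability(chain):
    """Probability of a process chain under the random-walk model:
    the product of its jump probabilities. Rare jumps drive the
    probability down, marking the chain as more anomalous."""
    return float(np.prod([P[a, b] for a, b in zip(chain, chain[1:])]))

def boxcox(x, lam=0.25):
    """Box-Cox power transform, used here to pull heavily skewed path
    probabilities onto a comparable scale (lambda chosen arbitrarily)."""
    return (x ** lam - 1.0) / lam

common = chain_probability([0, 1, 2, 3])  # frequently observed path
rare = chain_probability([3, 0, 2, 1])    # rarely observed path
assert boxcox(rare) < boxcox(common)      # rare path scores as more anomalous
```

Because Box-Cox is monotonic for positive inputs, it does not change which chain is rarest; it only makes skewed path probabilities easier to threshold on a common scale.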
In an analogous art, Sadrieh discloses a weighted combination anomaly score system/method that includes: wherein a weighted combination of the first anomaly score and the second anomaly score is used to produce a combined anomaly score that is correlated to a likelihood that the potential cyber threat is maliciously harmful for the endpoint computing device (Sadrieh Column 2, Line 54 - Column 3, Line 3 and Column 5, Lines 12-33: disclose computing an anomaly score by inputting reconstruction probabilities from a Variational Autoencoder (VAE) into a Random Isolation Forest (RIF) layer to produce a final combined anomaly score that reflects a weighted relationship between multiple intermediate anomaly detections.).

Given the teachings of Sadrieh, a person having ordinary skill in the art would have recognized the desirability of modifying Chen's multi-stage anomaly detector to generate a weighted combination of stage outputs as a unified anomaly score correlated with malicious likelihood. Sadrieh teaches combining outputs of a VAE and RIF to produce a final probabilistic anomaly score representing the likelihood of a harmful event (Sadrieh Column 2, Line 54 - Column 3, Line 3 and Column 5, Lines 12-33). It would have been obvious to apply Sadrieh’s fusion approach to Chen’s stage outputs to yield a single likelihood-based threat score. 
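The weighted score fusion that the Chen/Sadrieh combination is said to render obvious reduces to a convex combination of the stage scores; the weights and example scores below are illustrative only:

```python
def combined_score(first: float, second: float,
                   w1: float = 0.6, w2: float = 0.4) -> float:
    """Weighted combination of two stage anomaly scores, each in [0, 1].

    The weights are illustrative placeholders; a deployed detector
    would tune or learn them against labeled benign/malicious telemetry.
    """
    assert abs(w1 + w2 - 1.0) < 1e-9, "weights should sum to 1"
    return w1 * first + w2 * second

# First stage sees mildly unusual execution frequency; second stage sees
# a highly unusual process-to-process jump. The fused score reflects both.
score = combined_score(first=0.3, second=0.9)  # -> 0.54
```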
Regarding Claim 24

Chen discloses: The multi-stage anomaly detector of claim 20, wherein at least the first stage of the anomaly detectors is configured to use one or more computational processes and factors that are different from at least one computational process used by the second stage of the anomaly detectors (Chen ¶23–24, 27–38: discloses that the first-stage detectors perform frequency-based process and user behavior analysis, while the second-stage malicious process path discovery module uses a graph-based random-walk model to analyze process chains.), Sadrieh further discloses a weighted combination anomaly score system/method that includes: and wherein different computational processes and factors are implemented by each of the first stage of the anomaly detectors and the second stage of the anomaly detectors to ultimately form a weighted combination of the first anomaly score and the second anomaly score (Sadrieh Column 2, Line 54 - Column 3, Line 64 and Column 5, Lines 12-33: discloses a first-stage VAE that generates probabilistic latent variables and reconstruction probabilities, and a second-stage RIF that consumes those probabilities to compute a final anomaly score. Because the VAE and RIF use different computational processes and the RIF derives its score from the weighted probabilistic outputs of the VAE, Sadrieh teaches forming a weighted combination of first- and second-stage anomaly scores correlated with anomaly likelihood.).

Given the teachings of Sadrieh, a person having ordinary skill in the art would have recognized the desirability of modifying the teachings of Chen to employ distinct computational processes whose outputs are fused into a weighted anomaly score. Sadrieh discloses a first-stage VAE generating probabilistic latent variables and a second-stage RIF using those probabilistic outputs to compute a final anomaly score; combining neural and statistical detection stages improves accuracy and temporal sensitivity. 
It would have been obvious to implement such differing computational stages within a multi-stage detector to form a weighted combination of stage-specific scores correlated with threat likelihood, thereby enhancing anomaly-detection precision (Sadrieh Column 2, Line 54 - Column 3, Line 64 and Column 5, Lines 12-33).

Regarding Claim 25

Sadrieh further discloses a weighted combination anomaly score system/method that includes: The multi-stage anomaly detector of claim 20 further comprising: third stage of anomaly detectors including a neural network anomaly detector to analyze two or more details of how a process is interacting with other processes or resources on the endpoint computing device (Sadrieh Column 2, Line 54 - Column 3, Line 64: discloses a neural-network-based anomaly detector (VAE + LSTM) that models temporal and behavioral interactions among multiple network processes and resources, and outputs a probabilistic anomaly score that is further processed by a second-stage RIF layer.).

Given the teachings of Sadrieh, a person of ordinary skill in the art would have recognized the desirability of modifying the teachings of Chen by incorporating a neural network anomaly detector as an additional stage in a multi-stage anomaly detection system to analyze detailed behavioral interactions among processes and resources. Sadrieh discloses that the VAE with LSTM layers models short- and long-term dependencies in time-series data to capture complex process interactions and that the VAE’s probabilistic reconstruction output is provided to a RIF to compute a final anomaly score. It would have been obvious to employ such a neural network stage as part of a hierarchical detector to enhance detection accuracy by learning temporal dependencies and contextual relationships between process behaviors and system resources, thereby improving multi-stage anomaly correlation and reducing false positives (Sadrieh Column 2, Line 54 - Column 3, Line 64). 
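As a rough illustration of reconstruction-error anomaly detection of the kind attributed to Sadrieh's VAE stage, the sketch below substitutes a linear PCA reconstruction for the actual VAE + LSTM architecture; the feature dimensions and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors describing process interactions
# (e.g., counts of file opens, socket writes, child-process spawns).
normal = rng.normal(0.0, 1.0, size=(200, 6))

# Fit a linear "autoencoder": keep the top-2 principal components of
# the baseline data and reconstruct through them. This stands in for
# the learned nonlinear reconstruction of a VAE, not a replica of it.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # 2-dimensional latent space

def reconstruction_error(x):
    """Anomaly score: how poorly the baseline model reconstructs x."""
    latent = (x - mean) @ components.T
    recon = latent @ components + mean
    return float(np.linalg.norm(x - recon))

typical = reconstruction_error(normal[0])        # drawn from baseline
outlier = reconstruction_error(np.full(6, 8.0))  # far from baseline
assert outlier > typical
```

The design point carries over: behavior the model has learned to reconstruct scores low, while behavior outside the learned baseline produces a large residual that a downstream stage (the RIF, in Sadrieh's characterization) can consume.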
Regarding Claim 29

Sadrieh further discloses a weighted combination anomaly score system/method that includes: The non-transitory computer readable medium of claim 28, further comprising: an autonomous response module configured to cause one or more actions to be taken autonomously to contain the detected potential cyber threat, wherein the one or more actions are configured to be triggered when a likelihood of a combined anomaly score, generated based on the first anomaly score and the second anomaly score, satisfies a predetermined threshold that indicates a high likelihood that the detected potential cyber threat is malicious (Sadrieh Column 2, Line 54 - Column 3, Line 3 and Column 5, Lines 12-33: disclose computing an anomaly score by inputting reconstruction probabilities from a Variational Autoencoder (VAE) into a Random Isolation Forest (RIF) layer to produce a final, combined anomaly score that reflects a weighted relationship between multiple intermediate anomaly detections. This final score, ranging from 0–1, represents the likelihood that a detected anomaly is a malicious or harmful cyber threat.).

Given the teachings of Sadrieh, a person having ordinary skill in the art would have recognized the desirability of modifying Chen’s multi-stage anomaly detector to generate a weighted combination of stage outputs as a unified anomaly score correlated with malicious likelihood. Sadrieh teaches combining outputs of a VAE and RIF to produce a final probabilistic anomaly score representing the likelihood of a harmful event (Sadrieh Column 2, Line 54 - Column 3, Line 3 and Column 5, Lines 12-33). It would have been obvious to apply Sadrieh’s fusion approach to Chen’s stage outputs to yield a single likelihood-based threat score. 
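The claim 29 limitation reads on a simple threshold gate over the combined score; the 0.8 threshold and the action names in this sketch are placeholders, not values drawn from the application or the cited art:

```python
def autonomous_response(combined_score: float, threshold: float = 0.8):
    """Gate containment actions on the combined anomaly score.

    Both the threshold and the action names are illustrative
    placeholders for whatever a real response module would invoke.
    """
    if combined_score >= threshold:
        return ["isolate_host", "kill_process", "alert_soc"]
    return []  # below threshold: log only, take no autonomous action

assert autonomous_response(0.92) == ["isolate_host", "kill_process", "alert_soc"]
assert autonomous_response(0.41) == []
```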
Regarding Claim 30

Sadrieh further discloses a weighted combination anomaly score system/method that includes: The non-transitory computer readable medium of claim 29, wherein the cyber threat module is configured in the host endpoint agent and is further configured to generate the combined anomaly score based on a real-time analysis of the collected pattern of life data deviating from the normal pattern of life data for that endpoint computing device (Sadrieh Column 2, Line 54 - Column 3, Line 3 and Column 5, Lines 12-33: discloses an anomaly-detection module that receives real-time time-series data representing normal operational behavior. Sadrieh describes using a variational autoencoder to model deviation from normal patterns and employs a RIF to generate a combined anomaly score indicating a threat.).

Given the teachings of Sadrieh, a person having ordinary skill in the art would have recognized the desirability of modifying Chen’s multi-stage anomaly detector to generate a weighted combination of stage outputs as a unified anomaly score correlated with malicious likelihood. Sadrieh teaches combining outputs of a VAE and RIF to produce a final probabilistic anomaly score representing the likelihood of a harmful event (Sadrieh Column 2, Line 54 - Column 3, Line 3 and Column 5, Lines 12-33). It would have been obvious to apply Sadrieh’s fusion approach to Chen’s stage outputs to yield a single likelihood-based threat score.

Regarding Claim 31

Claim 31 is directed to a method corresponding to the system in claim 20. Claim 31 is similar in scope to claim 20 and is therefore rejected under similar rationale.

Regarding Claim 35

Claim 35 is directed to a method corresponding to the system in claim 24. Claim 35 is similar in scope to claim 24 and is therefore rejected under similar rationale.

Claims 21-23 and 32-34 are rejected under 35 U.S.C. 
103 as being unpatentable over Chen (US 2016/0330226 A1), in view of Sadrieh (US 11,294,756 B1), and in further view of Karasaridis (US 2020/0195669 A1).

Regarding Claim 21

In an analogous art, Karasaridis discloses a severeness factor system/method that includes: The multi-stage anomaly detector of claim 20, wherein the combined anomaly score is determined based on one or more interest-level factors comprising at least one or more of a level of interest factor and an estimation level of severeness factor of an impact of the potential cyber threat could have on the endpoint computing device (Karasaridis ¶25–26: discloses computing a combined anomaly or reputation score based on weighted contributions reflecting the severity and type of detected anomalous behavior, such as assigning higher negative weights (−10, −5) for more severe anomalies and using thresholds to reclassify entities as “good,” “unknown,” or “bad”.).

Given the teachings of Karasaridis, a person having ordinary skill in the art would have recognized the desirability of modifying the teachings of Chen and Sadrieh to incorporate weighted severity-based scoring for combining anomaly outputs. Karasaridis discloses assigning weighted contributions (e.g., −10, −5) to anomalies based on their type and severity and computing an overall score that correlates with the likelihood of malicious impact. It would have been obvious to apply a weighted scoring mechanism to multi-stage anomaly scores to generate a single combined anomaly score reflecting both anomaly magnitude and threat severity, thereby improving prioritization and triage of detected cyber threats in a predictable manner (Karasaridis ¶25–26). 
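The severity-weighted reclassification the rejection cites from Karasaridis can be sketched as follows; the anomaly labels and the good/unknown/bad cutoffs are illustrative stand-ins, with only the −10/−5 weights echoing the cited paragraphs:

```python
# Severity weights per anomaly type, echoing the negative weights
# (-10, -5) the rejection cites from Karasaridis; the labels, the -2
# weight, and the cutoffs below are invented for illustration.
SEVERITY_WEIGHTS = {"beaconing": -10, "port_scan": -5, "odd_login": -2}

def reputation(observed_anomalies, base: int = 0):
    """Sum weighted anomaly contributions, then reclassify the entity."""
    score = base + sum(SEVERITY_WEIGHTS.get(a, 0) for a in observed_anomalies)
    if score <= -10:
        return score, "bad"
    if score < 0:
        return score, "unknown"
    return score, "good"

assert reputation(["beaconing", "port_scan"]) == (-15, "bad")
assert reputation(["odd_login"]) == (-2, "unknown")
assert reputation([]) == (0, "good")
```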
Regarding Claim 22

Chen discloses: The multi-stage anomaly detector of claim 21 is deployed within the host endpoint agent configured to use one or more machine-learning models to analyze a collected pattern of life data for the endpoint computing device against a normal pattern of life data for the endpoint computing device to provide the determination of the combined anomaly score for the potential cyber threat for that endpoint computing device (Chen ¶20–28, 42–44: discloses that each endpoint includes an agent installed on the host to collect process, file, and socket telemetry, and that the host-level analysis module employs multiple anomaly detectors that model normal behavioral roles and apply learning-based statistical methods to generate an anomaly score for each process path.).

Regarding Claim 23

Chen discloses: The multi-stage anomaly detector of claim 22, wherein at least one level of the interest factor is used to indicate a degree of difference of a behavior pattern of the potential cyber threat from a normal behavior pattern of life for the endpoint computing device (Chen ¶44–46: discloses determining how far suspicious behavior deviates from normal by calculating statistical deviation measures such as Box-Cox normalization and t-values between suspicious and normal paths. This quantifies the difference between a potential cyber threat and normal behavior, effectively serving as an interest-level factor indicating deviation from normal activity. Once a suspicious path has been detected, the host-level analysis module provides information regarding the anomaly, generating a report that may include one or more alerts.).

Regarding Claim 32

Claim 32 is directed to a method corresponding to the system in claim 21. Claim 32 is similar in scope to claim 21 and is therefore rejected under similar rationale.

Regarding Claim 33

Claim 33 is directed to a method corresponding to the system in claim 22. 
Claim 33 is similar in scope to claim 22 and is therefore rejected under similar rationale.

Regarding Claim 34

Claim 34 is directed to a method corresponding to the system in claim 23. Claim 34 is similar in scope to claim 23 and is therefore rejected under similar rationale.

Claims 26-27 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 2016/0330226 A1), in view of Sadrieh (US 11,294,756 B1), and in further view of Kukreja (US 2019/0294485 A1).

Regarding Claim 26

In an analogous art, Kukreja discloses an anomaly detector system/method that includes: The multi-stage anomaly detector of claim 25, wherein the neural network anomaly detector further comprises a recurrent neural network anomaly detector, wherein the analyzer module comprises a controller having one or more threshold parameters, wherein the threshold parameters comprise one or more urgency parameters are configured as one or more static/dynamic thresholds (Kukreja ¶27, 50–54; FIGs. 2, 4–5: discloses a neural-network-based anomaly detector that uses a predictive scoring model to evaluate time-series telemetry data and determine prediction errors. The analyzer module applies both static thresholds and dynamic thresholds to decide when anomalies occur, functioning as a controller with urgency parameters governing sensitivity and timing.), and wherein the urgency parameters are configured to govern a time duration for each of the first, second, third stages of the anomaly detectors, and to establish the thresholds in order to move from one anomaly score of such detectors to another anomaly score of such detectors in a hierarchy configuration (Kukreja ¶38–40, 53–54; FIGs. 3A–3C & 5: discloses a hierarchical anomaly detection framework where anomalies detected at lower “pivot” levels are rolled up through successive layers based on severity and persistence. 
The system applies time-based thresholds that determine when a prediction error persists long enough to escalate to a higher anomaly stage, effectively governing the time duration and transitions between successive anomaly scores.).

Given the teachings of Kukreja, a person of ordinary skill in the art would have recognized the desirability of modifying the teachings of Chen and Sadrieh by implementing a hierarchical, time-governed anomaly detection system where progression between detection stages is based on the persistence and severity of anomalies. Kukreja discloses that anomalies detected at lower levels (pivots) are rolled up through successive layers based on severity over a predetermined time period, with escalation occurring when prediction errors exceed threshold differences. It would have been obvious to employ such time-based, threshold-controlled escalation (“urgency parameters”) to govern the duration and transition between successive anomaly detection stages, thereby improving the responsiveness and accuracy of multi-stage anomaly scoring within a hierarchical configuration (Kukreja ¶38–40, 53–54; FIGs. 3A–3C & 5).

Regarding Claim 27

In an analogous art, Kukreja discloses an anomaly detector system/method that includes: The multi-stage anomaly detector of claim 26, wherein the first anomaly score associated with the first stage of anomaly detectors and the second anomaly score associated with the second stage of the anomaly detectors exceed established thresholds of urgency parameters to be capable of moving and generating the third anomaly score of the third stage of the anomaly detectors (Kukreja ¶27, 37–44, 50–54; FIGs. 3A–3C: discloses a hierarchical anomaly detection system in which lower-level anomaly scores (leaf or dimension nodes) that exceed defined thresholds or persist for a set time are rolled up into higher-level aggregate scores, forming a multi-stage escalation chain. 
The prediction-error thresholds and persistence windows function as urgency parameters governing when a stage advances, producing successive anomaly scores across the hierarchy. Thus, Kukreja teaches that first and second stage anomaly scores trigger generation of a higher-level (third-stage) anomaly score once their threshold conditions are met.).

Given the teachings of Kukreja, a person of ordinary skill in the art would have recognized the desirability of modifying the teachings of Chen and Sadrieh by applying multi-stage hierarchical anomaly scoring in order to improve localization and escalation of anomalous events. The prediction-error thresholds and persistence windows function as urgency parameters governing when a stage advances, producing successive anomaly scores across the hierarchy. This improves the responsiveness and accuracy of anomaly localization by enabling hierarchical escalation based on urgency thresholds (Kukreja ¶27, 37–44, 50–54; FIGs. 3A–3C).

Regarding Claim 36

Claim 36 is directed to a method corresponding to the system in claim 27. Claim 36 is similar in scope to claim 27 and is therefore rejected under similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

ABBASZADEH US 2020/0244677 A1 - teaches a hierarchical system for detecting and localizing abnormalities in a cyber-physical system by analyzing monitoring node data across multiple levels of a hierarchy. It uses feature vectors and decision boundaries to identify which subsystem or node exhibits abnormal behavior, refining the search from global to local levels. The hierarchy can be created using knowledge-based, data-driven, or hybrid methods, and the model is trained using normal and abnormal operation data to automatically generate detection boundaries. 
Huang US 2020/0293653 A1 - teaches detecting abnormal or malicious system call sequences using a trained recurrent neural network (LSTM) that predicts expected calls based on embeddings of system call features and arguments. Deviations between predicted and observed calls trigger anomaly alerts or automated responses such as quarantining or blocking processes.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAAD ABDULLAH whose telephone number is 571-272-1531. The examiner can normally be reached on Monday-Friday 9am-5pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, LYNN FEILD, can be reached on 571-272-2092.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAAD AHMAD ABDULLAH/
Examiner, Art Unit 2431

/LYNN D FEILD/
Supervisory Patent Examiner, Art Unit 2431

Prosecution Timeline

Mar 27, 2024
Application Filed
Feb 20, 2025
Response after Non-Final Action
Mar 27, 2025
Response after Non-Final Action
Nov 12, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603895
PACKET METADATA CAPTURE IN A SOFTWARE-DEFINED NETWORK
2y 5m to grant Granted Apr 14, 2026
Patent 12592961
QUANTUM-BASED ADAPTIVE DEEP LEARNING FRAMEWORK FOR SECURING NETWORK FILES
2y 5m to grant Granted Mar 31, 2026
Patent 12580886
Network security gateway onboard an aircraft to connect low and high trust domains of an avionics computing infrastructure
2y 5m to grant Granted Mar 17, 2026
Patent 12554871
SYSTEMS, METHODS, AND COMPUTER-READABLE MEDIA FOR SECURE AND PRIVATE DATA VALUATION AND TRANSFER
2y 5m to grant Granted Feb 17, 2026
Patent 12554832
AUTOMATED LEAST PRIVILEGE ASSIGNMENT
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+35.1%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 70 resolved cases by this examiner. Grant probability derived from career allow rate.
