Prosecution Insights
Last updated: April 19, 2026
Application No. 18/236,638

Heterogeneous Processor With High-Speed Decision Tree Scheduler

Non-Final OA §103

Filed: Aug 22, 2023
Examiner: CAO, DIEM K
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: Wisconsin Alumni Research Foundation
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% — above average (531 granted / 663 resolved; +25.1% vs TC avg)
Interview Lift: +19.4% for resolved cases with interview
Avg Prosecution: 3y 7m (typical timeline)
Currently Pending: 29
Total Applications: 692 across all art units
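The headline figures above are mutually consistent. A quick sanity check, assuming (as the projections footer suggests) that the with-interview probability is simply the career allow rate plus the interview lift:

```python
# Sanity check of the dashboard figures above. The derivation is an
# assumption about how the page computes them, not a documented formula.
granted, resolved = 531, 663                  # career totals shown above
allow_rate = granted / resolved               # displayed as 80%

interview_lift = 0.194                        # +19.4% lift shown above
with_interview = allow_rate + interview_lift  # displayed as 99%

print(f"career allow rate: {allow_rate:.1%}")     # ~80.1%
print(f"with interview:    {with_interview:.1%}") # ~99.5%
```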

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 20.5% (-19.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 663 resolved cases

Office Action

§103
DETAILED ACTION

Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Vega et al. (US 2022/0004433 A1 – cited in the IDS) in view of Sharaff et al. (US 2023/0106318 A1).

As to claim 1, Vega teaches a computer architecture of a computer comprising (A heterogeneous SoC is a specialized computing system; paragraph [0020]): a plurality of heterogeneous processor cores having clusters of homogeneous processor cores (Heterogeneous SoC 100-1 can further include processing elements 108. Processing elements 108 can include a variety of types of processing elements.
For example, PE-1 140-1 can be a general purpose processor (e.g., CPU), PE-2 140-2 can be a graphics processing unit (GPU), and PE-3 140-3 can be a hardware accelerator. These processing elements 108 can be generically referred to as PEs 140. In addition to these broader categories of processing elements, each category of processing element can include various different models and/or brands of processing elements which may having correspondingly unique performance characteristics; paragraph [0032]); and a computer memory storing operating program instructions that when executed on the computer cause the computer to (Computer 900 includes memory 925, storage 930, interconnect 920 (e.g., BUS), one or more CPUs 905; paragraph [0071] and Each CPU 905 retrieves and executes programming instructions stored in memory 925 or storage 930; paragraph [0072]): (1) collect a set of feature values related to performance of the heterogeneous processor cores during up execution of application program instructions comprised of tasks (For example, learning agent 200 can be configured to perform machine learning on current and/or historical DAGs 112, critical paths 114, ranked tasks 116, the ready queue 130, the completed queue 136, heuristics 138, processing elements 108, and/or other information; paragraphs [0036], [0029]-[0030], and operation 404 includes dynamically receiving one or more DAGs 112 at the meta pre-processor 104. 
The DAGs 112 can be received dynamically insofar as each DAG 112 can be associated with a control flow graph 110 of an application 102, and the application 102 can implement a control flow graph 110 according to execution path dependencies and/or environmental triggers; paragraphs [0046] and [0058]); (2) identify a task of the application program instructions to be executed on a plurality of heterogeneous processor cores (Operation 406 includes determining ranked tasks 116 for one or more tasks in one or more received DAGs 112; paragraph [0047] and [0026]); (3) apply the feature values to a decision tree (the learning agent 200 can include any number of machine learning algorithms such as, but not limited to, natural language processing (NLP), natural language understanding (NLU), decision tree learning; paragraph [0035]) according to a function and the feature values to identify to a PE associated with a cluster (After training the learning agent 200, the learning agent 200 can ingest one or more of a DAG 112, a ready queue 130, completed queue 136, heuristics 138, and/or processing elements 108 and generate ranked tasks 116 and/or PE indicators 126. The ranked tasks 116 can then be provided to the scheduler 128; paragraph [0037] and Operation 406 includes determining ranked tasks 116 for one or more tasks in one or more received DAGs 112. In some embodiments, operation 406 includes determining ranked tasks 116 for at least one task in a critical path 114 of the DAG 112. Operation 406 can determine a rank in ranked tasks 116 by dividing a priority 118 of a task by the slack 120 in the DAG 112. The slack 120, meanwhile, can be calculated by subtracting a computational cost 124 from a sub-deadline 122. The sub-deadline 122 can be determined using a kernel execution time table 312. 
The slack 120 can be specific to a single node or summed for each node remaining in a critical path 114 of the DAG 112; paragraphs [0047], [0048] and [0065]); and (4) assign the task to the cluster identified by the identified PE (Operation 408 includes providing one or more of the ranked tasks 116 to the scheduler 128 for execution by one or more of the processing elements 108 according to the rank. In some embodiments, operation 408 includes sending a highest ranked, or a set of highest ranked tasks to the scheduler 128. In some embodiments, for each task sent to the scheduler 128, a PE indicator 126 corresponding to the task is also provided to the scheduler 128. In some embodiments, upon receiving the ranked tasks 116 (and optionally the PE indicators 126) the scheduler 128 can place the ranked tasks 116 in a ready queue 130 where each task 132 is associated with a PE assignment 134, and where the PE assignment 134 can be assigned by the scheduler 128 or based on the PE indicator 126 (if a PE indicator 126 is provided); paragraph [0050]).

Vega does not teach a decision tree providing a set of nodes selecting among branches to other nodes to identify to a leaf node associated with a cluster, and assign the task to the cluster identified by the identified leaf node. However, Sharaff teaches a decision tree providing a set of nodes selecting among branches to other nodes to identify to a leaf node associated with a cluster, and assign the task to the cluster identified by the identified leaf node (this is an example of a binary decision tree in which each node has two children. Other types of decision trees feature nodes with more than two children.
The first node 1402 in the decision tree is referred to as the “root node.” Terminal nodes in the decision tree, such as terminal node 1403, are referred to as “leaf nodes.” All other nodes of the decision tree, such as node 1404, are referred to as “internal nodes.” The root node and internal nodes each have two children while the leaf nodes have no children. The root nodes and internal nodes are each associated with a rule, such as the rule “A=a” 1405 in root node 1402. A VM characterization, comprising a set of attribute values for a set of attributes (1206 in FIG. 12) that characterize a virtual machine to be hosted on a cloud-computing facility or data center, is input to the root node and used to traverse the decision tree to a leaf node. The leaf node contains either an indication of one or more flavors, described below, that represent the true processing-bandwidth and memory needs of a virtual machine characterized by the VM characterization as determined from previous executions or hosting of VMs described by the VM characterization or an indication of predicted processing-bandwidth and memory needs for a virtual machine of a type that has not yet been executed; paragraphs [0056]-[0057], [0061]-[0062]).

Given the teaching of Vega regarding using a decision tree to identify a cluster of processor cores to execute a task, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Sharaff to the system of Vega because Sharaff teaches in detail how to select a leaf node from a decision tree that can perform the task, and the teaching employs machine learning to provide accurate estimates of the computational resources for the VMs of a distributed application (abstract).
As to claim 2, Vega as modified by Sharaff teaches the computer architecture of claim 1 wherein the operating program when executed on the computer further assigns the task to a processor core of the identified cluster according to an availability of the processor cores (see Vega: The slack 120, meanwhile, can be calculated by subtracting a computational cost 124 from a sub-deadline 122. The sub-deadline 122 can be determined using a kernel execution time table 312. The slack 120 can be specific to a single node or summed for each node remaining in a critical path 114 of the DAG 112; paragraph [0047] and The slack 120 can represent the availability of processing resources for successfully executing one or more tasks; paragraph [0028]).

As to claim 3, Vega as modified by Sharaff teaches the computer architecture of claim 1 wherein the feature values are selected from the group consisting of: a position of a task in a directed graph of the application, an application type, and an availability of processor cores within the clusters (see Vega: The meta pre-processor 104 can determine a sequence of ranked tasks 116 for each of the tasks in the DAG 112. Each rank in the ranked tasks 116 can be based on a priority 118 of the task and a slack 120 of the DAG 112. In some embodiments, a rank of the ranked tasks 116 is equal to the priority 118 divided by the slack 120; paragraph [0026] and [0057]-[0060]).

As to claim 11, Vega teaches a method of scheduling tasks on a computer architecture (a computer-implemented method of claim 1) having a plurality of heterogeneous processor cores having clusters of homogeneous processor cores (Heterogeneous SoC 100-1 can further include processing elements 108. Processing elements 108 can include a variety of types of processing elements. For example, PE-1 140-1 can be a general purpose processor (e.g., CPU), PE-2 140-2 can be a graphics processing unit (GPU), and PE-3 140-3 can be a hardware accelerator.
These processing elements 108 can be generically referred to as PEs 140. In addition to these broader categories of processing elements, each category of processing element can include various different models and/or brands of processing elements which may having correspondingly unique performance characteristics; paragraph [0032]), comprising: (1) collecting a set of feature values related to performance of the heterogeneous processor cores during up execution of application program instructions comprised of tasks (For example, learning agent 200 can be configured to perform machine learning on current and/or historical DAGs 112, critical paths 114, ranked tasks 116, the ready queue 130, the completed queue 136, heuristics 138, processing elements 108, and/or other information; paragraphs [0036], [0029]-[0030], and operation 404 includes dynamically receiving one or more DAGs 112 at the meta pre-processor 104. The DAGs 112 can be received dynamically insofar as each DAG 112 can be associated with a control flow graph 110 of an application 102, and the application 102 can implement a control flow graph 110 according to execution path dependencies and/or environmental triggers; paragraphs [0046] and [0058]); (2) identifying a task of the application program instructions to be executed on a plurality of heterogeneous processor cores (Operation 406 includes determining ranked tasks 116 for one or more tasks in one or more received DAGs 112; paragraph [0047] and [0026]); (3) applying the feature values to a decision tree (the learning agent 200 can include any number of machine learning algorithms such as, but not limited to, natural language processing (NLP), natural language understanding (NLU), decision tree learning; paragraph [0035]) according to a function and the feature values to identify to a PE associated with a cluster (After training the learning agent 200, the learning agent 200 can ingest one or more of a DAG 112, a ready queue 130, completed queue 136, 
heuristics 138, and/or processing elements 108 and generate ranked tasks 116 and/or PE indicators 126. The ranked tasks 116 can then be provided to the scheduler 128; paragraph [0037] and Operation 406 includes determining ranked tasks 116 for one or more tasks in one or more received DAGs 112. In some embodiments, operation 406 includes determining ranked tasks 116 for at least one task in a critical path 114 of the DAG 112. Operation 406 can determine a rank in ranked tasks 116 by dividing a priority 118 of a task by the slack 120 in the DAG 112. The slack 120, meanwhile, can be calculated by subtracting a computational cost 124 from a sub-deadline 122. The sub-deadline 122 can be determined using a kernel execution time table 312. The slack 120 can be specific to a single node or summed for each node remaining in a critical path 114 of the DAG 112.; paragraphs [0047], [0048] and [0065]); and (4) assigning the task to the cluster identified by the identified PE (Operation 408 includes providing one or more of the ranked tasks 116 to the scheduler 128 for execution by one or more of the processing elements 108 according to the rank. In some embodiments, operation 408 includes sending a highest ranked, or a set of highest ranked tasks to the scheduler 128. In some embodiments, for each task sent to the scheduler 128, a PE indicator 126 corresponding to the task is also provided to the scheduler 128. In some embodiments, upon receiving the ranked tasks 116 (and optionally the PE indicators 126) the scheduler 128 can place the ranked tasks 116 in a ready queue 130 where each task 132 is associated with a PE assignment 134, and where the PE assignment 134 can be assigned by the scheduler 128 or based on the PE indicator 126 (if a PE indicator 126 is provided); paragraph [0050]). 
Vega does not teach a decision tree providing a set of nodes selecting among branches to other nodes to identify to a leaf node associated with a cluster, and assigning the task to the cluster identified by the identified leaf node. However, Sharaff teaches a decision tree providing a set of nodes selecting among branches to other nodes to identify to a leaf node associated with a cluster, and assign the task to the cluster identified by the identified leaf node (this is an example of a binary decision tree in which each node has two children. Other types of decision trees feature nodes with more than two children. The first node 1402 in the decision tree is referred to as the “root node.” Terminal nodes in the decision tree, such as terminal node 1403, are referred to as “leaf nodes.” All other nodes of the decision tree, such as node 1404, are referred to as “internal nodes.” The root node and internal nodes each have two children while the leaf nodes have no children. The root nodes and internal nodes are each associated with a rule, such as the rule “A=a” 1405 in root node 1402. A VM characterization, comprising a set of attribute values for a set of attributes (1206 in FIG. 12) that characterize a virtual machine to be hosted on a cloud-computing facility or data center, is input to the root node and used to traverse the decision tree to a leaf node. The leaf node contains either an indication of one or more flavors, described below, that represent the true processing-bandwidth and memory needs of a virtual machine characterized by the VM characterization as determined from previous executions or hosting of VMs described by the VM characterization or an indication of predicted processing-bandwidth and memory needs for a virtual machine of a type that has not yet been executed; paragraphs [0056]-[0057], [0061]-[0062]).
Given the teaching of Vega regarding using a decision tree to identify a cluster of processor cores to execute a task, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Sharaff to the system of Vega because Sharaff teaches in detail how to select a leaf node from a decision tree that can perform the task, and the teaching employs machine learning to provide accurate estimates of the computational resources for the VMs of a distributed application (abstract).

As to claim 12, see rejection of claim 2 above. As to claim 13, see rejection of claim 3 above.

Claims 4-5, 10, 14-15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vega et al. (US 2022/0004433 A1 – cited in the IDS) in view of Sharaff et al. (US 2023/0106318 A1) further in view of Jackson (US 2014/0298349 A1).

As to claim 4, Vega as modified by Sharaff does not teach receives an objective value indicating desired trade-off between different scheduling objectives and wherein performance value is applied as a feature value to the decision tree. However, Jackson teaches receives an objective value indicating desired trade-off between different scheduling objectives (the intelligent scheduling policy may identify a job or a queue of jobs and execute the most time critical workload during this time period because the time critical workload must be processed and the trade-off is in the balance of processing the workload over paying less money for power consumption. Then, other less critical workload may be processed for example, during a lunch period from 12-1 pm or later in the middle of the night in which less expensive power costs are available; paragraph [0051]) and wherein performance value is considered when assigning tasks to resources (Aspects of the invention enable the reduction of both direct (compute nodes) and indirect (chiller, support server, etc.)
power consumption while maintaining either full cluster performance or adequate service level agreement (SLA)-based cluster performance; paragraph [0018]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Jackson to the system of Vega as modified by Sharaff because Jackson teaches a method for managing the use and consumption of compute resources, reservations and/or jobs within a compute environment such as a grid or a cluster to reduce power consumption, one embodiment is the compute environment itself that runs jobs according to the principle disclosed (paragraph [0017]).

As to claim 5, Vega as modified by Sharaff and Jackson teaches the computer architecture of claim 4 wherein the objective value indicates a desired balance between energy-power consumption of the computer and execution speed of the application program (see Jackson: a system 304 performs the steps of managing power consumption in the compute environment 300 by receiving data regarding the current state of the compute environment, and analyzing workload to be consumed in the compute environment 300. The system predicts at least one power consumption saving action based on the current state and analyzed workload and implements the predicted at least one power consumption saving action in the compute environment. The power consumption saving action may be one of the following: powering down a node, powering down memory such as RAM, spinning down a disk, lowering a clock speed of a processor, powering down a hard drive or placing a resource in a low power consumption mode. Other power saving steps may occur as well; paragraphs [0047] and [0083]).
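Claims 4-5 concern an objective value steering the scheduler between energy consumption and execution speed. One invented way to picture such a knob; the weighting rule, cluster names, and all numbers below are hypothetical and come from neither the claims nor Jackson:

```python
def score(energy_j: float, runtime_s: float, objective: float) -> float:
    """Blend two scheduling objectives: objective = 0 weights speed only,
    objective = 1 weights energy only. Lower score wins."""
    return objective * energy_j + (1.0 - objective) * runtime_s

# Candidate assignments: (cluster, estimated energy in J, estimated runtime in s)
candidates = [
    ("big-core cluster", 42.0, 1.0),
    ("little-core cluster", 12.0, 8.0),
]

# A speed-leaning objective value picks the fast big-core cluster...
fast = min(candidates, key=lambda c: score(c[1], c[2], objective=0.1))
# ...while an energy-leaning objective value picks the frugal little cluster.
frugal = min(candidates, key=lambda c: score(c[1], c[2], objective=0.9))
```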
As to claim 10, Vega as modified by Sharaff and Jackson teaches the computer architecture of claim 9 wherein the training employs multiple different application programs and multiple objective values selected from the group consisting of: computer energy usage and application program execution time (see Jackson: a system 304 performs the steps of managing power consumption in the compute environment 300 by receiving data regarding the current state of the compute environment, and analyzing workload to be consumed in the compute environment 300. The system predicts at least one power consumption saving action based on the current state and analyzed workload and implements the predicted at least one power consumption saving action in the compute environment; paragraphs [0047] and [0083]).

As to claim 14, see rejection of claim 4 above. As to claim 15, see rejection of claim 5 above. As to claim 20, see rejection of claim 10 above.

Claims 6, 7, 9, 16, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Vega et al. (US 2022/0004433 A1 – cited in the IDS) in view of Sharaff et al. (US 2023/0106318 A1) further in view of Yoon et al. (US 2023/0274154 A1).

As to claim 6, Vega as modified by Sharaff does not teach the computer architecture of claim 1 wherein the decision tree is differentiable. However, Yoon teaches the decision tree is differentiable (The DIAD system 100 can initialize a generalized additive model (GAM) using neural trees to learn feature functions, according to block 110. For example, the DIAD system 100 can initialize the GAM with random weights. The GA²M includes differentiable decision trees; paragraph [0035]).
It would have been obvious to one of ordinary skill in the art to apply the teaching of Yoon to the system of Vega as modified by Sharaff because Vega teaches training a decision tree and utilizing the trained decision tree to identify a processing element/core to perform a task, and Yoon teaches a method to detect anomalies when training the decision tree (abstract). By applying the teaching of Yoon to the system of Vega and Sharaff, the decision tree is trained to minimize anomalies and, when used to identify a node/PE/core to execute a task, would give better results.

As to claim 7, Vega as modified by Sharaff and Yoon teaches the computer architecture of claim 1 wherein at least some node functions are differentiable functions of multiple feature values (see Yoon: initializing, by the one or more processors, a generalized additive model (GAM), the GAM including one or more neural decision trees including leaves and that are differentiable with respect to weight parameters for the GAM; and training, by the one or more processors, the GAM to receive tabular data as input and to generate an anomaly score and an explanation of the anomaly score, wherein in training the GAM, the one or more processors are configured to: training the GAM using unlabeled data and a loss function measuring the sparsity of data represented by leaves of the one or more neural decision trees; and training the GAM using labeled data; paragraphs [0008]-[0009]).

As to claim 9, Vega as modified by Sharaff and Yoon teaches the computer architecture of claim 1 wherein the node functions include multiple weight values trained using a simulation of the computer (In some examples, the GAM uses a temperature annealed entmoid function, instead of an indicator function. An entmoid function … b_i and t_i are trainable weight parameters for thresholds and scales, respectively.
As shown and described with reference to appendix A, the use of temperature annealing (also referred to as simulated annealing) can improve training the GAM for anomaly detection. This is at least because, during initial training, the decision boundary is left rough, before sharpening the boundary later on. Temperature annealing can help to increase the sharpness of the decision boundary during the training process and to improve training stability; paragraphs [0038] and [0054]).

As to claim 16, see rejection of claim 6 above. As to claim 17, see rejection of claim 7 above. As to claim 19, see rejection of claim 9 above.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Vega et al. (US 2022/0004433 A1 – cited in the IDS) in view of Sharaff et al. (US 2023/0106318 A1) and Yoon et al. (US 2023/0274154 A1) further in view of Cock et al. (Efficient and Private scoring of decision trees, support vector machines and logistic regression models based on pre-computation).

As to claim 8, Vega as modified by Sharaff and Yoon does not teach the computer architecture of claim 7 wherein at least some node functions are a vector multiplication of a weight factor times a vector of feature values. However, Cock teaches some node functions are a vector multiplication of a weight factor times a vector of feature values (“Decision trees are non-parametric, discriminative classifiers. Alice holds an input vector x = (x1; : : : ; xt) ϵ Rt consisting of t features. The classification algorithm consists of a mapping C: Rt → {c1; : : : ; ck} on x. The result of the classification C(x) is one of the k possible classes c1; : : : ; ck. The model is a tree structure and is held by Bob. Each internal node of the tree structure tests the value of a particular feature against a corresponding threshold and branches according to the results. Each leaf node specifies one of the k classes.
The result of the classification is the class associated with the leaf reached from traversing the tree” and “When the response is a binary variable with class labels c+ and c-, then for a new input instance x, a trained logistic regression model outputs the probabilities … where the weight vector a and the real number b are learned during the logistic regression model training process. The class decision for the given probability is then made based on a threshold value which is often set to … then we predict that the instance belongs to the positive class, and otherwise we predict the instance belongs to the negative class. In this case the classification can be done by computing sign (<x, a> + b)”; page 5, section “III. MACHINE LEARNING CLASSIFIERS”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Cock to the system of Vega as modified by Sharaff and Yoon because Cock teaches a novel protocol for privacy-preserving classification of decision trees, a popular machine learning model in these scenarios; the protocols for privacy-preserving classification lead to more efficient results from the point of view of computational and communication complexities (abstract).

As to claim 18, see rejection of claim 8 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIEM K CAO whose telephone number is (571)272-3760. The examiner can normally be reached Monday-Friday 8:00am-4:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at 571-270-1014.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DIEM K CAO/
Primary Examiner, Art Unit 2196

DC
January 8, 2026
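The last two grounds of rejection turn on the form of the node function: Cock's classifiers score a feature vector against a weight vector and take sign(&lt;x, a&gt; + b), while Yoon's differentiable trees replace the hard sign with a smooth, temperature-controlled gate so the branch decision has a gradient. A hypothetical sketch of both; the weights and feature values are invented for illustration:

```python
import math

def hard_branch(x: list[float], a: list[float], b: float) -> int:
    """Cock-style node function: sign of the weighted feature sum,
    i.e. sign(<x, a> + b), picks one of two branches."""
    s = sum(xi * ai for xi, ai in zip(x, a)) + b
    return 1 if s >= 0 else -1

def soft_branch(x: list[float], a: list[float], b: float,
                temperature: float = 1.0) -> float:
    """Differentiable analogue: sigmoid((<x, a> + b) / T) in (0, 1).
    Lowering the temperature sharpens the decision boundary, echoing
    the temperature-annealing idea in the Yoon rejection."""
    s = sum(xi * ai for xi, ai in zip(x, a)) + b
    return 1.0 / (1.0 + math.exp(-s / temperature))

features = [0.9, 0.2]   # hypothetical task feature vector
weights = [1.0, -2.0]   # hypothetical trained node weights
bias = 0.1
```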

Prosecution Timeline

Aug 22, 2023: Application Filed
Jan 08, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596576
TECHNIQUES TO EXPOSE APPLICATION TELEMETRY IN A VIRTUALIZED EXECUTION ENVIRONMENT
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12596585
DATA PROCESSING AND MANAGEMENT
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12561178
SYSTEM AND METHOD FOR MANAGING DATA RETENTION IN DISTRIBUTED SYSTEMS
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12547445
AUTO TIME OPTIMIZATION FOR MIGRATION OF APPLICATIONS
Granted Feb 10, 2026 • 2y 5m to grant
Patent 12541396
RESOURCE ALLOCATION METHOD AND SYSTEM AFTER SYSTEM RESTART AND RELATED COMPONENT
Granted Feb 03, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+19.4%)
Median Time to Grant: 3y 7m
PTA Risk: Low

Based on 663 resolved cases by this examiner. Grant probability derived from career allow rate.
