Prosecution Insights
Last updated: April 19, 2026
Application No. 18/316,256

EFFICIENT, SECURE AND LOW-COMMUNICATION VERTICAL FEDERATED LEARNING METHOD

Non-Final OA (§101, §103)
Filed: May 12, 2023
Examiner: NGUYEN, CHAU T
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: ZHEJIANG UNIVERSITY
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (above average; 372 granted / 549 resolved; +12.8% vs TC avg)
Interview Lift: +31.8% (strong), comparing resolved cases with and without an interview
Typical Timeline: 4y 0m average prosecution; 31 applications currently pending
Career History: 580 total applications across all art units

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 549 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-6 are pending.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 05/12/2023 and 05/11/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claims 1-6 are objected to because of the following informalities:

Claim 1: Line 4 recites "the selected features", which lacks antecedent basis. Lines 5-6 recite "the selected samples", which lacks antecedent basis. Line 8 recites "by all participants", which should be rewritten as "by all the participants". Line 8 recites "data indexes", which should be rewritten as "the data indexes". Line 11 recites "by all participants", which should be rewritten as "by all the participants". Line 13 recites "by all participants", which should be rewritten as "by all the participants".

Claim 2: Line 2 recites "all participants", which should be rewritten as "all the participants".

Claim 3: Line 2 recites "the data feature set", which lacks antecedent basis.

Claim 4: Line 2 recites "BlinkML method"; "ML" is an acronym, and the specific term for this acronym should be spelled out.

Claim 5: Line 2 recites "the BlinkML method", which lacks antecedent basis. Part (d): Line 1 of Part (d) recites "calculating L", but there is no description of what L stands for. Lines 2-3 of Part (d) recite "the value of the rth element on the diagonal of the matrix $\Lambda$" (emphasis added), which lacks antecedent basis. Lines 3-4 of Part (d) recite "the rth singular value", which lacks antecedent basis. Part (e): there is no description of "$LL^T$". Part (f): Line 1 of Part (f) recites "calculating p", but there is no description of "p".

Claim 6 recites "$\operatorname{disc}(S_p, S_q)$", and there is no description of this limitation. Similarly, Claim 6 also recites "$er$", which lacks description.

Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding independent claim 1:

Step 1 – whether the claim falls within any statutory category. See MPEP 2106.03. Claim 1 is drawn to a method claim reciting a series of steps and, therefore, claim 1 falls under a process/method.

Step 2A Prong 1 – whether the claim recites a judicial exception. See MPEP 2106.04, subsection II.
Regarding independent claim 1, the claim is directed to an efficient, secure and low-communication vertical federated learning method comprising:

Step (1), selecting, by all participants, part of features of a held data feature set, adding noise satisfying differential privacy to part samples of the selected features and sending the selected part of features to other participants together with data indexes of the selected samples, wherein the held data feature set comprises feature data and label data: this limitation encompasses the mental process of selecting a dataset, the mathematical calculation of adding noise to the dataset, and the mental process of distributing data to other users/participants, which is an evaluation practically capable of being performed in the human mind using mathematical calculations with the assistance of paper and pen.

Step (2), aligning, by all participants, data according to data indexes, taking received feature data as a label, taking each missing feature as a learning task, and training a model for each task with feature data originally held in a same data index: this limitation encompasses the mental process of aligning data based on data indexes and learning to label a feature based on the received label. This mental process can be performed in the human mind, including observation and evaluation.

Step (3), predicting, by all participants, data corresponding to other data indexes with multiple models trained in the step (2) to complete missing feature data: this limitation encompasses the mental process of predicting a label of missing feature data based on the data trained above, which is an evaluation practically capable of being performed in the human mind with the assistance of paper and pen.

Step (4), obtaining, by all participants, a final trained model by jointly using a horizontal federated learning method: this limitation encompasses the mental process of obtaining/collecting data by performing mathematical calculations.

These limitations are directed towards the abstract ideas of mental processes and mathematical calculations (see MPEP § 2106.04(a)(2), subsection I, A).

Step 2A Prong 2 – whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is "directed to" the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).

Regarding claim 1, the claim recites the additional elements of a "vertical federated learning method" and a "horizontal federated learning method", which are used to generally apply the abstract idea without limiting how the horizontal federated learning method functions. In the Specification, Applicant describes: "In the horizontal federated learning, the data distributed in different devices have the same features, but belong to different users. In the vertical federated learning, the data distributed in different devices belong to the same user, but have different features." Thus, these additional elements are recited at such a high level of generality that they represent no more than mere instructions to apply the abstract idea on a computer. See MPEP 2106.05(f). The claim as a whole does not integrate the judicial exception into a practical application.
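For concreteness, the claim 1 workflow at issue can be sketched in a few lines. This is a minimal illustration only: it assumes the Laplace mechanism for the differential-privacy noise (the claim does not name a mechanism), and the function and variable names are hypothetical.

```python
import numpy as np

def select_and_share(X, sample_ids, n_feats, n_rows, epsilon=1.0,
                     sensitivity=1.0, rng=None):
    """Claim 1, step (1) sketch: choose a subset of features, add
    differentially private noise to a subset of their samples, and
    return the noisy values plus the data indexes of those samples."""
    rng = rng or np.random.default_rng()
    feat_idx = rng.choice(X.shape[1], size=n_feats, replace=False)
    row_idx = rng.choice(X.shape[0], size=n_rows, replace=False)
    shared = X[np.ix_(row_idx, feat_idx)].astype(float)
    # Laplace mechanism: noise scale b = sensitivity / epsilon gives
    # epsilon-differential privacy for a query with this L1 sensitivity.
    shared += rng.laplace(0.0, sensitivity / epsilon, size=shared.shape)
    shared_ids = [sample_ids[i] for i in row_idx]  # indexes for step (2) alignment
    return shared, shared_ids, feat_idx
```

In step (2), a receiver would align rows by `shared_ids`, treat each received feature as a label, and train one model per missing feature; step (3) then uses those models to impute the remaining rows.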
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately or in combination, they do not constitute an inventive concept. The additional elements listed above do not amount to significantly more than the abstract ideas. Therefore, the claim is subject-matter ineligible.

Regarding claim 2:
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia: wherein when all participants hold label data, the held data feature set only consists of feature data: this limitation encompasses the mental process of expressing the type of data, which is an evaluation practically capable of being performed in the human mind with the assistance of pen and paper.
Step 2A Prong 2: Nothing in the claim integrates the abstract idea into a practical application. There is no additional element. Thus, the claim is directed to the abstract idea.
Step 2B: The claim does not include an additional element that is sufficient to amount to significantly more than the judicial exception because, when considered separately or in combination, the elements do not constitute an inventive concept. Therefore, the claim is subject-matter ineligible.

Regarding claim 3:
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia: wherein in the step (1), the data feature set is personal privacy information: this limitation encompasses the mental process of expressing the type of data, which is an evaluation practically capable of being performed in the human mind with the assistance of pen and paper.
Step 2A Prong 2: Nothing in the claim integrates the abstract idea into a practical application. There is no additional element. Thus, the claim is directed to the abstract idea.
Step 2B: The claim does not include an additional element that is sufficient to amount to significantly more than the judicial exception because, when considered separately or in combination, the elements do not constitute an inventive concept. Therefore, the claim is subject-matter ineligible.

Regarding claim 4:
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia: wherein in the step (1), each participant uses the BlinkML method to determine an optimal sample number of each selected feature sent to each of the other participants, and then adds noise satisfying differential privacy to part of the samples of each selected feature according to the determined optimal sample number, and sends the part of the samples to other corresponding participants together with the data indexes of the selected samples: this claim recites several abstract ideas: the step of "determine an optimal sample" may be performed in the human mind using observation, evaluation, judgment, and opinion; the step of "adds noise to the feature" encompasses the mathematical calculation of adding noise to the dataset; and the step of "sends the part of the samples…" encompasses the mental process of distributing data to other users/participants, which is an evaluation practically capable of being performed in the human mind using mathematical calculations with the assistance of paper and pen.
Step 2A Prong 2: The claim recites the additional element of the "BlinkML method", which is used to generally apply the abstract idea without limiting how BlinkML functions. Thus, this additional element is recited at such a high level of generality that it represents no more than mere instructions to apply the abstract idea on a computer. See MPEP 2106.05(f).
The claim as a whole does not integrate the judicial exception into a practical application.
Step 2B: The claim does not include an additional element that is sufficient to amount to significantly more than the judicial exception because, when considered separately or in combination, the elements do not constitute an inventive concept. Therefore, the claim is subject-matter ineligible.

Regarding claim 5:
Step 1: A process, as above.
Step 2A Prong 1: Claim 5 recites the steps of "selecting", "aligning", "constructing", "calculating", and "obtaining", which are practically performed in the human mind using mathematical calculations with the help of paper and pen. These limitations fall within the "mathematical concepts" grouping of abstract ideas.
Step 2A Prong 2: The claim recites the additional element of the "BlinkML method", which is used to generally apply the abstract idea without limiting how BlinkML functions. Thus, this additional element is recited at such a high level of generality that it represents no more than mere instructions to apply the abstract idea on a computer. See MPEP 2106.05(f). The claim as a whole does not integrate the judicial exception into a practical application.
Step 2B: The claim does not include an additional element that is sufficient to amount to significantly more than the judicial exception because, when considered separately or in combination, the elements do not constitute an inventive concept. Therefore, the claim is subject-matter ineligible.

Regarding claim 6:
Step 1: A process, as above.
Step 2A Prong 1: Claim 6 recites the steps of "dividing", "calculating", and "minimizing", which are practically performed in the human mind using mathematical calculations with the help of paper and pen. These limitations fall within the "mathematical concepts" grouping of abstract ideas.
Step 2A Prong 2: Nothing in the claim integrates the abstract idea into a practical application. There is no additional element. Thus, the claim is directed to the abstract idea.
Step 2B: The claim does not include an additional element that is sufficient to amount to significantly more than the judicial exception because, when considered separately or in combination, the elements do not constitute an inventive concept. Therefore, the claim is subject-matter ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Vivona et al. (Vivona), US Patent No. 12,039,012, in view of Zaccak et al. (Zaccak), US Patent Application Publication No. US 2021/0360010 A1, and further in view of the NPL "BlinkML: Efficient Maximum Likelihood Estimation with Probabilistic Guarantees" by Park et al. (Park).

As to independent claim 1, Vivona discloses an efficient, secure and low-communication vertical federated learning method, comprising: step (1) selecting, by all participants, part of features of a held data feature set, and sending the selected part of features to other participants together with data indexes of the selected samples, wherein the held data feature set comprises feature data and label data (col. 6, lines 16-54: the training datasets are arranged in tabular format, which can have columns and rows representing the sample and feature dimensions, respectively, and federated learning techniques are used to evaluate the training datasets; col. 13, lines 5-21: different computing devices (participants) can have tabular data with shared features (overlapping columns in the table), training a generator using a first computing device (or a first participant) and then transferring the trained generator to a second computing device (or a second participant); col. 13, line 33 – col. 14, line 25: allowing selection of one or more columns in the table for conditional data generation; the model constructs a probability mass function (PMF) across the range of values for the selected column, and a particular value for the selected column can be selected according to the PMF constructed above to set as a condition for generating the synthetic row data (samples), wherein the one or more columns are shared features between datasets $D_a$ and $D_b$ that are accessible to the first computing device and the second computing device, respectively);

step (2) aligning, by all participants, data according to data indexes, taking received feature data as a label, taking each missing feature as a learning task, and training a model for each task with feature data originally held in a same data index (col. 5, lines 1-16: in horizontal federated learning, all computing devices have data with the same set of features and the feature space across multiple devices is aligned; col. 10, lines 26-34: federated learning techniques exchange embeddings across the two clients (all participants), which can be done by assigning embedding vectors into a matrix of embedding vectors, and upon the model synchronization phase between the clients, the embedding matrices are exchanged between the two clients);
step (3) predicting, by all participants, data corresponding to other data indexes with multiple models trained in the step (2) to complete missing feature data (col. 12, lines 26-46: federated learning is based on Generative Adversarial Networks, or GANs, where generative models can be trained on unlabeled data (missing feature data)).

Vivona, however, does not disclose adding noise satisfying differential privacy to part samples of the selected features, nor step (4) obtaining, by all participants, a final trained model by jointly using a horizontal federated learning method.

In the same field of endeavor, Zaccak discloses that maintaining the privacy of a user's data is critical in federated learning, that Differential Privacy can be used to achieve privacy and avoid data leakage, and that a privacy-preserving interface can help a model owner manage various parameters to achieve a desired balance between data privacy and model performance (paragraph [0044]). Zaccak further discloses that differential privacy can be defined as a randomized mechanism that satisfies ($\epsilon$, $\sigma$)-differential privacy for any two adjacent datasets X, X' that differ only by the addition of a single unit, wherein $\sigma$ is the standard deviation of the noise added to the data signal; as more noise is added, the data becomes more private, and thus $\epsilon$ (the information leakage) decreases (paragraphs [0047]-[0053]).

In addition, Park discloses the BlinkML system for fast, quality-guaranteed ML training, wherein BlinkML allows users to make error-computation tradeoffs instead of training a model on their full data (i.e., the full model), and BlinkML can quickly train an approximate model with quality guarantees using a sample (Abstract). Park further discloses that the Coordinator obtains a size-$n_0$ sample $D_0$ of the training set D, then invokes the Model Trainer to train an initial model $m_0$ on $D_0$, and subsequently invokes the Model Accuracy Estimator to estimate the accuracy $\epsilon_0$ of $m_0$; the Coordinator then prepares to train a second model, called the final model $m_n$ (Section 2.3, System Workflow).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of Vivona to incorporate adding noise satisfying differential privacy to part samples of the selected features, as taught by Zaccak, and obtaining, by all participants, a final trained model by jointly using a horizontal federated learning method, as taught by Park. Zaccak suggests that Differential Privacy can be used to achieve privacy and avoid data leakage, while Park's system uses BlinkML to automatically and efficiently infer the appropriate sample size for satisfying an error tolerance.
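The noise-versus-leakage trade-off Zaccak describes can be illustrated with the standard Gaussian mechanism. The sketch below uses the classic ($\epsilon$, $\delta$)-DP calibration and is a generic illustration, not code from any cited reference:

```python
import numpy as np

def gaussian_sigma(sensitivity: float, epsilon: float, delta: float) -> float:
    """Standard Gaussian-mechanism calibration: the noise standard
    deviation grows as epsilon shrinks, matching Zaccak's observation
    that more noise means less information leakage."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def privatize(x: np.ndarray, sigma: float, rng=None) -> np.ndarray:
    """Add zero-mean Gaussian noise with standard deviation sigma."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)
```

For example, gaussian_sigma(1.0, 0.5, 1e-5) is roughly 9.7, versus roughly 4.8 at epsilon = 1.0: halving the leakage budget doubles the noise.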
As to dependent claim 2, Vivona, Zaccak and Park disclose the secure and low-communication vertical federated learning method according to claim 1, wherein when all participants hold label data, the held data feature set only consists of feature data (Vivona, col. 5, lines 17-38; Park, Section 2.1).

As to dependent claim 3, Vivona, Zaccak and Park disclose the secure and low-communication vertical federated learning method according to claim 1, wherein in the step (1), the data feature set is personal privacy information (Vivona, col. 5, line 64 – col. 6, line 10).

As to dependent claim 4, Vivona, Zaccak and Park disclose the secure and low-communication vertical federated learning method according to claim 1, wherein in the step (1), each participant uses the BlinkML method to determine an optimal sample number of each selected feature sent to each of the other participants, and then adds noise satisfying differential privacy to part of the samples of each selected feature according to the determined optimal sample number, and sends the part of the samples to other corresponding participants together with the data indexes of the selected samples (Vivona, col. 13, lines 5-21: different computing devices (participants) can have tabular data with shared features (overlapping columns in the table), training a generator using a first computing device (or a first participant) and then transferring the trained generator to a second computing device (or a second participant); Zaccak, paragraphs [0044], [0047]-[0053]; Park, Section 5, Experiments: BlinkML's estimated minimum sample sizes are close to optimal).

As to independent claim 5, Vivona, Zaccak and Park disclose wherein each participant uses the BlinkML method to determine an optimal sample number of each selected feature sent to each of the other participants, comprising:

(a) selecting, by each participant uniformly and randomly, $n_0$ sample data for each selected feature i, adding differential privacy noise to the $n_0$ sample data, and then sending the $n_0$ sample data to other participants together with the data indexes of the selected samples (Zaccak, paragraphs [0047]-[0053]);

(b) aligning, by a participant j receiving the data, the data according to the data indexes, taking the received feature data i as a label, and training and obtaining a model $M_{i,j}$ by using feature data originally held in the same data index (Vivona, col. 5, lines 1-16);

(c) constructing a matrix Q, wherein each row of Q comprises the $n_0$ parameter gradients obtained by updating a model parameter $\theta_{i,j}$ of $M_{i,j}$ on each sample (Park, Section 3.4, Computing Necessary Statistics);

(d) calculating $L = U\Lambda$, wherein U is a matrix of size $n_0 \times n_0$ obtained after singular value decomposition of the matrix Q; $\Lambda$ is a diagonal matrix whose rth diagonal element is defined in terms of $s_r$ and a regularization coefficient $\beta$, where $s_r$ is the rth singular value in $\Sigma$, and $\Sigma$ is the singular value matrix of the matrix Q (Park, Section 3.2, Model Parameter Distribution, and Section 3.4, Computing Necessary Statistics);

(e) obtaining $\theta_{i,j,n_{i,j,o},k}$ by sampling from a normal distribution and then obtaining $\theta_{i,j,N,k}$ by sampling from a normal distribution, repeating K times to obtain K pairs $(\theta_{i,j,n_{i,j,o},k},\ \theta_{i,j,N,k})$, where k represents the sampling index; wherein $\alpha_1 = \frac{1}{n_0} - \frac{1}{n_{i,j,o}}$, $\alpha_2 = \frac{1}{n_{i,j,o}} - \frac{1}{N}$, and $n_{i,j,o} = \frac{1}{2}(n_0 + N)$; $n_{i,j,o}$ represents a candidate sample number of the ith feature sent to the participant j, and N is the total number of samples for each participant (Park, Section 3.2, Model Parameter Distribution; Section 4.1, Quality Estimation sans Training; Section 4.3, Optimizations for Fast Sampling);

(f) calculating $p = \frac{1}{K}\sum_{k=1}^{K} 1\!\left[\,\mathbb{E}_{x \in D}\big(1[\,M_{i,j}(x;\theta_{i,j,n_{i,j,o},k}) \neq M_{i,j}(x;\theta_{i,j,N,k})\,]\big) < \epsilon\,\right]$, where $M_{i,j}(x;\theta_{i,j,N,k})$ represents that the participant j takes the feature data held by a sample x as an input; $\theta_{i,j,n_{i,j,o},k}$ is a model parameter; an output of the model $M_{i,j}$ is the predicted feature data i; D is a sample set; $\mathbb{E}(\cdot)$ is an expected value; and $\epsilon$ is a real number that represents a threshold; if $p > 1 - \delta$, letting $n_{i,j,1} = \frac{1}{2}(n_0 + n_{i,j,o})$, and if $p < 1 - \delta$, letting $n_{i,j,1} = \frac{1}{2}(N + n_{i,j,o})$, where $\delta$ represents a threshold, which is a real number; carrying out the process according to the step (e) and the step (f) multiple times until an optimal candidate sample number $n_{i,j,1}$ that is to be selected for each feature is obtained through convergence (Park, Section 2.1, User Interface, and Section 3.3, Error Bound on Approximate Model); and

(g) a number of samples randomly selected by each participant to send to participant j for feature i being $n_{i,j,1}$ (Vivona, col. 13, lines 45-62).
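Viewed algorithmically, the steps (e)-(f) loop is a binary search for the smallest sample size whose approximate model agrees with the full model with probability at least $1-\delta$. A rough sketch follows; the agreement_prob callback is a hypothetical stand-in for the Monte Carlo estimate p computed in step (f):

```python
def optimal_sample_size(n0: int, N: int, agreement_prob, delta: float) -> int:
    """Binary-search the candidate sample size between n0 and N, per the
    claim 5 (e)-(f) loop: shrink toward n0 while the quality guarantee
    p > 1 - delta holds, otherwise grow toward N."""
    lo, hi = n0, N
    while hi - lo > 1:
        n_cand = (lo + hi) // 2          # midpoint, i.e., n_{i,j,o}
        if agreement_prob(n_cand) > 1.0 - delta:
            hi = n_cand                  # guarantee met: try fewer samples
        else:
            lo = n_cand                  # guarantee missed: need more samples
    return hi                            # smallest size meeting the guarantee
```

In BlinkML proper, the agreement probability for a candidate size is estimated without retraining, by drawing the K parameter pairs from the normal approximations described in step (e).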
As to dependent claim 6, Vivona, Zaccak and Park disclose wherein in the step (2), when each participant has a missing feature for which no data is received, using a labeled-unlabeled multitask learning method to obtain a model of the missing feature with unreceived data, comprising:

(a) dividing, by a participant, existing data of the participant into m data sets S which correspond to the training data of each missing feature, respectively, wherein m is the number of missing features of the participant, and I is the set of labeled tasks among the missing features (Vivona, col. 11, line 30 – col. 12, line 25);

(b) calculating a difference between the data sets according to the training data: $\operatorname{disc}(S_p, S_q)$ for $p, q \in \{1,\dots,m\}$ with $p \neq q$, where $\operatorname{disc}(S_p, S_q) = 0$ when $p = q$ (Park, Section 2.1, User Interface, and Section 2.2, Supported Models and Abstraction);

(c) minimizing, for each unlabeled task, $\frac{1}{m}\sum_{q=1}^{m}\sum_{p \in I}\delta_p\,\operatorname{disc}(S_p, S_q)$ and obtaining a weight $\delta^{T} = \{\delta_1, \dots, \delta_m\}$, where $\sum_{p=1}^{m}\delta_p = 1$ (Park, Section 3.3, Error Bound on Approximate Model); and

(d) obtaining a model $M_T$ of each unlabeled task by minimizing a convex combination of training errors of the labeled tasks, where $T \in \{1,\dots,m\} \setminus I$: $er_{\delta_T}(M_T) = \sum_{p \in I}\delta_p\, er_p(M_T)$, where $er_p(M_T) = \frac{1}{ns_p}\sum_{(x,y)\in S_p} L(M_T(x), y)$ for $p \in I$; where $L(\cdot)$ is a loss function of a model in which a sample of a data set $S_p$ is taken as an input; $ns_p$ represents the sample number of a data set $S_p$; x is a sample feature of the input; and y is a label (Park, Sections 2.1, 2.2 and 3.3).

Conclusion

Any inquiry concerning this communication should be directed to CHAU T NGUYEN at telephone number (571) 272-4092. The examiner can normally be reached M-F from 8am to 5pm (PT). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at telephone number (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/CHAU T NGUYEN/
Primary Examiner, Art Unit 2145

Prosecution Timeline

May 12, 2023
Application Filed
Mar 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596765
GENERATION AND USE OF CONTENT BRIEFS FOR NETWORK CONTENT AUTHORING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591795
METHOD FOR PROVIDING EXPLAINABLE ARTIFICIAL INTELLIGENCE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585722
IMAGE GENERATION SYSTEM, COMMUNICATION APPARATUS, METHODS OF OPERATING IMAGE GENERATION SYSTEM AND COMMUNICATION APPARATUS, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579356
MATHEMATICAL CALCULATIONS WITH NUMERICAL INDICATORS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12547825
WHITELISTING REDACTION SYSTEMS AND METHODS
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview (+31.8%): 99%
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 549 resolved cases by this examiner. Grant probability is derived from the career allow rate.
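The headline figures can be reproduced from the career data shown above; treating the interview lift as additive percentage points is an assumption about the tool's methodology, not a documented formula:

```python
granted, resolved = 372, 549                   # from the examiner's career data
allow_rate = granted / resolved                # 0.6776... -> displayed as 68%
interview_lift = 0.318                         # +31.8 percentage points
with_interview = min(allow_rate + interview_lift, 1.0)  # 0.9956 -> displayed as 99%
print(f"base {allow_rate:.1%}, with interview {with_interview:.1%}")
```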
