Prosecution Insights
Last updated: April 19, 2026
Application No. 18/173,648

TRAINING ARIMA TIME-SERIES MODELS UNDER FULLY HOMOMORPHIC ENCRYPTION USING APPROXIMATING POLYNOMIALS

Non-Final OA: §101, §102, §103
Filed: Feb 23, 2023
Examiner: GIROUX, GEORGE
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 4y 6m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 66%, above average (401 granted / 612 resolved; +10.5% vs TC avg)
Interview Lift: +27.1% (resolved cases with vs. without interview)
Typical Timeline: 4y 6m avg prosecution; 28 currently pending
Career History: 640 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 612 resolved cases.

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Drawings

The applicant’s submitted drawings appear to be acceptable for examination purposes. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the drawings.

Information Disclosure Statement

As required by M.P.E.P. 609(c), the applicant's submission of the Information Disclosure Statements, dated 23 February 2023 and 2 January 2026, is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. 609(C)(2), a copy of the PTOL-1449 forms, initialed and dated by the examiner, is attached to the instant Office action.

Claim Objections

Claim 17 is objected to because of the following informality: “a theta parameters” appears as though it should be “a theta parameter.” Appropriate correction is required.

Claim 19 is objected to because of the following informality: “A computer program product for” appears as though it should be “A computer program product.” Appropriate correction is required. Claim 20 depends upon claim 19, and thus includes the aforementioned limitation(s).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Specifically, according to the description given in the specification, in paragraph [0016], the broadest reasonable interpretation of “computer-readable storage medium” covers transitory propagating signals, which are non-statutory. To overcome this rejection, applicant should insert --non-transitory-- before “computer-readable storage medium.” Such an amendment is not considered new matter. See the "Subject Matter Eligibility of Computer Readable Media" memo dated January 26, 2010 (OG Cite: 1351 OG 212; OG Date: 23 Feb 2010).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 4-9, 14, 15, and 18-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Troncoso Pastoriza (EP 3461054 A1).

As per claim 1, Troncoso Pastoriza teaches a system, comprising a processor [a server including a processing module (fig. 1; etc.)] to: receive a ciphertext comprising a fully homomorphic encrypted (FHE) time series from a client device [a server receives “somewhat practical fully homomorphic” encrypted ciphertext from a client device (paras. 0018-22; figs. 1, 3A; etc.); which is fully homomorphic encrypted (FHE) time series data]; train an ARIMA model on the ciphertext using an estimated error and approximating polynomials [an Auto-Regressive Integrated Moving Average (ARIMA) model (para. 0025, etc.) is trained on multiple training datasets and used for predictions (paras. 0020, 0086-91; etc.) using estimated residual scores (error) (paras. 0089-95, etc.) and iteratively approximating polynomials (paras. 0036, 0078-81, etc.)]; and generate an encrypted model and send the encrypted model to the client device [The method also comprises the second entity (server) computing an encrypted model for prediction; computing an encrypted prediction result for the input data using the encrypted model for prediction and the encrypted input data; and transmitting the encrypted prediction result to the first entity (client) (paras. 0012-15, etc.) and can include sharing the encrypted model with the client device via an interactive protocol (para. 0026, etc.)].

As per claim 4, Troncoso Pastoriza teaches wherein the encrypted model comprises encrypted parameters for the ARIMA model [the encrypted model includes encrypted model parameters (of the ARIMA model) (para. 0026; fig. 6; etc.)].

As per claim 5, Troncoso Pastoriza teaches wherein the processor is to compute a predetermined number of differences based on a difference parameter of the ARIMA model [the homomorphic ARIMA model includes differencing, which determines a number of differences recursively, as many times as a specified d value (difference parameter) (paras. 0158-160, etc.)].

As per claim 6, Troncoso Pastoriza teaches wherein the ARIMA model comprises a moving average (MA) order having a value of one [the ARIMA model includes a moving average MA(q) process where ϕ(z) = 1 (the MA order has a value of one) (para. 0133, etc.)].
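The recursive differencing the examiner maps to claim 5 is the standard ARIMA "I" step: subtract each value from its successor, repeated d times. The sketch below is my own plaintext illustration, not code from either the application or Troncoso Pastoriza; under FHE, the same subtractions would be evaluated homomorphically on ciphertext slots.

```python
import numpy as np

def difference(series, d):
    """Apply first-order differencing d times (the ARIMA 'I' step).

    Each pass replaces the series with its consecutive differences,
    removing one order of polynomial trend per pass.
    """
    out = np.asarray(series, dtype=float)
    for _ in range(d):
        out = out[1:] - out[:-1]
    return out

# d=1 turns a linear trend into a constant; d=2 flattens a quadratic trend
print(difference([1, 3, 6, 10], 1))  # [2. 3. 4.]
print(difference([1, 3, 6, 10], 2))  # [1. 1.]
```

Because subtraction is a native homomorphic operation, this is one of the few ARIMA steps that needs no polynomial approximation at all.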
As per claim 7, Troncoso Pastoriza teaches wherein the ciphertext is encrypted under fully homomorphic encryption, and the ARIMA model is trained and the encrypted model generated under fully homomorphic encryption [the ciphertext is “somewhat practical fully homomorphic” encrypted (paras. 0018-22, etc.) and sent to the server to train the ARIMA model (paras. 0086-88; figs. 5-6; etc.)].

As per claim 8, Troncoso Pastoriza teaches a computer-implemented method, comprising: receiving, via a processor, a fully homomorphic encryption (FHE) encrypted time series [a server receives “somewhat practical fully homomorphic” encrypted ciphertext from a client device (paras. 0018-22; figs. 1, 3A; etc.); which is fully homomorphic encrypted (FHE) time series data]; computing, under FHE, a predetermined number of differences based on a difference parameter of an ARIMA model to be used to model the FHE encrypted time series [the homomorphic ARIMA model includes differencing, which determines a number of differences recursively, as many times as a specified d value (difference parameter) (paras. 0158-160, etc.)]; computing, under FHE, model parameters of the ARIMA model using approximating polynomials [an Auto-Regressive Integrated Moving Average (ARIMA) model (para. 0025, etc.) is trained on multiple training datasets and used for predictions (paras. 0020, 0086-91; etc.) using iteratively approximated polynomials (paras. 0036, 0078-81, etc.)]; and outputting, via the processor, a trained model comprising the computed model parameters [The method also comprises the second entity (server) computing an encrypted model for prediction; computing an encrypted prediction result for the input data using the encrypted model for prediction and the encrypted input data; and transmitting the encrypted prediction result to the first entity (client) (paras. 0012-15, etc.) and can include sharing the encrypted model with the client device via an interactive protocol (para. 0026, etc.)].
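The "approximating polynomials" recited in claim 8 reflect the basic FHE constraint that only additions and multiplications are cheap, so non-polynomial operations such as division must be replaced by polynomial iterations. A hedged sketch of the general technique (mine, not taken from either reference): Newton's iteration for the reciprocal, which uses only the ring operations an FHE scheme supports.

```python
def fhe_friendly_reciprocal(a, iters=8):
    """Approximate 1/a using only additions and multiplications.

    Newton's iteration x <- x * (2 - a * x), started at x = 1, converges
    quadratically to 1/a for a in (0, 2).  Because every step is a
    low-degree polynomial in a and x, the same fixed schedule can be
    evaluated over FHE ciphertexts, where a data-dependent division
    would be impossible.
    """
    x = 1.0
    for _ in range(iters):
        x = x * (2.0 - a * x)
    return x

print(fhe_friendly_reciprocal(0.5))  # converges to 2.0
```

The fixed, data-independent iteration count is the point: under encryption the server cannot test for convergence, so it runs a predetermined number of polynomial steps.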
As per claim 9, Troncoso Pastoriza teaches computing, via the processor, an estimated error for the ARIMA model and predicting, via the processor, a future prediction value for the FHE encrypted time series using the estimated error [estimated residual scores (error) are computed (paras. 0089-95, etc.) and the method also comprises the second entity (server) computing an encrypted model for prediction; computing an encrypted prediction result for the input data using the encrypted model for prediction and the encrypted input data; and transmitting the encrypted prediction result to the first entity (client) (paras. 0012-15, etc.)].

As per claim 14, Troncoso Pastoriza teaches wherein computing the model parameters comprises computing, under FHE, a covariance of time series values with corresponding values one entry into the past in the FHE encrypted time series [the computation of the ARIMA model includes calculation of the autocovariance matrix of the time series of n values (which includes corresponding values one entry into the past) (paras. 0134, 0143-146, etc.)].

As per claim 15, Troncoso Pastoriza teaches wherein computing the model parameters comprises constructing a plurality of equations with a plurality of unknowns using computed variance and covariance values, and solving a set of equations under FHE to compute a phi parameter of the ARIMA model [In order to do this, we first obtain φ (phi parameter) from the last q equations (para. 0153, etc.), which include computed variance and covariance values (paras. 0148-151, etc.)].

As per claim 18, Troncoso Pastoriza teaches wherein computing the model parameters comprises computing, under FHE, an expected prediction error using a computed variance of the FHE encrypted time series, a covariance of the FHE encrypted time series, and a computed theta value for the ARIMA model [the estimated residuals (prediction error) can be calculated (paras. 0155-157, etc.) from both φ and θ parameters of the model (paras. 0152-154, etc.) and the variance and covariance of the (FHE encrypted) time series data (paras. 0143-148, etc.)].

As per claim 19, see the rejection of claim 8, above, wherein Troncoso Pastoriza also teaches a computer program product for, the computer program product comprising a computer-readable storage medium having program code embodied therewith, the program code executable by a processor to cause the processor to: [perform the method] [the method may be implemented in cloud-based software (para. 0031, etc.); which is a computer program stored in a computer-readable storage medium and executed by at least one processor].

As per claim 20, see the rejection of claim 9, above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 2, 3, 10, and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Troncoso Pastoriza (EP 3461054 A1) in view of Lam (US 2023/0108963).

As per claim 2, Troncoso Pastoriza teaches the system of claim 1, as described above. While Troncoso Pastoriza teaches that the process estimates the error for training the ARIMA model (see above), it has not been relied upon for teaching wherein the processor is to estimate the error for training the ARIMA model using a partial subset of recent values in the ciphertext. Lam teaches wherein the processor is to estimate the error for training the ARIMA model using a partial subset of recent values in the ciphertext [an ARMA model is used to calculate an ARMA regression error based on historical data, using ordinary least squares with respect to first and second subsets of the data (paras. 0015, 0024, 0090, etc.); for training the ARIMA model with recent values in the ciphertext of the system of Troncoso Pastoriza, above]. Troncoso Pastoriza and Lam are analogous art, as they are within the same field of endeavor, namely training and utilizing Auto-Regressive-Moving-Average models to make predictions. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to estimate the error for the ARMA model using partial subsets of recent values, as taught by Lam, to estimate the error for the ARIMA model on subsets of recent values in the ciphertext taught by Troncoso Pastoriza. Lam provides motivation as [by utilizing subsets of the historical data, the model speed, efficiency and accuracy can be improved (paras. 0006-9, etc.)].

As per claim 3, Troncoso Pastoriza/Lam teaches wherein the processor is to estimate the error during training using a plurality of partial subsets of recent values in the ciphertext [the ARMA model is used to calculate an ARMA regression error based on historical data, using ordinary least squares with respect to first and second subsets of the data (Lam: paras. 0015, 0024, 0090, etc.) with “somewhat practical fully homomorphic” encrypted ciphertext from a client device (Troncoso Pastoriza: paras. 0018-22; figs. 1, 3A; etc.)], send the client device a plurality of associated encrypted predictions [The method also comprises the second entity (server) computing an encrypted model for prediction; computing an encrypted prediction result for the input data using the encrypted model for prediction and the encrypted input data; and transmitting the encrypted prediction result to the first entity (client) (Troncoso Pastoriza: paras. 0012-15, etc.)], and receive a selected partial subset of the plurality of partial subsets to use for training the ARIMA model [the ARMA model is used to calculate an ARMA regression error based on historical data, using ordinary least squares with respect to first and second subsets of the data (Lam: paras. 0015, 0024, 0090, etc.)].

As per claim 10, Troncoso Pastoriza teaches the computer-implemented method of claim 9, as described above. While Troncoso Pastoriza teaches that the process estimates the error for training the ARIMA model (see above), it has not been relied upon for teaching wherein computing the estimated error comprises using a partial subset of historical values in the FHE encrypted time series. Lam teaches wherein computing the estimated error comprises using a partial subset of historical values in the FHE encrypted time series [an ARMA model is used to calculate an ARMA regression error based on historical data, using ordinary least squares with respect to first and second subsets of the data (paras. 0015, 0024, 0090, etc.); for training the ARIMA model with recent values in the FHE encrypted time series of the system of Troncoso Pastoriza, above]. Troncoso Pastoriza and Lam are analogous art, as they are within the same field of endeavor, namely training and utilizing Auto-Regressive-Moving-Average models to make predictions. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to estimate the error for the ARMA model using partial subsets of recent values, as taught by Lam, to estimate the error for the ARIMA model on subsets of recent values in the FHE encrypted time series data taught by Troncoso Pastoriza. Lam provides motivation as [by utilizing subsets of the historical data, the model speed, efficiency and accuracy can be improved (paras. 0006-9, etc.)].

As per claim 11, see the rejection of claim 3, above.

Claim(s) 12, 13, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Troncoso Pastoriza (EP 3461054 A1) in view of Brockwell et al. (Introduction to Time Series and Forecasting, Second Edition, 2002, pgs. 0-434).

As per claim 12, Troncoso Pastoriza teaches the computer-implemented method of claim 8, as described above. While Troncoso Pastoriza teaches computing model parameters (see above) as well as using a set mean of the time series data (see, e.g., Troncoso Pastoriza: paras. 0147-149, for zero-mean series), it has not been relied upon for teaching wherein computing the model parameters comprises computing a mean of the FHE encrypted time series under FHE. Brockwell teaches wherein computing the model parameters comprises computing a mean of the FHE encrypted time series under FHE [the mean (μ), autocovariance, and variance are calculated from samples of the time series data (pgs. 58-59, section 2.4.1; etc.), for the FHE encrypted time series of Troncoso Pastoriza, above]. Troncoso Pastoriza and Brockwell are analogous art, as they are within the same field of endeavor, namely training and utilizing ARIMA models for time series predictions. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include computing the mean of the time series data in computing model parameters of the ARIMA model, as taught by Brockwell, for computing the model parameters of the ARIMA model from the FHE encrypted time series data in the system taught by Troncoso Pastoriza. Troncoso Pastoriza provides motivation as [according to another embodiment where an Auto-Regressive Integrated Moving Average (ARIMA) model [17] is used, encryptions of the corresponding ARIMA coefficients are computed; in these scenarios several models can be calculated depending on the number of variables which are taken into account (para. 0025), For more details on ARIMA models, we refer to [17] (para. 0133), it can be computed more efficiently by resorting to the Levinson-Durbin recursion (see [17]) (para. 0145) and we first obtain φ from the last q equations, and afterwards, θ can be easily derived (for more details see [17]) (para. 0153); where [17] refers to Brockwell (see pg. 4, lines 20-21 for [17])].

As per claim 13, Troncoso Pastoriza/Brockwell teaches wherein computing the model parameters comprises computing a variance of the FHE encrypted time series under FHE based on the computed mean [the variance of the time series can be computed based on the computed mean (Brockwell: pg. 59, section 2.4.1; etc.) for the FHE ciphertext time series data received from a client device (Troncoso Pastoriza: paras. 0018-22; figs. 1, 3A; etc.)].

As per claim 16, Troncoso Pastoriza/Brockwell teaches wherein computing the model parameters comprises computing a mu parameter of the ARIMA model using a mean of the FHE encrypted time series and a computed phi parameter [the mean (μ), autocovariance, and variance are calculated from samples of the time series data (Brockwell: pgs. 58-59, section 2.4.1; etc.) from which a φ parameter can be calculated using the Yule-Walker equations (Brockwell: pg. 140, 5.1.7; Troncoso Pastoriza: paras. 0143, 0155, etc.) and afterwards, θ can be easily derived (Troncoso Pastoriza: para. 0153; Brockwell: pg. 55, section 2.3; etc.)]. Examiner’s Note: the reasoning and motivation for the combination is provided, above, in the rejection of claim 12.

Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Troncoso Pastoriza (EP 3461054 A1), in view of Brockwell et al. (Introduction to Time Series and Forecasting, Second Edition, 2002, pgs. 0-434), and further in view of Gurnani et al. (Forecasting of sales by using fusion of Machine Learning techniques, Feb 2017, pgs. 93-101).
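The parameter-estimation chain the examiner stitches together from Brockwell and Troncoso Pastoriza for claims 12, 13, 15, and 16 (sample mean and variance, Yule-Walker equations for φ, then the intercept μ) can be sketched in plaintext as follows. This is my own illustrative reconstruction using the standard AR(p) formulas, not code from the references; under FHE the sums and products would run on ciphertexts, and the linear solve would itself need polynomial approximation (e.g., the Levinson-Durbin recursion the examiner quotes).

```python
import numpy as np

def ar_parameters(series, p):
    """Estimate mean, variance, phi (via Yule-Walker), and mu for AR(p)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    mean = x.sum() / n            # under FHE: a homomorphic sum plus scaling
    xc = x - mean
    # sample autocovariances gamma(0), ..., gamma(p)
    gamma = np.array([np.dot(xc[: n - k], xc[k:]) / n for k in range(p + 1)])
    variance = gamma[0]
    # Yule-Walker: solve the Toeplitz system R @ phi = r built from the
    # variance/covariance values (the "equations with unknowns" of claim 15)
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, gamma[1:])
    # intercept: x_t = mu + sum_i phi_i * x_{t-i} + e_t
    # taking expectations gives mu = mean * (1 - sum(phi))   (claim 16)
    mu = mean * (1.0 - phi.sum())
    return mean, variance, phi, mu

# Usage sketch: on data simulated from x_t = 1 + 0.7 x_{t-1} + e_t,
# phi[0] should come out near 0.7 and mu near 1.0.
```

Only the mean, autocovariances, and the solve are needed; everything else (centering, dot products) is already polynomial and therefore FHE-friendly as written.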
As per claim 17, Troncoso Pastoriza/Brockwell teaches wherein computing the model parameters comprises computing a series as predicted with computed mu and phi parameters, computing variance and covariance of values [the mean (μ), autocovariance, and variance are calculated from samples of the time series data (Brockwell: pgs. 58-59, section 2.4.1; Troncoso Pastoriza: paras. 0143-148; etc.) from which a φ parameter can be calculated using the Yule-Walker equations (Brockwell: pg. 140, 5.1.7; Troncoso Pastoriza: paras. 0143, 0155, etc.)], and computing a theta parameters for the ARIMA model using the computed covariance values [the mean (μ), autocovariance, and variance are calculated from samples of the time series data (Brockwell: pgs. 58-59, section 2.4.1; etc.) from which a φ parameter can be calculated using the Yule-Walker equations (Brockwell: pg. 140, 5.1.7; Troncoso Pastoriza: paras. 0143, 0155, etc.) and afterwards, θ can be easily derived (Troncoso Pastoriza: para. 0153; Brockwell: pg. 55, section 2.3; etc.)]. Examiner’s Note: the reasoning and motivation for the combination of Troncoso Pastoriza and Brockwell is provided in the rejection of claim 12, above.

While Troncoso Pastoriza/Brockwell teaches computing the model parameters (see above), it has not been relied upon for teaching wherein computing the model parameters comprises computing a residue series comprising residues of the FHE encrypted time series, and computing values of the residue series. Gurnani teaches computing a residue series comprising residues of the FHE encrypted time series, and computing values of the residue series [linear models are not able to capture nonlinear patterns accurately, hence to improve the prediction result, their residue (which contains nonlinear pattern) is forecasted by nonlinear ARIMA (pg. 95, section II.B; etc.)].
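The "residue series" attributed to Gurnani for claim 17 is, in plaintext terms, simply the one-step prediction errors of a fitted model. A hypothetical sketch (the function name and shapes are my own; neither reference publishes this code):

```python
import numpy as np

def residue_series(series, mu, phi):
    """Residues r_t = x_t - (mu + sum_i phi_i * x_{t-i}) for a fitted AR(p).

    Predict each value from the previous p values using mu and phi,
    then subtract the prediction from the observed value.
    """
    x = np.asarray(series, dtype=float)
    p = len(phi)
    # phi[0] multiplies the most recent lag, so reverse each window
    preds = np.array([mu + np.dot(phi, x[t - p:t][::-1]) for t in range(p, len(x))])
    return x[p:] - preds

# A series generated exactly by x_t = 2 + 0.5 * x_{t-1} has zero residues:
x = [1.0]
for _ in range(5):
    x.append(2.0 + 0.5 * x[-1])
print(residue_series(x, mu=2.0, phi=[0.5]))  # all zeros
```

Since prediction and subtraction are both polynomial in the data, the residue series is directly computable on ciphertexts once μ and φ are available in encrypted form.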
Troncoso Pastoriza/Brockwell and Gurnani are analogous art, as they are within the same field of endeavor, namely training and utilizing ARIMA models for predictions. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to compute the residue series of the data with the ARIMA model, as taught by Gurnani, for the computing of the ARIMA model parameters in the system taught by Troncoso Pastoriza/Brockwell. Gurnani provides motivation as [Residue obtained by ARIMA is applied to nonlinear models like Neural Network, XGBoost and SVM to obtain forecast of nonlinear patterns missed by ARIMA (pg. 95, section II.B; etc.)].

Conclusion

The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i): claims 1-20 are rejected.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Ruiz (US 2018/0089577 and US 2016/0203413) – disclose forecasting systems utilizing approximation of polynomials and error estimation.
Huang (US 11,295,224) – discloses a system/method for metric prediction using dynamic confidence, including a moving window (subset) of the history of prediction error values.

The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.

When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE GIROUX whose telephone number is (571) 272-9769. The examiner can normally be reached M-F 10am-6pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas, can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEORGE GIROUX/
Primary Examiner, Art Unit 2128

Prosecution Timeline

Feb 23, 2023
Application Filed
Feb 07, 2026
Non-Final Rejection — §101, §102, §103
Mar 30, 2026
Interview Requested
Apr 08, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572807: Neural Network Methods for Defining System Topology (2y 5m to grant; granted Mar 10, 2026)
Patent 12572818: DEVICE AND METHOD FOR RANDOM WALK SIMULATION (2y 5m to grant; granted Mar 10, 2026)
Patent 12554986: WEIGHT QUANTIZATION IN NEURAL NETWORKS (2y 5m to grant; granted Feb 17, 2026)
Patent 12554983: MACHINE LEARNING-BASED SYSTEMS AND METHODS FOR IDENTIFYING AND RESOLVING CONTENT ANOMALIES IN A TARGET DIGITAL ARTIFACT (2y 5m to grant; granted Feb 17, 2026)
Patent 12541696: ENHANCED VALIDITY MODELING USING MACHINE-LEARNING TECHNIQUES (2y 5m to grant; granted Feb 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
Grant Probability with Interview: 93% (+27.1%)
Median Time to Grant: 4y 6m
PTA Risk: Low
Based on 612 resolved cases by this examiner. Grant probability derived from career allow rate.
