Prosecution Insights
Last updated: April 19, 2026
Application No. 17/660,512

MANAGING ALEATORIC AND EPISTEMIC UNCERTAINTY IN REINFORCEMENT LEARNING, WITH APPLICATIONS TO AUTONOMOUS VEHICLE CONTROL

Final Rejection — §103, §112
Filed
Apr 25, 2022
Examiner
OVALLE JR., DAVID MESQUITI
Art Unit
3669
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Volvo Autonomous Solutions AB
OA Round
2 (Final)
Grant Probability: 100% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Grants 100% — above average
Career Allow Rate: 100% (4 granted / 4 resolved; +48.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Typical Timeline: 3y 0m average prosecution
Career History: 35 total applications across all art units; 31 currently pending

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 58.1% (+18.1% vs TC avg)
§102: 16.9% (-23.1% vs TC avg)
§112: 16.9% (-23.1% vs TC avg)
Comparison baseline: Tech Center average estimate. Based on career data from 4 resolved cases.

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

2. This Office Action is in response to the Applicant's filing on 10/13/2025. Claims 1-15 were previously pending, of which claims 1, 11, and 14-15 have been amended and claim 16 has been newly added. Accordingly, claims 1-16 are currently pending and are examined below.

Response to Arguments

3. Applicant's "Amendment and Remarks" (pages 8-14, filed 10/13/2025) have been fully considered. The remarks are addressed in the order in which they were presented.

4. With respect to the rejection under 35 U.S.C. 112, the amendment has been fully considered and renders the rejection of the previously pending claims moot; that rejection is therefore withdrawn. However, support for new claim 16 could not be found in the specification, so a rejection under 35 U.S.C. 112 is applied to that claim below.

5. With respect to the rejection under 35 U.S.C. 103, Applicant's "Amendment and Remarks" have been fully considered and are persuasive: the prior art of record does not appear to disclose the limitations of claim 1 as previously mapped. However, given the change in claim scope introduced by the amendments, a further search found that Palanisamy, Yuandu, Will, Carl, and Ahuja together disclose the limitations of claim 1 as mapped in this final Office Action below.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 16 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. For interpretation purposes, claim 16 will be examined as a reinforcement learning method in which the same environment is used repeatedly, but with a different value, number, or type of data on each repeated use of that environment.
Claim Rejections - 35 USC § 103

8. Claims 1-3 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over US20190278282A1 (hereinafter "Palanisamy"), in view of NPL – Implicit Quantile Networks (hereinafter "Will"), further in view of NPL – Ensemble Quantile Networks (hereinafter "Carl"), further in view of NPL – Exploring Uncertainty in Deep Learning (hereinafter "Yuandu"), and further in view of US20200226430A1 (hereinafter "Ahuja").

9. Regarding claims 1-2 and 14-15, Palanisamy teaches a method of controlling actuators in an autonomous vehicle using a reinforcement learning (RL) agent ([0008], [0024], [0065]): a system that uses reinforcement learning to generate a control algorithm for autonomous vehicle control [0065] by training an agent [0008]. Palanisamy also teaches a controller (22) that includes an automated driving system (ADS) (24) capable of controlling actuators in the vehicle [0024].

Palanisamy teaches a plurality of training sessions ([0065], [0067]): Palanisamy's curriculum-based reinforcement learning implies a plurality of training sessions, since the tasks are of different difficulties and each task is its own training session. Because there are multiple tasks of different difficulties, different training sessions occur.

Palanisamy teaches training in which the RL agent interacts with an environment including the autonomous vehicle, wherein in each training session the environment has a different initial value ([0008], [0064]-[0065]): the RL agent interacts with the environment because, as the agent performs its tasks, it sends a control signal to the vehicle to act on the environment [0008]. Both the vehicle and the RL agent therefore interact with the environment through the processor (310), which is operative to generate an action policy for controlling the vehicle and environmental state information. Because Palanisamy teaches curriculum-based RL, each task/training session presents an environment of a different difficulty, which can be interpreted as a different initial value since no task is the same as any other.

Palanisamy teaches decision-making in which the RL agent outputs at least one tentative decision relating to control of the autonomous vehicle [0065]: Palanisamy's RL agent generates a control algorithm for autonomous vehicle control, and such RL algorithms entail tentative decisions because they learn from the errors and trials made during training. For an error or trial to occur at the start of the decision-making process, a decision made without certainty or confidence must have occurred.

10. Palanisamy does not explicitly teach yielding a state-action quantile function dependent on state and action. However, Will teaches this limitation (Section 3, Implicit Quantile Networks): Will teaches an Implicit Quantile Network (IQN) whose quantile function depends on state-action quantiles. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify the disclosure of Palanisamy with the teachings of Will to further capture uncertainty and provide a more comprehensive understanding of potential rewards or risks.
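For readers tracing the §103 mapping, here is a minimal sketch of how an implicit quantile network of the kind Will describes can yield a state-action quantile function: the network maps a state and a sampled quantile level tau to per-action quantile values Z_tau(s, a). All module names and dimensions are illustrative assumptions, not code from the application or the cited art.

```python
# Minimal IQN-style sketch (illustrative assumption, not code from the
# application or the cited art): maps (state, quantile level tau) to
# per-action quantile values Z_tau(s, a).
import math

import torch
import torch.nn as nn


class ImplicitQuantileNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, embed_dim: int = 64):
        super().__init__()
        self.state_net = nn.Sequential(nn.Linear(state_dim, embed_dim), nn.ReLU())
        # Cosine embedding of the quantile level tau, as in the IQN literature.
        self.tau_net = nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.ReLU())
        self.head = nn.Linear(embed_dim, n_actions)
        self.register_buffer("freqs", torch.arange(1, embed_dim + 1).float())

    def forward(self, state: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
        # state: (batch, state_dim); tau: (batch, n_tau), each entry in (0, 1).
        phi_s = self.state_net(state)                              # (batch, embed)
        cos = torch.cos(math.pi * tau.unsqueeze(-1) * self.freqs)  # (batch, n_tau, embed)
        phi_tau = self.tau_net(cos)                                # (batch, n_tau, embed)
        # Hadamard product fuses the state and quantile embeddings.
        return self.head(phi_s.unsqueeze(1) * phi_tau)             # (batch, n_tau, n_actions)
```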
11. Palanisamy does not explicitly teach a first uncertainty estimation on the basis of a variability measure, relating to a variability with respect to quantile, of an average of the plurality of state-action quantile functions evaluated for a state-action pair corresponding to the tentative decision. However, Carl, in the same field of endeavor, teaches this limitation (Section II, Approach, B. Aleatoric Uncertainty Estimation): a first uncertainty estimate (aleatoric uncertainty) is derived from a quantile function and relates to variability. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify the disclosure of Palanisamy with the teachings of Carl to obtain an aleatoric uncertainty estimate for determining inherent randomness that is unavoidable and cannot be reduced.

12. Palanisamy does not explicitly teach a second uncertainty estimation on the basis of a variability measure, relating to an ensemble variability, for the plurality of state-action quantile functions evaluated for a state-action pair corresponding to the tentative decision. However, Yuandu teaches this limitation (Section 3.2.1, Hybrid Loss Function for Prediction Intervals, Fig. 1): Yuandu determines a second uncertainty (epistemic uncertainty) using ensemble variability, with each neural network contributing some diversity from which the epistemic uncertainty estimate is determined. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify the disclosure of Palanisamy with the teachings of Yuandu to obtain an epistemic uncertainty estimate for determining uncertainty that arises from a lack of knowledge or an imperfect understanding.

13. Palanisamy does not explicitly teach vehicle control wherein the at least one tentative decision is executed in dependence on the first and/or second estimated uncertainty. However, Ahuja, in the same field of endeavor, teaches this limitation [0126]: Ahuja teaches capturing uncertainty estimates to increase trust in the model used for decision-making in an advanced driver assistance system (ADAS). This helps avoid overconfident decisions, meaning that decisions are made based on either epistemic or aleatoric uncertainty [0078]-[0079]. Helping avoid overconfident decisions implies that the decision process is a gating mechanism: decisions remain tentative until uncertainty is quantified and deemed acceptable. Such gating, whether via a probabilistic variance or some form of distributional return, converts uncertainty from a passive quantity into an active check that influences whether and when to act. This aligns with settings where it is preferable to withhold commitment rather than risk acting on unreliable or low-confidence decisions.
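To make the two claimed variability measures concrete, here is a hedged sketch (continuing the illustrative IQN class above; nothing in it comes from the record) that estimates the first (aleatoric) uncertainty as variability across quantiles of the ensemble-averaged quantile function, and the second (epistemic) uncertainty as ensemble variability, for one tentative decision's state-action pair.

```python
# Hedged sketch (assumed, not from the record): for one tentative decision
# (state, action), estimate aleatoric uncertainty from the spread across
# quantiles of the ensemble-averaged quantile function, and epistemic
# uncertainty from disagreement across ensemble members.
import torch


def uncertainties(ensemble, state, action, n_tau: int = 32):
    tau = torch.rand(state.shape[0], n_tau)  # sampled quantile levels in (0, 1)
    # Quantile values of the chosen action for each ensemble member: (K, batch, n_tau)
    zs = torch.stack([net(state, tau)[..., action] for net in ensemble])

    mean_quantile_fn = zs.mean(dim=0)         # average of the K quantile functions
    aleatoric = mean_quantile_fn.var(dim=-1)  # variability with respect to quantile

    expected_values = zs.mean(dim=-1)         # per-member estimate of E[Z(s, a)]
    epistemic = expected_values.var(dim=0)    # ensemble variability (cf. claim 7)
    return aleatoric, epistemic
```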
One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify the disclosure of Palanisamy with the teachings of Ahuja to reach higher-confidence decisions from decisions that are not yet final or are of low confidence.

14. Regarding claims 2 and 15, continuing from above, Palanisamy teaches additional training in which the RL agent interacts with a second environment including the autonomous vehicle, wherein the second environment differs from the first environment by an increased exposure to a subset of state-action pairs for which the first and/or second estimated uncertainty is relatively higher ([0064]-[0065], [0067]). Palanisamy's curriculum-based reinforcement learning implies a plurality of training sessions, since the tasks are of different difficulties and each task is its own training session. Different training sessions therefore occur in different environments with which the RL agent interacts ([0035], [0037]). Since the tasks vary in difficulty, the higher-difficulty tasks can be regarded as tasks with relatively higher uncertainty as the tasks scale up.

15. Regarding claim 3, Palanisamy teaches the method of claim 1, wherein the RL agent includes at least one neural network ([0072], Fig. 4): Figure 4 incorporates a neural network to train the action policy (445).

16. Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Palanisamy in view of Will, Carl, Yuandu, and Ahuja as applied above, and further in view of NPL – Estimating Risk and Uncertainty in Deep Reinforcement Learning (hereinafter "Clements").

17. Regarding claim 6, Palanisamy as modified by Will, Carl, Yuandu, and Ahuja does not explicitly teach the method of claim 1, wherein the uncertainty estimations relate to a combined aleatoric and epistemic uncertainty. However, Clements, in the same field of endeavor, teaches this limitation (Section 2.3, Fig. 1: "In figure 1, we provide an illustration of the uncertainties measured with σ_epistemic and σ_aleatoric on a toy dataset."): both epistemic and aleatoric uncertainties are measured together and used to derive a combined uncertainty estimate. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Clements to obtain a total uncertainty estimate that accounts for both epistemic and aleatoric uncertainty, covering the random scenarios that may occur when an autonomous vehicle has to make a decision (Section 2.3, Fig. 1).
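If the two estimates are treated as variance components, a combined measure of the kind claim 6 recites can be sketched in one line. This reflects one common convention, assumed here for illustration, not the applicant's definition.

```python
# Hedged sketch (a common convention, not the applicant's definition): treat
# the two estimates as additive variance components of a total uncertainty.
def total_uncertainty(aleatoric: float, epistemic: float) -> float:
    return aleatoric + epistemic
```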
18. Regarding claim 7, Palanisamy does not explicitly teach the method of claim 1, wherein the variability measure used in the second uncertainty estimation is applied to sampled expected values of the respective state-action quantile functions. However, Clements, in the same field of endeavor, teaches this limitation (Section 2.3, Approximate Uncertainties, para. 1): Clements applies both the aleatoric and epistemic uncertainty measures to samples drawn from the quantile functions. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Clements to further reduce the second uncertainty estimate through additional training by applying it to samples of the quantile functions.

19. Regarding claim 8, Palanisamy does not explicitly teach the method of claim 1, wherein the variability measure is one or more of: a variance, a range, a deviation, a variation coefficient, an entropy. However, Clements, in the same field of endeavor, teaches this limitation (Section 3.1): the variability measure is a variance. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Clements to give the learning agent more situations to learn from during training, given the variance present in certain scenarios.

20. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Palanisamy in view of Will, Carl, Yuandu, and Ahuja as applied above, and further in view of US20200364557A1 (hereinafter "Ostrovski").

21. Regarding claim 13, Palanisamy does not explicitly teach the method of claim 1, wherein the decision-making is based on a central tendency of weighted averages of the respective state-action quantile functions. However, Ostrovski, in the same field of endeavor, teaches this limitation ([0054]-[0056]): a central tendency is computed when determining an action from a quantile function, and the central tendency can be a mean, mode, or median, all of which are measures of average. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Ostrovski to make more confident decisions.

22. Claims 4-5, 9-10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Palanisamy in view of Will, Carl, Yuandu, and Ahuja as applied above, and further in view of NPL – Auto-Driving Policies in Highway (hereinafter "Molaie") and NPL – Tactical Decision-Making in Autonomous Driving (hereinafter "Hoel").
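As an illustration of decision-making from a central tendency (the limitation of claim 13), here is a brief sketch that selects the action with the highest mean quantile value, an estimate of expected return averaged over ensemble members and quantile levels. The API continues the illustrative sketches above and is an assumption, not the mapped references' code.

```python
# Hedged sketch (illustrative API from the sketches above): choose the action
# whose mean quantile value, averaged over ensemble members and quantile
# levels, is highest -- a central tendency of the quantile functions.
import torch


def select_action(ensemble, state, n_tau: int = 32) -> int:
    tau = torch.rand(state.shape[0], n_tau)
    zs = torch.stack([net(state, tau) for net in ensemble])  # (K, batch, n_tau, actions)
    q_values = zs.mean(dim=(0, 2))                           # (batch, actions)
    return int(q_values.argmax(dim=-1))                      # assumes batch size 1
```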
23. Regarding claim 4, Palanisamy does not explicitly teach the method of claim 1, wherein each of the training sessions employs an implicit quantile network, IQN, from which the RL agent is derivable. However, Molaie, in the same field of endeavor, teaches this limitation (Section I, Introduction, para. 5): Molaie incorporates an Implicit Quantile Network (IQN) into the training of an RL agent. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Molaie to improve the decision-making performance of the RL agent in an autonomous vehicle.

24. Regarding claim 5, Palanisamy does not explicitly teach the method of claim 4, wherein the initial value of a training session corresponds to a randomized prior function, RPF. However, Hoel, in the same field of endeavor, teaches this limitation (Section II-B: "A better Bayesian posterior is obtained if a randomized prior function (RPF) is added to each ensemble member [27]."): Hoel incorporates a randomized prior function (RPF). One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Hoel to distribute returns/rewards more efficiently for each action performed by the RL agent.

25. Regarding claim 9, Palanisamy does not explicitly teach the method of claim 1, wherein the tentative decision is executed only if the first and second estimated uncertainties are less than respective predefined thresholds. However, Hoel, in the same field of endeavor, teaches this limitation (Section II-C): a predefined threshold serves as a confidence scale when the agent makes its decision. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Hoel to provide a confidence scale that lets the agent make more confident decisions.
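For context on claim 5's limitation, here is a short sketch of a randomized-prior-function ensemble member in the style of the work Hoel cites. The scaling factor beta and the module structure are illustrative assumptions.

```python
# Hedged sketch (assumption): a randomized-prior-function ensemble member in
# the style of the work Hoel cites -- a trainable network plus a fixed,
# randomly initialized prior network that is never updated.
import torch
import torch.nn as nn


class RPFMember(nn.Module):
    def __init__(self, trainable: nn.Module, prior: nn.Module, beta: float = 3.0):
        super().__init__()
        self.trainable, self.prior, self.beta = trainable, prior, beta
        for p in self.prior.parameters():
            p.requires_grad_(False)  # the random prior supplies the fixed "initial value"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.trainable(x) + self.beta * self.prior(x)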
26. Regarding claim 10, Palanisamy as modified by Will, Carl, Yuandu, and Ahuja does not explicitly teach the method of claim 9, wherein the decision-making includes the RL agent outputting multiple tentative decisions, and the vehicle control includes sequential evaluation of the tentative decisions with respect to their estimated uncertainties. However, Hoel, in the same field of endeavor, teaches the RL agent outputting multiple tentative decisions (Section II-C): the agent goes through testing episodes, which inherently means it outputs multiple tentative decisions during training. Hoel also teaches vehicle control including sequential evaluation of the tentative decisions with respect to their estimated uncertainties (Section II-C): it is inherent that, when the vehicle makes decisions during training, it works through multiple tentative decisions sequentially under their respective estimated uncertainties. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Hoel to have the agent work through multiple tentative decisions so as to train it to make more confident decisions.

27. Regarding claim 12, Palanisamy as modified by Will, Carl, Yuandu, and Ahuja does not explicitly teach the method of claim 1, wherein the decision-making includes tactical decision-making. However, Hoel, in the same field of endeavor, teaches this limitation (Section III): Hoel makes tactical decisions. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Hoel to have the vehicle make tactical decisions such as changing lanes or stopping at an intersection.

28. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Palanisamy in view of Will, Carl, Yuandu, Ahuja, Molaie, and Hoel as applied above, and further in view of US20220055689A1 (hereinafter "Mandlekar").

29. Regarding claim 11, Palanisamy as modified by Will, Carl, Yuandu, and Ahuja does not explicitly teach the method of claim 10, wherein a backup decision, which is optionally based on a backup policy, is executed if the sequential evaluation does not return a tentative decision to be executed. However, Mandlekar, in the same field of endeavor, teaches this limitation ([0074], [0230]): the GPU of the infotainment SoC (1430) may perform self-driving functions such as putting the vehicle (1400) into a safe-stop mode, which constitutes a backup decision representative of safe behavior. This mode activates only when a primary controller fails, which can be regarded as the sequential evaluation not returning a tentative decision, since the failure of the primary controller inherently means that no tentative decision will be executed. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Mandlekar to provide a safety precaution so that, in case no tentative decision is executed, the agent controlling the autonomous vehicle has a backup plan for keeping the vehicle safe in that event.
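Putting claims 9-11 together, here is a hedged sketch of the gating these rejections describe: tentative decisions are evaluated sequentially, best-first, and the first whose estimated uncertainties fall below predefined thresholds is executed, with a backup policy supplying the decision otherwise. Threshold values and data shapes are illustrative assumptions.

```python
# Hedged sketch (thresholds and data shapes are illustrative assumptions):
# tentative decisions are evaluated sequentially, best-first; the first whose
# uncertainties clear both predefined thresholds is executed, and a backup
# policy supplies the decision if none qualifies.
def choose(tentative_decisions, backup_policy, state,
           aleatoric_max: float = 0.5, epistemic_max: float = 0.1):
    # tentative_decisions: iterable of (action, aleatoric, epistemic), best-first
    for action, aleatoric, epistemic in tentative_decisions:
        if aleatoric < aleatoric_max and epistemic < epistemic_max:
            return action            # claims 9-10: first decision that passes the gate
    return backup_policy(state)      # claim 11: backup decision otherwise
```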
30. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Palanisamy in view of Will, Carl, Yuandu, and Ahuja as applied above, and further in view of US20170278018A1 (hereinafter "Mnih").

31. Regarding claim 16, Palanisamy as modified by Will, Carl, Yuandu, and Ahuja does not explicitly teach the method of claim 1, wherein the environment is identical but has a different initial value in each of the training sessions. However, Mnih teaches this limitation ([0048], [0064], [0094], Fig. 2): Mnih teaches an agent that interacts with the environment [0048] and incorporates a training loop [0064]; step S212 loops back to S202, where new experience data is stored and older experience data is discarded. The entire training procedure runs within the same neural network (environment), which constitutes starting in the same environment but with a different value on each return to S202. One of ordinary skill in the art, before the effective filing date of the instant application and with a reasonable expectation of success, would have been motivated to modify Palanisamy as modified by Will, Carl, Yuandu, and Ahuja with the teachings of Mnih to further train the neural network effectively.
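For claim 16 as interpreted above, here is a minimal sketch of reusing an identical environment with a different initial value per training session. The env/agent API (reset, step, act, learn) is hypothetical.

```python
# Hedged sketch (env/agent API is hypothetical): the same environment is
# reused across training sessions, reset with a different initial value
# (here, a per-session seed) each time -- one reading of claim 16 as
# interpreted above.
def run_training_sessions(env, agent, n_sessions: int = 10):
    for session in range(n_sessions):
        state = env.reset(seed=session)   # identical environment, new initial value
        done = False
        while not done:
            action = agent.act(state)
            state, reward, done = env.step(action)
            agent.learn(state, reward, done)
```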
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension-of-time policy set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID MESQUITI OVALLE JR., whose telephone number is (571) 272-6229. The examiner can normally be reached Monday-Friday, 7:30am-5pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Erin Piateski, can be reached at (571) 270-7429. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

/DAVID MESQUITI OVALLE/
Examiner, Art Unit 3669

/Erin M Piateski/
Supervisory Patent Examiner, Art Unit 3669

Prosecution Timeline

Apr 25, 2022
Application Filed
Jul 21, 2025
Non-Final Rejection — §103, §112
Oct 13, 2025
Response Filed
Dec 01, 2025
Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 100%
With Interview: 99% (+0.0%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
