Prosecution Insights
Last updated: April 19, 2026
Application No. 18/116,364

MIXED-FORMAT LABELS FOR PATHOLOGY DETECTION AND LOCALIZATION IN MAGNETIC RESONANCE (MR) IMAGING

Final Rejection §103
Filed: Mar 02, 2023
Examiner: TSAI, TSUNG YIN
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Covera Health
OA Round: 4 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 11m
With Interview: 93%

Examiner Intelligence

Grants 82% — above average
Career Allow Rate: 82% (804 granted / 984 resolved; +19.7% vs TC avg)
Interview Lift: +10.9% (moderate lift, measured across resolved cases with an interview)
Avg Prosecution: 2y 11m typical timeline; 31 applications currently pending
Career History: 1,015 total applications across all art units
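The headline figures above are mutually consistent. A quick sketch reproduces them from the raw counts, assuming the reported interview lift is additive in percentage points (an assumption, but one that matches the displayed 93%):

```python
# Figures reported above for this examiner; the +10.9% interview lift is
# assumed (not stated) to be additive in percentage points.
granted, resolved = 804, 984
interview_lift = 0.109

allow_rate = granted / resolved
print(round(allow_rate * 100))                     # 82 (career allow rate)
print(round((allow_rate + interview_lift) * 100))  # 93 (with interview)
```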

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 984 resolved cases.
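One internal consistency note on the table above: each "vs TC avg" delta implies the same Tech Center baseline for every statute. A quick sketch (rates and deltas copied from the table; the flat 40% baseline is inferred, not stated in the source):

```python
# Statute rates and "vs TC avg" deltas as listed above.
rates = {
    "§101": (3.6, -36.4),
    "§103": (58.5, 18.5),
    "§102": (22.8, -17.2),
    "§112": (4.3, -35.7),
}

# The implied Tech Center baseline for each statute is rate - delta.
baselines = {s: round(r - d, 6) for s, (r, d) in rates.items()}
print(baselines)  # every statute implies the same 40.0% baseline
```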

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of claims: claims 1-16 are examined below.

Response to Arguments

Applicant's arguments filed 11/5/2025 have been fully considered but they are not persuasive.

Applicant's remark (pages 7-10): Applicant argued that the cited art lacks a teaching of the new claim amendment to the independent claim. Please see the Remarks for more detail.

Examiner's response: Examiner respectfully disagrees. An updated search found that Kiraly et al (US 2018/0240233) teaches a generated convoluted map, a Gaussian kernel, and peak finding on the generated map to find a candidate anatomical defect location of interest in claims 2, 10 and 15. The combined teaching of Zhou et al (US 2017/0263023) in view of Nagenthiraja et al (US 2013/0266197) and Kiraly et al teaches the new claim amendment of claim 1. Please see the Office Action below for more detail.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-6, and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al (US 2017/0263023) in view of Nagenthiraja et al (US 2013/0266197) and Kiraly et al (US 2018/0240233).
Claim 1: Zhou et al (US 2017/0263023) teaches the following subject matter:

A method comprising: obtaining a radiological image, the radiological image corresponding to an imaged anatomical area (figure 3 and 0018 teach MRI volume/3D imaging (radiological image), where figure 3 shows the target anatomical structure imaged in a 3D medical volume);

generating, based on processing the radiological image using a semantic segmentation neural network, a target map corresponding to a plurality of candidate anatomical defect locations within the cropped radiological image (figure 3 step 304 teaches segmenting the target structure in the 3D medical volume, 0019 teaches the detail of step 304 with the use of segmentation with machine learning (neural network), and figure 3 part 308 teaches highlighting the location of the synopsis volume corresponding to the location in the original 3D medical volume (target map with cropped region of interest); figure 5 and 0029 teach the target anatomical object from the tapestry image on the 2D images (target map)) comprising a subset of the radiological image (figure 3 part 308 teaches highlighting the location of the synopsis volume corresponding to the location in the original 3D medical volume (target map with cropped region of interest), where the cropped region of interest is the subset of the radiological image), wherein the target map comprises predicted positional information generated as a prediction output of the semantic segmentation neural network (0018-0020, specifically 0020, teach the machine-learning (MSL) based segmentation (semantic segmentation neural network) of the target anatomical structure (target map), with MSL-based 3D object detection outputting an estimate (prediction) of position (positional information));

generating at least one volume of interest (VOI) centered around a particular candidate anatomical defect location within the cropped radiological image (figure 3 part 308 teaches highlighting the location of the synopsis volume corresponding to the location in the original 3D medical volume (target map with cropped region of interest, where the cropped ROI is the VOI)); and

classifying, using a classification neural network, the particular candidate anatomical defect location within the cropped radiological image, wherein classifying the particular candidate anatomical defect location includes determining categorical information corresponding to a predicted pathology associated with the particular candidate anatomical defect location (0020-0022 teach a separate machine learning classifier for 3D object detection by position, orientation, and scale of the object with the segmented chambers of the 3D image; 0028-0029 teach a machine learning classifier (neural network) for a cropped target anatomical structure such as a lesion (anatomical defect) within the 2D and 3D images by means of landmarks, classifier score, probability score and/or relative position of the liver due to specific predetermined patterns; 0028 teaches objects with landmarks to determine abnormality of lesions and tumors such as of the liver and lungs (categorical information of predicted pathology)), the categorical information indicative of a predicted defect type of the predicted pathology and a predicted severity grade for the predicted defect type (0028 teaches objects with landmarks to determine (predict) abnormality of lesions and tumors such as of the liver and lungs (categorical information of predicted pathology), and 0020 teaches the use of MSL detection to estimate the position, orientation, and scale of the target to determine the object localization stage due to the transformation and mean shape of the object (one of ordinary skill in the art would view "stage" in the context of a liver/lung tumor as the stage of cancer, i.e., the severity of the defect type)).
Zhou et al does not teach the following subject matter: generating, using a Gaussian kernel of a morphological peak-finding algorithm, a convoluted map including a respective peak for each candidate anatomical defect location of the target map; identifying, using the morphological peak-finding algorithm, a particular candidate anatomical defect location from the plurality of candidate anatomical defect locations, wherein the location is associated with a greatest peak value in the convoluted map.

Nagenthiraja et al (US 2013/0266197) teaches the following subject matter: generating, using a Gaussian kernel of a morphological peak-finding algorithm, a convoluted map including a respective peak for each candidate anatomical defect location of the target map (0187 teaches image/map convolution by a Gaussian kernel with morphological greyscale reconstruction, with high peaks determined by the kernel, applied to lesion (defect/target) segmentation); identifying, using the morphological peak-finding algorithm, a particular candidate anatomical defect location from the plurality of candidate anatomical defect locations of the target map, wherein the particular candidate anatomical defect location is associated with a greatest peak value in the convoluted map (0042-0045 teach operations (peak finding) for erosion, dilation, opening/closing, and high peaks corresponding to a maximum of a tissue (lesion/target/anatomical) concentration curve measured in a given position (defect location)).

Zhou et al and Nagenthiraja et al are both in the field of image analysis, especially classification and localization of anatomical defects in three-dimensional (3D, voxel or volume) image data, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Zhou et al by Nagenthiraja et al, where such use of a Gaussian kernel with morphological peaks enables suitable automatic lesion segmentation as disclosed by Nagenthiraja et al in paragraph 0187.

Zhou et al and Nagenthiraja et al do not teach the following subject matter: wherein generating the convoluted map includes convolving the target map with the Gaussian kernel to obtain the respective peak for each candidate anatomical defect location; and the identifying includes selecting, from the convoluted map.

Kiraly et al (US 2018/0240233) teaches the following subject matter: wherein generating the convoluted map includes convolving the target map with the Gaussian kernel to obtain the respective peak for each candidate anatomical defect location; and the identifying includes selecting, from the convoluted map (claims 1, 10 and 15 detail a map generated by a convolutional (generating convoluted map) encoder-decoder with a Gaussian distribution (Gaussian kernel) in the vicinity (target map) of a detected location (anatomical defect location identified) due to the intensity value at the peak).

Zhou et al, Nagenthiraja et al, and Kiraly et al are all in the field of image analysis, especially classification and localization of anatomical defects in three-dimensional (3D, voxel or volume) image data, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Zhou et al and Nagenthiraja et al by Kiraly et al regarding generating a map with a Gaussian kernel in the area of interest/target, because training a convolutional network to output a map with the detected peak is advantageous for teaching tumor classes as disclosed by Kiraly et al in paragraph 0023.
Claim 3: Zhou et al teaches: The method of claim 1, wherein: the classification neural network is trained using a plurality of training radiological images; and each respective training radiological image of the plurality of training radiological images is associated with a categorical label indicative of an anatomical defect category present in the respective training radiological image (0020 teaches learning by training with annotated data (categorical label) with the use of a separate machine learning classifier; 0027-0028 teach that anatomical objects of interest are annotated training data, where 0028 details that such anatomical objects of interest include landmark objects as well as lesions, abnormalities, and tumors (defect category)).

Claim 4: Zhou et al teaches: The method of claim 3, wherein: the plurality of training radiological images are associated with categorical labels and positional labels; the semantic segmentation neural network is trained based on the plurality of training radiological images and the positional labels (0020 details that the segmentation of the target anatomical structure is estimated by position, orientation, and scale in the medical image data using a series of detectors); and the classification neural network is trained based on the plurality of training radiological images and the categorical labels (0020 and 0027 teach that a separate machine learning classifier is trained on annotated training data (categorical labels), used on MRI medical images (radiological images) as above).

Claim 5: Zhou et al teaches: The method of claim 4, wherein: the classification neural network is trained based on the plurality of training radiological images, the categorical labels, and the positional labels (0020 teaches that a separate machine learning classifier is trained based on annotated training data for each of these steps. For example, separate probabilistic boosting tree (PBT) classifiers can be trained for position estimation, position-orientation estimation, and position-orientation-scale estimation. This object localization stage results in an estimated transformation (position, orientation, and scale) of the object, and a mean shape of the object (i.e., the mean shape of the target anatomical object in the annotated training images) is aligned with the 3D volume).

Claim 6: Zhou et al teaches: The method of claim 1, wherein the at least one VOI is generated centered around a candidate anatomical defect coordinate determined based on the target map (0023 teaches segmented target anatomical structures; in a possible implementation, the bounding box for each target anatomical structure can be defined as a bounding box centered at the center of mass of the segmented structure and having an orientation and scale such that the entire segmented structure is encompassed in the smallest possible bounding box).

Claim 11: Zhou et al teaches: The method of claim 1, wherein: the semantic segmentation neural network is trained using a plurality of training radiological images, wherein each respective training radiological image of the plurality of radiological images is associated with a positional label indicative of a position of an anatomical defect within the respective training radiological image (0020 details that the segmentation of the target anatomical structure is estimated by position, orientation, and scale in the medical image data using a series of detectors).

Claim 12: Zhou et al teaches: The method of claim 11, wherein the positional label comprises one or more of a point-landmark or a bounding box associated with the position of the anatomical defect within the respective training radiological image (0023 teaches that a bounding box is determined for each of the segmented target anatomical structures in the 3D medical volume in the region of interest relative to the position).
Claim 13: Zhou et al teaches: The method of claim 1, further comprising: generating a cropped radiological image based on cropping the radiological image around an anatomical area of interest; and generating the target map based on processing the cropped radiological image using the semantic segmentation neural network (0020-0022 teach a separate machine learning classifier for 3D object detection by position, orientation, and scale of the object with the segmented chambers of the 3D image; 0028-0029 teach a machine learning classifier (neural network) for a cropped target anatomical structure such as a lesion (anatomical defect) within the 2D and 3D images by means of landmarks, classifier score, probability score and/or relative position of the liver due to specific predetermined patterns).

Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al (US 2017/0263023), Nagenthiraja et al (US 2013/0266197) and Kiraly et al (US 2018/0240233) in view of Keieger et al (US 2020/0194117).

Claim 2: Zhou et al (US 2017/0263023), Nagenthiraja et al, and Kiraly et al teach all the subject matter, but not the following, which is taught by Keieger et al: The method of claim 1, wherein the classification neural network comprises a three-dimensional (3D) ResNet50 convolutional neural network (CNN) (the abstract teaches a three-dimensional model processed by a machine learning model with segmentation of the image to identify a region of interest (paragraph 0100), where the neural network is ResNet50 in paragraph 0206).

Zhou et al, Nagenthiraja et al, Kiraly et al, and Keieger et al are all in the field of image analysis, especially the use of machine learning for segmentation of 3D medical images for a region of interest, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Zhou et al, Nagenthiraja et al, and Kiraly et al by Keieger et al, where the use of ResNet50 achieves a faster loss convergence and improves the learning rate by 3.6 times as disclosed by Keieger et al in 0206.

Claim 9: Zhou et al (US 2017/0263023), Nagenthiraja et al, and Kiraly et al teach all the subject matter, but not the following, which is taught by Keieger et al: The method of claim 1, wherein the semantic segmentation neural network comprises a Residual UNet neural network (the abstract teaches a three-dimensional model processed by a machine learning model with segmentation of the image to identify a region of interest (paragraph 0100), where the neural network is ResNet50 in paragraph 0206).

Zhou et al, Nagenthiraja et al, Kiraly et al, and Keieger et al are all in the field of image analysis, especially the use of machine learning for segmentation of 3D medical images for a region of interest, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Zhou et al, Nagenthiraja et al, and Kiraly et al by Keieger et al, where the use of ResNet50 achieves a faster loss convergence and improves the learning rate by 3.6 times as disclosed by Keieger et al in 0206.

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al (US 2017/0263023) and Nagenthiraja et al (US 2013/0266197) and Kiraly et al (US 2018/0240233) in view of Madabhushi et al (US 2021/0169349):

Claim 7: Zhou et al, Nagenthiraja et al, and Kiraly et al teach the following subject matter: The method of claim 1, further comprising: analyzing, using the morphological peak-finding algorithm, each candidate anatomical defect location of the plurality of candidate anatomical defect locations (Nagenthiraja et al teaches this in paragraph 0187 as cited above); identifying, using the morphological peak-finding algorithm, the particular candidate anatomical defect location as a best candidate from the plurality of candidate anatomical defect locations, based on having the greatest peak value (Nagenthiraja et al teaches this in paragraph 0187 as cited above, where the candidate is the lesion target for segmentation).

Zhou et al, Nagenthiraja et al, and Kiraly et al do not teach the following: generating the VOI centered at the predicted positional information predicted by the semantic segmentation neural network for the particular candidate anatomical defect location identified by the morphological peak-finding algorithm as the best candidate.

Madabhushi et al teaches the following subject matter: generating the VOI centered at the predicted positional information predicted by the semantic segmentation neural network for the particular candidate anatomical defect location identified by the peak-finding algorithm as the best candidate (0032 details a medical image such as an MRI (3D, volume) of a tumor for segmentation scanning; 0047 teaches the use of peak enhancement by the tumor as a discriminating feature used by the classifier, with segmentation of the region containing those peaks; figure 21 and 0142 detail the use of a machine learning classifier on voxels due to these intensities (peaks); 0129 teaches rotation of the tumor about the center at the X, Y, Z position (VOI center), where X, Y, Z is the positional information).

Zhou et al, Nagenthiraja et al, Kiraly et al, and Madabhushi et al are all in the field of image analysis, especially the use of machine learning for segmentation of 3D medical images for a region of interest, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Zhou et al, Nagenthiraja et al, and Kiraly et al by Madabhushi et al, where the use would help in determining a therapy response and/or prognosis of a tumor based on morphological and/or functional TAV features and/or training a model to determine such a therapy response and/or prognosis, as disclosed by Madabhushi et al in paragraph 0142.
Claim 8: Zhou et al teaches: The method of claim 7, wherein the classification neural network receives a corresponding VOI centered around each candidate anatomical defect location of the plurality of candidate anatomical defect locations during training (0020-0023, where 0020 details a classifier (neural network) with 3D object (VOI) detection in a valid space region for the target anatomical object (candidate anatomical defect) by shape, and 0023 teaches the entire segmented structure encompassed in the smallest bounding box (center), where the target anatomical structure is centered).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al (US 2017/0263023) and Nagenthiraja et al (US 2013/0266197) and Kiraly et al (US 2018/0240233) in view of NIELSEN et al (US 2023/0252622).

Claim 10: Zhou et al (US 2017/0263023), Nagenthiraja et al, and Kiraly et al teach all the subject matter, such as the defect location and target map below, but not the following, which is taught by NIELSEN et al: The method of claim 1, wherein: the target map is indicative of the plurality of candidate anatomical defect locations; and each candidate anatomical defect location of the plurality of candidate anatomical defect locations is associated with an isotropic Gaussian sphere (0167 teaches the use of an isotropic Gaussian blob/sphere for a voxel classification map to identify defects of specific pathologies, where 0083 teaches the use of machine learning for voxel image data).

Zhou et al, Nagenthiraja et al, Kiraly et al, and NIELSEN et al are all in the field of image analysis, especially the use of machine learning for 3D medical images for a region of interest, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Zhou et al, Nagenthiraja et al, and Kiraly et al by NIELSEN et al, where the use of an isotropic Gaussian assists in feature detection that can be used on the voxel classification map to identify defects indicative of specific pathologies; in addition, defects relating to image quality issues may also be characterised and flagged, as disclosed by NIELSEN et al in 0166-0167.

Claims 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al (US 2017/0263023) and Nagenthiraja et al (US 2013/0266197) and Kiraly et al (US 2018/0240233) in view of Odry et al (US 2019/0172207).

Claim 14: Zhou et al (US 2017/0263023), Nagenthiraja et al, and Kiraly et al teach all the subject matter, such as cropping the VOI centered around the area of interest of the radiological image below, but not the following, which is taught by Odry et al: The method of claim 13, wherein cropping the radiological image around the anatomical area of interest comprises: using a deep reinforcement learning (DRL) machine learning network (figure 12 and 0058 teach the use of a deep reinforcement learning network to identify anatomy of interest for defects).

Zhou et al, Nagenthiraja et al, Kiraly et al, and Odry et al are all in the field of image analysis, especially the use of machine learning for 3D medical images for a region of interest, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Zhou et al, Nagenthiraja et al, and Kiraly et al by Odry et al, as such would provide increased accuracy of landmark identification as disclosed by Odry et al in 0058.

Claim 16: Zhou et al further teaches: The method of claim 14, wherein the cropped VOI includes all of the anatomical area of interest (figure 3 part 308 teaches highlighting the location of the synopsis volume corresponding to the location in the original 3D medical volume (target map with cropped region of interest, where the cropped ROI is the VOI); 0028-0029 teach a machine learning classifier (neural network) for a cropped target anatomical structure such as a lesion (anatomical defect) within the 2D and 3D images by means of landmarks, classifier score, probability score and/or relative position of the liver due to specific predetermined patterns).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al (US 2017/0263023) and Nagenthiraja et al (US 2013/0266197) and Kiraly et al (US 2018/0240233) in view of SENANAYAKE et al (US 2017/0027501).

Claim 15: Zhou et al (US 2017/0263023) and Nagenthiraja et al teach all the subject matter, such as cropping the VOI centered around the area of interest of the radiological image below, but not the following, which is taught by SENANAYAKE et al: The method of claim 13, wherein the anatomical area of interest comprises an anterior cruciate ligament (ACL) or a medial compartment cartilage (MCC) represented in the radiological image (figure 1 and 0024 teach classification of a patient limb for ACL; 0053 teaches the use of a neural network learning with a given data set, where 0072-0077 detail processing of 3D image data with data segmentation for ACL injury with recovery classification of the object of interest).

Zhou et al, Nagenthiraja et al, Kiraly et al, and SENANAYAKE et al are all in the field of image analysis, especially the use of machine learning for 3D medical images for a region of interest, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Zhou et al, Nagenthiraja et al, and Kiraly et al by SENANAYAKE et al, as these assessment tools help in the reduction of duration and cost by providing accurate and timely information about the subjects' knee functionality, as disclosed by SENANAYAKE et al in 0117.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kaditz et al (US 2020/0265328) teaches MODEL PARAMETER DETERMINATION USING A PREDICTIVE MODEL: analysis engine 122 may classify or segment one or more anatomical structures in sample 112 using the determined model parameters and a third predetermined predictive model (such as a third machine-learning model and/or a third neural network). For example, using the simulated or predicted response of sample 112 at the voxel level or the determined model parameters at the voxel level, the third predictive model may output the locations of different anatomical structures and/or may output classifications of different voxels (such as a type of organ, whether they are associated with a particular disease state, e.g., a type of cancer, a stage of cancer, etc.) (0056).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI whose telephone number is (571)270-1671. The examiner can normally be reached 7am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TSUNG YIN TSAI/
Primary Examiner, Art Unit 2656
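The claim element at the center of the rejection (convolving a segmentation target map with a Gaussian kernel, then selecting the candidate location with the greatest peak in the convoluted map) can be illustrated with a minimal sketch. This is a generic illustration of the technique, not code from the application or any cited reference; the function name, parameters, and toy data are hypothetical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_best_candidate(target_map, sigma=2.0, min_distance=3):
    """Smooth a segmentation target map with a Gaussian kernel and
    return the coordinate of the greatest peak among local maxima."""
    # Convolve the target map with a Gaussian kernel -> "convoluted map"
    convoluted = gaussian_filter(target_map.astype(float), sigma=sigma)
    # Morphological peak finding: a pixel is a peak if it equals the
    # local maximum of its neighborhood and is above background
    local_max = maximum_filter(convoluted, size=2 * min_distance + 1)
    peaks = (convoluted == local_max) & (convoluted > 0)
    if not peaks.any():
        return None
    # Select the candidate location with the greatest peak value
    peak_coords = np.argwhere(peaks)
    values = convoluted[peaks]
    best = peak_coords[np.argmax(values)]
    return tuple(int(c) for c in best)

# Toy 2D "target map" with two candidate defect blobs
tm = np.zeros((32, 32))
tm[8, 8] = 1.0    # weaker candidate
tm[20, 24] = 3.0  # stronger candidate
print(find_best_candidate(tm))  # (20, 24)
```

The same idea extends to 3D volumes unchanged, since `gaussian_filter` and `maximum_filter` operate on arrays of any rank.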

Prosecution Timeline

Mar 02, 2023: Application Filed
Apr 24, 2025: Non-Final Rejection — §103
Jul 28, 2025: Response Filed
Aug 04, 2025: Final Rejection — §103
Nov 05, 2025: Request for Continued Examination
Nov 14, 2025: Response after Non-Final Action
Nov 17, 2025: Non-Final Rejection — §103
Feb 16, 2026: Response Filed
Feb 23, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597118: IMAGE INSPECTION APPARATUS, IMAGE INSPECTION METHOD, AND IMAGE INSPECTION PROGRAM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597237: INFERENCE LEARNING DEVICE AND INFERENCE LEARNING METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579797: VIDEO PROCESSING METHOD AND APPARATUS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573029: IMAGE ANNOTATION USING ONE OR MORE NEURAL NETWORKS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567235: Visual Explanation of Classification (granted Mar 03, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 82%
With Interview: 93% (+10.9%)
Median Time to Grant: 2y 11m
PTA Risk: High

Based on 984 resolved cases by this examiner. Grant probability derived from career allow rate.
