Prosecution Insights
Last updated: April 19, 2026
Application No. 18/705,213

IRIS RECOGNITION APPARATUS, IRIS RECOGNITION SYSTEM, IRIS RECOGNITION METHOD, AND RECORDING MEDIUM

Non-Final OA §103
Filed
Apr 26, 2024
Examiner
KAUR, JASPREET
Art Unit
2662
Tech Center
2600 — Communications
Assignee
The University of Electro-Communications
OA Round
1 (Non-Final)
81%
Grant Probability
Favorable
1-2
OA Rounds
2y 8m
To Grant
99%
With Interview

Examiner Intelligence

Grants 81% — above average
81%
Career Allow Rate
13 granted / 16 resolved
+19.3% vs TC avg
Strong +30% interview lift
Without
With
+30.0%
Interview Lift
resolved cases with interview
Typical timeline
2y 8m
Avg Prosecution
31 currently pending
Career history
47
Total Applications
across all art units

Statute-Specific Performance

§101
17.2%
-22.8% vs TC avg
§103
53.2%
+13.2% vs TC avg
§102
7.4%
-32.6% vs TC avg
§112
15.3%
-24.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 16 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgement is made of Applicant's claim of this application being the National Stage application of the PCT Application No. PCT/SE2021/050685, filed on December 22, 2021, under 35 USC 119(a)-(d) or (f).

Information Disclosure Statement

The information disclosure statements ("IDS") filed on 04/26/2024 and 07/23/2025 have been reviewed and the listed references have been considered.

Drawings

The 13-page drawings have been considered and placed on record in the file.

Status of Claims

Claims 1-13 are pending. Claim 14 is cancelled.

Claim Objections

Claims 7, 9-10, and 12 are objected to because of the following informalities:

- Claims 7 and 9 recite "…by one or more filter processings…", which should be "…by one or more processing filters…"
- Claim 10 recites "The tris recognition apparatus…", which should be "The iris recognition apparatus…"
- Claim 10 recites "…determine a person to be who claims to be…", which should be "…determine a person to be who the person claims to be…"
- Claim 10 recites "a degree of similarity between the post-transform feature vector extracted by the post-transform feature vector extraction unit…", which should be "a degree of similarity between the extracted post-transform feature vector by the
- Claim 12 recites "…qcquire an iris image…", which should be "…acquire an iris image…"

Appropriate corrections are required.

Applicant is advised that should claim 1 be found allowable, claim 12 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 10, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Xiong et al. (CN113191260A - Translation from Espacenet) in view of Tsukizawa et al. (US 2013/0170754 A1).
Regarding claim 1, Xiong teaches "An iris recognition apparatus (Xiong paragraph [n0001] "an iris verification method and system") comprising: at least one memory that is configured to store instructions; and at least one processor that is configured to execute the instructions (Xiong paragraph [n0033] "The processor is used to read executable instructions stored in the computer-readable storage medium and execute the iris verification") to: extract a (Xiong paragraph [n0009] "the iris bounding rectangle, which is then input into the trained iris feature extraction network to obtain the iris embedding vector, and stored in the form of identity identifier-iris embedding vector").

However, Xiong does not teach "acquire an iris image including an iris of a living body; calculate a scale factor for the iris image, from a size of an iris area included in the iris image and from a desired size; generate a resolution-converted image in which resolution of the iris image is converted in accordance with the scale factor; and extract a post-transform feature vector".

In an analogous field of endeavor, Tsukizawa teaches "acquire an iris image including an iris of a living body (Tsukizawa paragraph [0061] "In step S204, eye area determination section 123 determines an eye area from the face image acquired from face detection section 121 and the facial part group acquired from facial part detection section 122"); calculate a scale factor for the iris image (Tsukizawa paragraph [0075] "In step S208, eye area image normalization section 107 calculates a scale-up/scale-down factor on the basis of the resolution target value calculated in necessary resolution estimation section 105 and the eye area actual scale value calculated in eye area actual size calculation section 102"), from a size of an iris area included in the iris image (Tsukizawa paragraph [0067] "pupil state prediction section 103 predicts an actual size of a pupil included in the eye area image acquired from eye area image acquisition section 101 on the basis of a past actual size pupil diameter retained in actual size pupil diameter storage section 104") and from a desired size (Tsukizawa paragraph [0071] "In step S207, necessary resolution estimation section 105 calculates a resolution target value on the basis of the actual scale prediction value calculated in pupil state prediction section 103 […] This resolution table associates an actual scale value candidate group of the pupil diameter and image resolutions necessary for the pupil detection in the respective actual scale value candidates"); generate a resolution-converted image in which resolution of the iris image is converted in accordance with the scale factor (Tsukizawa paragraph [0076] "eye area image normalization section 107 normalizes the eye area image acquired from eye area image acquisition section 101 on the basis of the calculated scale-up/scale-down factor. This scale-up/scale-down processing uses a method used in typical image processing such as a bilinear method and a bicubic method"); and extract a post-transform (Tsukizawa paragraph [0078] "In step S209, pupil detection section 108 detects a pupil image from the normalized eye area image acquired from eye area image normalization section 107")".

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the iris verification system taught by Xiong with the pupil detection and scaling of the eye image taught by Tsukizawa. The suggestion/motivation for doing so would have been "In a typical pupil detection method based on an image, there is utilized a first feature that brightness of a pupil part is lower than brightness of a periphery of the pupil part, or a second feature that a pupil has a circular or ellipsoidal shape. However, when resolution cannot be secured sufficiently, sometimes the contour of the pupil on the image has a polygonal shape and does not have the circular or ellipsoidal shape. When the pupil detection is performed in this situation using the above described second feature, error detection occurs frequently", as noted by the Tsukizawa disclosure in paragraph [0003]. Therefore, it would have been obvious to combine the disclosure of Xiong with the Tsukizawa disclosure to obtain the invention as specified in claim 1, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
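The disputed scale-factor limitation of claim 1 — a factor computed from the detected iris size and a desired size, then used to convert the image's resolution — reduces to simple arithmetic. The sketch below is illustrative only: the function names and the diameter-ratio formulation are assumptions, not the applicant's or the cited references' actual implementations.

```python
def scale_factor(iris_diameter_px, desired_diameter_px):
    """Ratio of the desired iris size to the size detected in the image.

    Factors > 1 call for upscaling (the super-resolution case of claim 2);
    factors < 1 call for downscaling.
    """
    if iris_diameter_px <= 0:
        raise ValueError("iris diameter must be positive")
    return desired_diameter_px / iris_diameter_px


def converted_resolution(height, width, factor):
    """Target (height, width) after applying the scale factor to the image."""
    return round(height * factor), round(width * factor)
```

For example, an iris detected at 64 px when 128 px is desired gives a factor of 2.0, turning a 120×160 eye-area crop into a 240×320 target, which an interpolation method such as bilinear or bicubic would then fill in.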
Regarding claim 2, the combination of Xiong and Tsukizawa teaches "The iris recognition apparatus according to claim 1, wherein the size of the iris area is smaller than the desired size, the scale factor is magnification (Tsukizawa paragraph [0075] "In step S208, eye area image normalization section 107 calculates a scale-up/scale-down factor on the basis of the resolution target value calculated in necessary resolution estimation section 105 and the eye area actual scale value calculated in eye area actual size calculation section 102"), the at least one processor is configured to execute the instructions to generate, as the resolution-converted image, a super-resolution image in which the resolution of the iris image is enhanced, in accordance with the magnification (Tsukizawa paragraph [0076] "eye area image normalization section 107 normalizes the eye area image acquired from eye area image acquisition section 101 on the basis of the calculated scale-up/scale-down factor. This scale-up/scale-down processing uses a method used in typical image processing such as a bilinear method and a bicubic method")."

The proposed combination, as well as the motivation for combining the Xiong and Tsukizawa references presented in the rejection of claim 1, applies to claim 2. Finally, the apparatus recited in claim 2 is met by Xiong and Tsukizawa.
Regarding claim 10, the combination of Xiong and Tsukizawa teaches "The tris recognition apparatus according to claim 1, wherein the at least one processor is configured to execute the instructions to determine a person to be who claims to be (Xiong paragraph [n0010] "For the user to be verified, their identity identifier and iris image are collected") when a matching score indicating a degree of similarity between the post-transform feature vector extracted by the post-transform feature vector extraction unit and a feature vector prepared in advance, is greater than or equal to a threshold (Xiong paragraph [n0010] "The iris embedding vector to be verified and the entered iris embedding vector are compared for similarity. The similarity threshold is used to determine whether they belong to the same person, thereby realizing identity verification"); and adjust the threshold (Xiong paragraph [n0054] "Step S30: The embedding vectors of different irises are compared by distance, and a threshold is used to determine whether different irises belong to the same person, thereby realizing the function of identity recognition") in accordance with the scale factor (Xiong paragraph [n0013] "The size adjustment module is used to adjust the iris circumscribed rectangle to an iris tensor of uniform size")."

The proposed combination, as well as the motivation for combining the Xiong and Tsukizawa references presented in the rejection of claim 1, applies to claim 10. Finally, the apparatus recited in claim 10 is met by Xiong and Tsukizawa.

Claim 12 recites a system with elements corresponding to the elements recited in apparatus claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding elements of apparatus claim 1. Additionally, the rationale and motivation to combine the Xiong and Tsukizawa references, presented in the rejection of claim 1, apply to this claim.
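Claim 10's verification logic — accept the identity claim when a similarity score clears a threshold, and adjust that threshold in accordance with the scale factor — can be sketched as follows. The cosine score, the 0.80 base threshold, and the tighten-when-upscaled policy are all illustrative assumptions; neither the claim nor the cited references specifies these values.

```python
import math


def cosine_similarity(a, b):
    """Matching score between two iris feature vectors (higher = more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def verify(probe_vec, enrolled_vec, scale_factor, base_threshold=0.80, penalty=0.05):
    """Accept the claimed identity when the score clears a threshold.

    Hypothetical adjustment policy: tighten the threshold as more upscaling
    was needed, since heavily upscaled images carry less reliable texture.
    """
    threshold = min(1.0, base_threshold + penalty * max(0.0, scale_factor - 1.0))
    return cosine_similarity(probe_vec, enrolled_vec) >= threshold
```

Under this policy a probe scoring about 0.98 against the enrolled vector passes at a scale factor of 1.0 but fails at 5.0, illustrating a threshold "adjusted in accordance with the scale factor".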
Claim 13 recites a method with steps corresponding to the elements recited in apparatus claim 1. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements of apparatus claim 1. Additionally, the rationale and motivation to combine the Xiong and Tsukizawa references, presented in the rejection of claim 1, apply to this claim.

Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Xiong and Tsukizawa in view of Hao et al. (CN114663280A - Translation from Espacenet).

Regarding claim 3, the combination of Xiong and Tsukizawa teaches the apparatus of claim 1. However, the combination of Xiong and Tsukizawa does not teach "acquire a learning image including the iris area of the desired size; and generate an input image in which resolution of the learning image is converted in accordance with an inverse of an arbitrary scale factor, wherein generate a resolution-converted input image of the same resolution as that of the learning image in which resolution of the input image is converted in accordance with the arbitrary scale factor; and allow learning of a method of generating the resolution-converted image, on the basis of a loss function in which a loss increases as the learning image and the resolution-converted input image become less similar."

Hao teaches "acquire a learning image including the iris area of the desired size (Hao paragraph [0094] "The cropped iris image is then saved as a high resolution iris image I<sub>hr</sub>"); and generate an input image in which resolution of the learning image is converted in accordance with an inverse of an arbitrary scale factor (Hao paragraph [n0039 (0098)] "the high-resolution iris image I<sub>hr</sub> is downsampled by a bicubic interpolation factor of n to obtain a low-resolution iris image I<sub>Ir</sub> corresponding to a high-resolution iris image I<sub>hr</sub>, which is used to train the super resolution reconstruction model.
In this embodiment, a downsampling factor of 2 is used as an example. It should be noted that n > 1, and each high-resolution iris image I<sub>hr</sub> corresponds to a low-resolution iris image I<sub>Ir</sub>"), wherein generate a resolution-converted input image of the same resolution as that of the learning image in which resolution of the input image is converted in accordance with the arbitrary scale factor (Hao paragraph [0102] "the iris image pairs in the training set are input into a super-resolution reconstruction model of distant iris images with convolution weights. The low-resolution iris image I<sub>Ir</sub> in the multiple iris image pairs is forward-propagated in the network structure of the neural network to obtain multiple super-resolution images I<sub>sr</sub> that correspond one-to-one with the low-resolution iris image I<sub>Ir</sub>"); and allow learning of a method of generating the resolution-converted image, on the basis of a loss function in which a loss increases as the learning image and the resolution-converted input image become less similar (Hao paragraph [n0042 (0103)] "Step S104: Calculate the error value between the high-resolution iris image I<sub>hr</sub> and the corresponding super-resolution image I<sub>sr</sub> in the iris image pair in the training set using the loss function")."

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the iris verification system that scales the iris image, as taught by Xiong and Tsukizawa, with training to generate a high-resolution image using a loss function, as taught by Hao. The suggestion/motivation for doing so would have been that there is a need in iris recognition to improve iris identification: "in some special locations, such as border checkpoints, airports, and security checkpoints, it is usually necessary to collect the iris image of the person being identified from a relatively far distance. At this time, the size of the obtained iris image is relatively small, which is not conducive to the recognition of the iris image. If the acquired iris image is simply enlarged, it will affect the texture information of the iris and reduce the resolution of the iris image, making it difficult to extract the features of the iris image", as noted by the Hao disclosure in paragraph [n0003]. Therefore, it would have been obvious to combine the disclosure of Xiong and Tsukizawa with the Hao disclosure to obtain the invention as specified in claim 3, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Regarding claim 4, the combination of Xiong, Tsukizawa, and Hao teaches "The iris recognition apparatus according to claim 1, wherein the at least one processor is configured to execute the instructions to: acquire a learning image including the iris area of the desired size (Hao paragraph [0094] "The cropped iris image is then saved as a high resolution iris image I<sub>hr</sub>"); generate an input image in which resolution of the learning image is converted in accordance with an inverse of an arbitrary scale factor (Hao paragraph [n0039 (0098)] "the high-resolution iris image I<sub>hr</sub> is downsampled by a bicubic interpolation factor of n to obtain a low-resolution iris image I<sub>Ir</sub> corresponding to a high-resolution iris image I<sub>hr</sub>, which is used to train the super resolution reconstruction model. In this embodiment, a downsampling factor of 2 is used as an example.
It should be noted that n > 1, and each high-resolution iris image I<sub>hr</sub> corresponds to a low-resolution iris image I<sub>Ir</sub>"); generate a resolution-converted input image of the same resolution as that of the learning image in which resolution of the input image is converted in accordance with the arbitrary scale factor (Hao paragraph [0102] "the iris image pairs in the training set are input into a super-resolution reconstruction model of distant iris images with convolution weights. The low-resolution iris image I<sub>Ir</sub> in the multiple iris image pairs is forward-propagated in the network structure of the neural network to obtain multiple super-resolution images I<sub>sr</sub> that correspond one-to-one with the low-resolution iris image I<sub>Ir</sub>"); extract a learning feature vector that is a feature vector of the learning image and an input feature vector that is a feature vector of the resolution-converted input image (Xiong paragraph [n0009] "the iris bounding rectangle, which is then input into the trained iris feature extraction network to obtain the iris embedding vector, and stored in the form of identity identifier-iris embedding vector"); and allow learning of a method of generating the resolution-converted image, on the basis of a loss function in which a loss increases as the learning feature vector and the input feature vector (Xiong paragraph [n0025] "Using the iris triplet dataset, train the iris feature extraction network using deep metric learning based on the triplet loss function until the triplet loss function value of deep metric learning converges, and obtain the trained iris feature extraction network") become less similar (Hao paragraph [n0045 (0109)] "Step S105: Update the hyperparameters of the super-resolution reconstruction model for distant iris images based on the error value; specifically, backpropagate the error value to the network structure of the super-resolution reconstruction model for distant iris
images, and calculate the partial derivative of each node based on the error value; adjust the weights of the convolution with the learning rate as the step size based on the partial derivatives, and update the adjusted weights of the convolution to the weights of the super-resolution reconstruction model")."

The proposed combination, as well as the motivation for combining the Xiong, Tsukizawa, and Hao references presented in the rejection of claim 3, applies to claim 4. Finally, the apparatus recited in claim 4 is met by Xiong, Tsukizawa, and Hao.

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Xiong and Tsukizawa in view of Lui et al. (CN112927134A - Translation from Espacenet).

Regarding claim 7, the combination of Xiong and Tsukizawa teaches the apparatus of claim 1. However, the combination of Xiong and Tsukizawa does not teach "extract a pre-transform feature vector that is a feature vector of the iris image; generate one or more transformation filters for transforming the pre-transform feature vector, in accordance with the scale factor; and generate the resolution-converted image by transforming the pre-transform feature vector by one or more filter processings using the one or more transformation filters."

Lui teaches "extract a pre-transform feature vector that is a feature vector of the iris image (Lui paragraph [n0051] "Step 3.2: The low-resolution image obtained in step 2 is processed by a single-scale feature mapping module, and the resulting feature map is denoted as F<sub>ss</sub>"); generate one or more transformation filters for transforming the pre-transform feature vector, in accordance with the scale factor (Lui paragraph [n0051] "As shown in Figure 3, the single-scale feature mapping module includes interpolation amplification processing and 8 layers of convolution operations with ReLU activation functions"); and generate the resolution-converted image by transforming the pre-transform feature vector by one or more filter processings using the one or more transformation filters (Lui paragraph [n0053] "Step 3.3: Input the cross-scale mapping feature map F<sub>cs</sub> obtained in step 3.1 and the single-scale mapping feature map F<sub>ss</sub> obtained in step 3.2 into the multiscale feature fusion module for feature fusion to obtain a multi-scale feature map, denoted as F<sub>ms</sub>" and paragraph [n0057] "Step 3.4: Input the multi-scale feature map F<sub>ms</sub> obtained in step 3.3 and the feature map F<sub>1</sub> obtained by interpolation and magnification in step 3.2 into the global feature extraction module. Add F<sub>ms</sub> and F<sub>1</sub> to obtain the global feature map F<sub>gf</sub>. Then, use a 3×3 convolution kernel to extract features from the global feature F<sub>gf</sub> to obtain the super-resolution image SR")."

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the iris verification system that scales the iris image, as taught by Xiong and Tsukizawa, with feature map scaling as taught by Lui. The suggestion/motivation for doing so would have been that "Super-resolution reconstruction technology was proposed in this context. It uses specific algorithms to reconstruct low-resolution images, thereby obtaining high-resolution images that can be used by us. Since its emergence, super-resolution reconstruction technology has been widely used in various fields. Examples include biometric identification, which assists machines in facial recognition, fingerprint recognition, and iris recognition; video enhancement, which restores old movies and helps us convert standard definition videos into high definition videos; and medical diagnosis, which helps doctors make more accurate judgments", as noted by the Lui disclosure in paragraph [n0002].
Therefore, it would have been obvious to combine the disclosure of Xiong and Tsukizawa with the Lui disclosure to obtain the invention as specified in claim 7, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Regarding claim 8, the combination of Xiong, Tsukizawa, and Lui teaches "The iris recognition apparatus according to claim 1, wherein the at least one processor is configured to execute the instructions to: extract a pre-transform feature vector that is a feature vector of the iris image (Lui paragraph [n0051] "Step 3.2: The low-resolution image obtained in step 2 is processed by a single-scale feature mapping module, and the resulting feature map is denoted as F<sub>ss</sub>"); extract a scale factor feature vector that is a feature vector of the scale factor (Lui paragraph [n0053] "Step 3.3: Input the cross-scale mapping feature map F<sub>cs</sub> obtained in step 3.1 and the single-scale mapping feature map F<sub>ss</sub> obtained in step 3.2 into the multiscale feature fusion module for feature fusion to obtain a multi-scale feature map, denoted as F<sub>ms</sub>"); and generate the resolution-converted image by synthesizing the pre-transform feature vector and the scale factor feature vector and transforming the pre-transform feature vector (Lui paragraph [n0057] "Step 3.4: Input the multi-scale feature map F<sub>ms</sub> obtained in step 3.3 and the feature map F<sub>1</sub> obtained by interpolation and magnification in step 3.2 into the global feature extraction module. Add F<sub>ms</sub> and F<sub>1</sub> to obtain the global feature map F<sub>gf</sub>. Then, use a 3×3 convolution kernel to extract features from the global feature F<sub>gf</sub> to obtain the super-resolution image SR")."

The proposed combination, as well as the motivation for combining the Xiong, Tsukizawa, and Lui references presented in the rejection of claim 7, applies to claim 8. Finally, the apparatus recited in claim 8 is met by Xiong, Tsukizawa, and Lui.

Allowable Subject Matter

Dependent claims 5-6, 9, and 11 are objected to as being dependent upon a rejected base claim, but would be allowable if: (i) rewritten in independent form including all the limitations of the base claim and any intervening claims; and (ii) regarding claim 9, the objection is overcome.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent 7,697,735 B2 to Adam et al. discloses an iris recognition system that extracts feature vectors and compares them to a reference to determine a match.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASPREET KAUR whose telephone number is (571) 272-5534. The examiner can normally be reached Monday - Friday, 7:30 am - 4:00 pm PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JASPREET KAUR/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
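The training scheme recited in claims 3-4 — degrade a high-resolution learning image by the inverse of an arbitrary scale factor, reconvert it to the learning image's resolution, and learn from a loss that grows as the two become less similar — can be sketched end to end. In this sketch, block averaging and nearest-neighbour upsampling stand in for the bicubic downsampling and learned super-resolution network described in Hao; the function names are illustrative, not from the record.

```python
import numpy as np


def downsample(img, n):
    """Degrade the learning image by the inverse of scale factor n
    (n x n block averaging stands in for bicubic downsampling)."""
    h, w = img.shape
    return img[: h - h % n, : w - w % n].reshape(h // n, n, w // n, n).mean(axis=(1, 3))


def upsample(img, n):
    """Reconvert the degraded image back to the original resolution
    (nearest-neighbour stands in for the learned super-resolution model)."""
    return np.kron(img, np.ones((n, n)))


def reconstruction_loss(learning_img, reconverted):
    """Mean-squared error: increases as the two images become less similar."""
    return float(np.mean((learning_img - reconverted) ** 2))
```

A flat image round-trips with zero loss, while high-frequency texture destroyed by downsampling shows up as a positive loss — exactly the signal the claimed learning step would minimize by adjusting the reconstruction model's weights.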

Prosecution Timeline

Apr 26, 2024
Application Filed
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596301
RETICLE INSPECTION AND PURGING METHOD AND TOOL
2y 5m to grant Granted Apr 07, 2026
Patent 12555199
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM, WITH SYNTHESIS OF TWO INFERENCE RESULTS ABOUT AN IDENTICAL FRAME AND WITH INITIALIZING OF RECURRENT INFORMATION
2y 5m to grant Granted Feb 17, 2026
Patent 12513319
END-TO-END INSTANCE-SEPARABLE SEMANTIC-IMAGE JOINT CODEC SYSTEM AND METHOD
2y 5m to grant Granted Dec 30, 2025
Patent 12427606
SYSTEMS AND METHODS FOR NON-DESTRUCTIVELY TESTING STATOR WELD QUALITY AND EPOXY THICKNESS
2y 5m to grant Granted Sep 30, 2025
Patent 12421641
LAUNDRY TREATMENT APPLIANCE AND METHOD OF USING THE SAME ACCORDING TO MATCHED LAUNDRY LOADS
2y 5m to grant Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+30.0%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
