Prosecution Insights
Last updated: April 19, 2026
Application No. 18/810,974

PREDICTION BLOCK GENERATION AT VIDEO FRAME BOUNDARY

Status: Final Rejection (§103)
Filed: Aug 21, 2024
Examiner: UHL, LINDSAY JANE KILE
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Ewha University - Industry Collaboration Foundation
OA Round: 2 (Final)

Outlook: Favorable
Grant Probability: 80%
Grant Probability with Interview: 89%
Expected OA Rounds: 3-4
Time to Grant: 2y 4m

Examiner Intelligence

Career Allow Rate: 80% — above average (324 granted / 404 resolved; +22.2% vs TC avg)
Interview Lift: +8.7% for resolved cases with interview (moderate, roughly +9% lift)
Typical Timeline: 2y 4m average prosecution; 38 applications currently pending
Career History: 442 total applications across all art units

Statute-Specific Performance

§101: 3.7% (-36.3% vs TC avg)
§103: 65.4% (+25.4% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)
Comparisons are against the Tech Center average estimate • Based on career data from 404 resolved cases

Office Action

§103
DETAILED ACTION

This Office Action is in response to the response filed on December 30, 2025. Claims 1-20 are pending and are examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments and amendments received December 30, 2025 have been fully considered. With regard to 35 U.S.C. § 103, Applicant argues that the cited prior art fails to disclose "setting weights for the prediction blocks that are in the bi-direction manner; and generating a final prediction block of the current block by weighted-summing the prediction block using the weights." Specifically, Applicant alleges that JVET merely discusses whether certain samples are valid based on reference picture boundaries and excluding OOB samples.

Applicant's claims require using motion information to generate bi-directional prediction blocks, setting weights for these bi-directional prediction blocks, and generating a final prediction block by weighted-summing the prediction blocks. The claims do not limit what constitutes a "weight", nor the process required to "set" such a weight. JVET describes the generation of bi-directional MC, i.e., prediction, blocks, and that some of these prediction blocks may include OOB portions (see, e.g., Figs. 1, 2, "Introduction"). JVET proposes that, in order to generate a final predictor, the predictors are summed in the following manner:

[Formula image from JVET-Y0125: per-sample conditions that use only the non-OOB predictor when the other predictor sample is OOB, with an "else" branch summing both predictors.]

It can be seen from these formulas that the final prediction is a sum of the prediction blocks – this is shown by the "else" statement. However, if either of these prediction blocks is OOB, the OOB prediction block is discarded, i.e., given a weight of 0, and the other prediction block is used, i.e., given a weight of 1. This is not a stretch of interpretation – Applicant's own specification recites this exact formula in its Equation 8 (see portion after ¶204), as an embodiment of its invention. Further, Applicant specifically describes that the weights used in its invention are values from 0 to 1, which, when summed, equal 1 (see ¶208). Applicant gives specific examples in which OOB blocks are weighted as 0 and non-OOB blocks are weighted as 1 (see ¶¶209, 217) and specifically describes in its specification that "In one example, the video decoding device may utilize only the prediction blocks in the L0 direction located in the inner region to generate the final prediction block. In other words, the weight of the prediction blocks in the L1 direction may be set to 0" (see ¶217). Thus, Applicant can be directly quoted as specifically defining the use of only one prediction block as setting the other prediction block to a weight of 0, i.e., so that it is not considered in the weighted sum of the final prediction. Claims are interpreted broadly and in light of the specification, thus Applicant cannot now argue, against its own definition, that the JVET disclosure of the exact formula and of its description of using only the prediction block that is non-OOB cannot be defined as setting the OOB prediction block to a weight of zero and summing them. If Applicant would like to limit its claims to non-zero weights, Applicant is encouraged to do so.
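For reference, the per-sample rule the examiner attributes to JVET-Y0125 can be written out as a weighted sum. The following is a reconstruction only; the notation, the rounding, and the equal weights in the "else" branch are assumptions, and the cited document and Applicant's Equation 8 may differ in detail:

```latex
% Reconstruction of the per-sample OOB bi-prediction rule (assumed notation):
% P_0, P_1 are the List0/List1 predictor samples at (x, y); OOB_k(x, y) marks a
% sample of predictor k fetched from outside its reference picture boundary.
\[
P_{\mathrm{final}}(x,y) =
\begin{cases}
P_1(x,y), & \mathrm{OOB}_0(x,y)\ \text{and not}\ \mathrm{OOB}_1(x,y) \quad (w_0 = 0,\ w_1 = 1)\\[4pt]
P_0(x,y), & \mathrm{OOB}_1(x,y)\ \text{and not}\ \mathrm{OOB}_0(x,y) \quad (w_0 = 1,\ w_1 = 0)\\[4pt]
\tfrac{1}{2}\bigl(P_0(x,y) + P_1(x,y)\bigr), & \text{otherwise} \quad (w_0 = w_1 = \tfrac{1}{2})
\end{cases}
\]
```

Read this way, every branch is the weighted sum w0·P0 + w1·P1 with weights in [0, 1] that sum to 1, which is the interpretation the rejection applies to the "setting weights" and "weighted-summing" limitations.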
Allowable Subject Matter

Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The cited prior art fails to disclose wherein setting the weights comprises, in response to the outer region of the second prediction block being padded with pixels present in a region other than a boundary that is regionally contiguous with the second reference picture, setting, for samples in the outer region, weights that are common to the first prediction block.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen, Yi-Wen, et al., "AHG12: Enhanced bi-directional motion compensation", Input document to JVET, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, 25th Meeting, by teleconference, 12-21 January 2022 (Jan. 6, 2022) ("JVET-Y0125") in view of U.S. Patent Publication No. 2024/0388731 ("Deng"), which corresponds to a priority application dated Jan. 19, 2022.

With respect to claim 1, JVET-Y0125 discloses the invention substantially as claimed, including A method of video decoding, the method comprising: [] motion information of a current block…, the motion information including reference pictures that are in a bi-direction manner, and the motion information also including motion vectors that are in the bi-direction manner (see Fig. 2, "Proposal", describing the use of motion information of a current block, that the motion information includes List0 and List1 bi-directional reference pictures and bi-directional List0 and List1 motion vectors); generating prediction blocks that are in the bi-direction manner by using the motion information, wherein the prediction blocks include a first prediction block that is located inside a corresponding first reference picture, wherein the prediction blocks include a second prediction block that is a remnant of the first prediction block, wherein the second prediction block includes an inner region located inside a second reference picture that is relevant, and wherein the second prediction block includes an outer region located outside the second reference picture (see Fig. 2, "Proposal", describing that the predictors/List0 and List1 reference blocks, i.e., prediction blocks, may be generated bi-directionally using the List0 and List1 reference pictures and motion vectors, i.e., the motion information, that these blocks include a first prediction block located inside a first List1 reference picture and a second prediction block that is a remnant of the first prediction block including an inner region located inside the List0 reference picture that is relevant and an outer region located outside the List0 reference block); setting weights for the prediction blocks that are in the bi-direction manner (see "Abstract" and "Proposal", describing that the final predictor is generated by weighted averaging two MC predictors, but proposing discarding the OOB predictor samples and using only non-OOB predictor samples, i.e., setting the weight of the OOB predictor samples to zero and the weight of the non-OOB predictor samples to 1); and generating a final prediction block of the current block by weighted-summing the prediction blocks by using the weights (see citations with respect to element above, describing generating a final prediction block of the current block by weighted summing (with the non-OOB block weighted at 1 and the OOB block weighted at 0) the prediction block).

JVET-Y0125 does not explicitly disclose decoding motion information of a current block from a bitstream. However, in the same field of endeavor, Deng discloses that it was known to signal such motion information in a bitstream: decoding motion information of a current block from a bitstream (see Fig. 3, item 301, ¶82, describing that it was known to decode motion information, e.g., motion vectors, reference picture indexes, and other motion information, from a bitstream).

As evidenced by Deng, video coding standards have evolved through ITU-T and ISO/IEC standards, including JVET (see ¶90). At the time of filing, one of ordinary skill would have been familiar with coding standards like JVET, including prediction standard submissions like JVET-Y0125, and their uses in prediction coding systems. Such a person would have understood that, as evidenced by Deng, such prediction coding systems include encoders that predict and encode motion information in a bitstream and decoders that decode motion information from a bitstream for use in prediction. Accordingly, to one of ordinary skill in the art at the time of filing, using the encoder or decoder of Deng for encoding into a bitstream or decoding from a bitstream motion information of a current block for prediction as described in JVET-Y0125 would have represented nothing more than the combination of prior art elements according to predictable results and/or the simple substitution of one known element for another to obtain predictable results. Therefore, it would have been obvious to one having ordinary skill in the art at the time of filing to include a mechanism for encoding into a bitstream or decoding from a bitstream motion information of the current block for use in the prediction coding process of JVET-Y0125 as taught by Deng.

With respect to claim 2, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of independent claim 1. JVET-Y0125/Deng additionally discloses: wherein each of the weights has a value that ranges from 0 to 1, and a sum of the weights is equal to 1 (see citations and arguments with respect to claim 1 above, describing that where one reference prediction block is OOB and one is non-OOB, the weight of one is 1 and the other 0, i.e., ranges from 0 to 1 and a sum equal to 1). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 2.

With respect to claim 3, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of independent claim 1. JVET-Y0125/Deng additionally discloses: wherein setting the weights comprises setting, for samples in the inner region of the second prediction block, weights that are common to the first prediction block (see citations and arguments with respect to claim 1 above, describing that setting the weights comprises setting the weights as 1 for non-OOB samples, i.e., for non-OOB samples in the inner region of the second prediction block, weights that are common to the first prediction block when it is non-OOB). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 3.

With respect to claim 4, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of independent claim 1. JVET-Y0125/Deng additionally discloses: wherein setting the weights comprises setting, for samples in the outer region of the second prediction block, weights smaller than weights for the first prediction block or setting the weights to zero for the samples in the outer region of the second prediction block (see citations and arguments with respect to claim 1 above, including JVET-Y0125 Fig. 2 and "Proposal", describing setting the weights comprises setting the weights to zero for the samples in the outer region of the prediction block). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 4.
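For illustration, the weight-setting and weighted-summing being read onto JVET-Y0125 in the claims 1-4 mapping can be sketched in a few lines of code. This is a minimal sketch under the examiner's interpretation; the function and variable names, and the equal-weight fallback, are invented here for illustration and do not come from JVET-Y0125, Deng, or the claims:

```python
import numpy as np

def oob_weighted_biprediction(pred_l0, pred_l1, oob_l0, oob_l1):
    """Combine two bi-directional prediction blocks into a final prediction block.

    pred_l0, pred_l1: predictor sample arrays from the List0/List1 reference pictures.
    oob_l0, oob_l1:   boolean masks marking samples fetched from outside the
                      corresponding reference picture (out-of-boundary, OOB).

    Per sample: an OOB predictor gets weight 0 and the other predictor gets
    weight 1; when neither (or both) is OOB, equal weights of 0.5 are used.
    Every case is a weighted sum with weights in [0, 1] that sum to 1.
    """
    w0 = np.full(pred_l0.shape, 0.5)
    w1 = np.full(pred_l1.shape, 0.5)

    # Only L0 is OOB: discard L0 (weight 0), keep L1 (weight 1).
    only_l0_oob = oob_l0 & ~oob_l1
    w0[only_l0_oob], w1[only_l0_oob] = 0.0, 1.0

    # Only L1 is OOB: discard L1 (weight 0), keep L0 (weight 1).
    only_l1_oob = oob_l1 & ~oob_l0
    w0[only_l1_oob], w1[only_l1_oob] = 1.0, 0.0

    # Weighted sum of the prediction blocks yields the final prediction block.
    return w0 * pred_l0 + w1 * pred_l1
```

Under this sketch, a block whose List1 predictor is entirely out of boundary reduces to its List0 predictor, which matches the ¶217 example the rejection quotes (the L1 weight is set to 0).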
With respect to claim 6, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of independent claim 1. JVET-Y0125/Deng additionally discloses: further comprising: refining the motion vectors; and generating the prediction blocks that are in the bi-direction manner by using the refined motion vectors, wherein the first prediction block of the prediction blocks is located inside the corresponding first reference picture, wherein the second prediction block that is the remnant of the first prediction block includes the inner region located inside the second reference picture that is relevant, and wherein the second prediction block includes the outer region located outside the second reference picture (see citations and arguments with respect to claim 1 above, describing generating prediction blocks in a bi-direction manner using motion vectors, wherein the first prediction block of the prediction blocks is inside the first reference picture, wherein the second prediction block is a remnant of the first prediction block and includes the inner region inside the second reference picture that is relevant and wherein the second prediction block includes the outer region located outside the second reference picture and Deng ¶¶634-637 and 644-645, describing that it was known to refine the motion vectors prior to such a prediction, i.e., refined to point to a different/new prediction block/subblock). Deng describes that it may be beneficial to refine the motion vectors and generate the prediction blocks based on refined vectors (see Deng ¶¶347, 390, 398, 634-637 and 643-645, describing that doing so may impact efficiency and coding gain). At the time of filing, one of ordinary skill would have been familiar with the benefit of improving efficiency and coding gain. Accordingly, such a person would have understood the benefit of refining motion vectors and generating prediction blocks therefrom to improve efficiency and gain. Moreover, to one of ordinary skill in the art at the time of filing, doing so would have represented nothing more than the combination of prior art elements according to predictable results and/or the simple substitution of one known element for another to obtain predictable results. Therefore, it would have been obvious to one having ordinary skill in the art at the time of filing to include the refinement of motion vectors and generation of prediction blocks therefrom in the prediction coding process of JVET-Y0125/Deng as taught by Deng.

With respect to claim 7, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of dependent claim 6. JVET-Y0125/Deng additionally discloses: wherein refining the motion vectors comprises excluding the motion vectors corresponding to the second prediction block from candidate motion vectors for refining (see citations and arguments with respect to claims 1 and 6 above, including Deng ¶643, describing that the motion vectors corresponding to the OOB portion of the second prediction block may be replaced/not used, i.e., excluded for refining, to generate the final prediction block). The reasons for combining the cited prior art with respect to claims 1 and 6 also apply to claim 7.

With respect to claim 8, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of dependent claim 6. JVET-Y0125/Deng additionally discloses: wherein refining the motion vectors comprises excluding the motion vectors corresponding to both the first prediction block and the second prediction block from candidate motion vectors for refining (see citations and arguments with respect to claims 1 and 6 above, including Deng ¶¶634-637 and 643-645, describing that the refinement may include replacing the OOB motion vectors with a zero motion vector, a collocated block, or a modified/shifted motion compensated block, i.e., the vectors corresponding to the first or second prediction blocks are not used/excluded). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 8.

With respect to claim 9, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of dependent claim 6. JVET-Y0125/Deng additionally discloses: wherein refining the motion vectors comprises constraining the motion vector corresponding to the second prediction block to be located in the inner region of the second reference picture (see citations and arguments with respect to claims 1 and 6 above, including Deng ¶¶634-637 and 643-645, describing that the refinement may include not using samples in the outer region of the second reference picture and replacing them with samples in the inner region, i.e., constraining the prediction block to be located in the inner region). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 9.

With respect to claim 10, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of dependent claim 6. JVET-Y0125/Deng additionally discloses: wherein setting the weights comprises setting, for samples in the outer region of the second prediction block, second weights smaller than first weights for the first prediction block or setting the second weights to zero for the samples in the outer region of the second prediction block (see citations and arguments with respect to claim 1 above, describing that setting weights may include that for samples in the outer region/OOB region of the second prediction block to be zero). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 10.

With respect to claim 11, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of dependent claim 6. JVET-Y0125/Deng additionally discloses: wherein setting the weights comprises setting a second weight of the second prediction block to zero (see citations and arguments with respect to claims 1 and 6 above, including JVET-Y0125 "Proposal" and Deng ¶615, describing that the weight, i.e., second weight, of the second prediction block may be set to zero where it includes OOB samples). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 11.
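For illustration, the claims 6-9 mapping concerns refining motion vectors while excluding, or constraining, candidates whose prediction blocks would fall out of boundary. A minimal sketch of that idea follows; the candidate generation, cost function, and integer-pel search are simplifying assumptions made here, not details taken from JVET-Y0125 or Deng:

```python
def refine_motion_vector(mv, ref_picture_size, block_size, block_pos, cost_fn,
                         search_range=2):
    """Pick the lowest-cost refined motion vector among candidates near `mv`,
    excluding candidates whose prediction block would fall outside the
    reference picture (i.e., excluding OOB candidates from refinement).

    mv:               (mvx, mvy) starting motion vector, in integer pixels.
    ref_picture_size: (width, height) of the reference picture.
    block_size:       (block_width, block_height) of the current block.
    block_pos:        (x, y) top-left position of the current block.
    cost_fn:          callable(candidate_mv) -> matching cost (e.g., SAD);
                      placeholder for whatever metric the refinement uses.
    """
    width, height = ref_picture_size
    bw, bh = block_size
    x, y = block_pos

    def inside_reference(cand):
        # Keep the whole prediction block in the inner region of the
        # reference picture.
        cx, cy = x + cand[0], y + cand[1]
        return 0 <= cx and 0 <= cy and cx + bw <= width and cy + bh <= height

    candidates = [(mv[0] + dx, mv[1] + dy)
                  for dx in range(-search_range, search_range + 1)
                  for dy in range(-search_range, search_range + 1)]
    # Exclude OOB candidates; fall back to the unrefined vector if none remain.
    valid = [c for c in candidates if inside_reference(c)]
    if not valid:
        return mv
    return min(valid, key=cost_fn)
```

In this sketch, the `inside_reference` constraint loosely corresponds to the claim 9 reading (constraining the refined vector to the inner region), while dropping candidates from `valid` loosely corresponds to the claims 7-8 reading (excluding candidates from refinement).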
With respect to claim 12, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of independent claim 1. JVET-Y0125/Deng additionally discloses: A method of video encoding for inter-predicting a current block, the method comprising: determining motion information of the current block, the motion information including reference pictures that are in a bi-direction manner, and the motion information also including motion vectors that are in the bi-direction manner (see citations and arguments with respect to corresponding element of claim 1 above and Deng ¶82, describing that the motion information may include reference indices and motion vectors and JVET-Y0125, Fig. 2, showing that these reference pictures and motion vectors are bi-directional); generating prediction blocks that are in the bi-direction manner by using the motion information, wherein the prediction blocks include a first prediction block that is located inside a corresponding first reference picture, wherein the prediction blocks include a second prediction block that is a remnant of the first prediction block, wherein the second prediction block includes an inner region located inside a second reference picture that is relevant, and wherein the second prediction block includes an outer region located outside the second reference picture (see citations and arguments with respect to corresponding element of claim 1 above); setting weights for the prediction blocks that are in the bi-direction manner (see citations and arguments with respect to corresponding element of claim 1 above); and generating a final prediction block of the current block by weighted-summing the prediction blocks by using the weights (see citations and arguments with respect to corresponding element of claim 1 above). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 12.

With respect to claim 13, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of independent claim 12. JVET-Y0125/Deng additionally discloses: further comprising encoding the motion information (see citations and arguments of claim 1 above, describing that the motion information may be encoded by an encoder). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 13.

With respect to claim 14, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of independent claim 12. JVET-Y0125/Deng additionally discloses: further comprising: refining the motion vectors; and generating the prediction blocks that are in the bi-direction manner by using the refined motion vectors, wherein the first prediction block of the prediction blocks is located inside the corresponding first reference picture, and wherein the second prediction block that is a remnant of the first prediction block includes the inner region located inside the second reference picture that is relevant, and the second prediction block includes the outer region located outside the second reference picture (see citations and arguments with respect to claim 6 above). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 14.

With respect to claim 15, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of independent claim 1. JVET-Y0125/Deng additionally discloses: A method for providing video data to a video decoding device, a method comprising: encoding the video data into a bitstream (see citations and arguments with respect to claims 12-13 above); and transmitting the bitstream to the video decoding device (see Deng Figs. 1-3, items 130 and "encoded bitstream", ¶¶5, 53, 82, describing transmitting the bitstream to the decoder, i.e., video decoding device); wherein encoding the video data comprises: determining motion information of a current block, the motion information including reference pictures that are in a bi-direction manner, and the motion information also including motion vectors that are in the bi-direction manner (see citations and arguments with respect to claim 12 above); generating prediction blocks that are in the bi-direction manner by using the motion information, wherein the prediction blocks include a first prediction block that is located inside a corresponding first reference picture, wherein the prediction blocks include a second prediction block that is a remnant of the first prediction block, wherein the second prediction block includes an inner region located inside a second reference picture that is relevant, and wherein the second prediction block includes an outer region located outside the second reference picture (see citations and arguments with respect to claim 12 above); setting weights for the prediction blocks that are in the bi-direction manner (see citations and arguments with respect to claim 12 above); and generating a final prediction block of the current block by weighted-summing the prediction blocks by using the weights (see citations and arguments with respect to claim 12 above). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 15.

With respect to claim 16, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of independent claim 1. JVET-Y0125/Deng additionally discloses: wherein encoding the video data further comprises encoding the motion information (see citations and arguments with respect to claim 13 above). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 16.

With respect to claim 17, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of independent claim 15. JVET-Y0125/Deng additionally discloses: wherein encoding the video data further comprises: refining the motion vectors; and generating the prediction blocks that are in the bi-direction manner by using the refined motion vectors, wherein the first prediction block of the prediction blocks is located inside the corresponding first reference picture, wherein the second prediction block that is the remnant of the first prediction block includes the inner region located inside the second reference picture that is relevant, and wherein the second prediction block includes the outer region located outside the second reference picture (see citations and arguments with respect to claim 6 above). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 17.

With respect to claim 18, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of dependent claim 17. JVET-Y0125/Deng additionally discloses: wherein refining the motion vectors comprises excluding the motion vectors corresponding to the second prediction block from candidate motion vectors for refining (see citations and arguments with respect to claim 7 above). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 18.

With respect to claim 19, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of dependent claim 17. JVET-Y0125/Deng additionally discloses: wherein refining the motion vectors comprises excluding the motion vectors corresponding to both the first prediction block and the second prediction block from candidate motion vectors for refining (see citations and arguments with respect to claim 8 above). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 19.

With respect to claim 20, JVET-Y0125 discloses the invention substantially as claimed. As described above, JVET-Y0125 in view of Deng discloses all the elements of dependent claim 17. JVET-Y0125/Deng additionally discloses: wherein refining the motion vectors comprises constraining the motion vector corresponding to the second prediction block to be located in the inner region of the second reference picture (see citations and arguments with respect to claim 9 above). The reasons for combining the cited prior art with respect to claim 1 also apply to claim 20.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINDSAY JANE KILE UHL whose telephone number is (571) 270-0337. The examiner can normally be reached 8:30 AM-5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached on (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

LINDSAY J UHL
Primary Examiner
Art Unit 2481

/LINDSAY J UHL/
Primary Examiner, Art Unit 2481

Prosecution Timeline

Aug 21, 2024: Application Filed
Oct 15, 2025: Non-Final Rejection — §103
Dec 30, 2025: Response Filed
Feb 12, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604000: SYSTEMS AND METHODS FOR PARTITION-BASED PREDICTION MODE REORDERING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12604030: METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598329: SYNTAX DESIGN METHOD AND APPARATUS FOR PERFORMING CODING BY USING SYNTAX (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593032: METHOD AND DEVICE FOR PROCESSING VIDEO SIGNAL BY USING INTER PREDICTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587636: GEOMETRIC PARTITION MODE WITH MOTION VECTOR REFINEMENT (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 89% (+8.7%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 404 resolved cases by this examiner. Grant probability derived from career allow rate.
