Prosecution Insights
Last updated: April 19, 2026
Application No. 18/778,798

METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING

Final Rejection — §102, §103

Filed: Jul 19, 2024
Examiner: ABOUZAHRA, HESHAM K
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: Bytedance Inc.
OA Round: 2 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 81% (above average) — 324 granted / 402 resolved, +22.6% vs TC avg
Interview Lift: +2.3% (minimal), measured over resolved cases with interview
Typical Timeline: 2y 5m average prosecution, 39 applications currently pending
Career History: 441 total applications across all art units
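The headline figures above follow from simple arithmetic. The sketch below is illustrative only; it assumes the dashboard defines allow rate as granted/resolved and applies the interview lift additively, which are assumptions rather than documented definitions.

```python
# Reproduce the examiner-dashboard arithmetic (illustrative; the
# dashboard's exact definitions are assumed, not documented).
granted = 324
resolved = 402

allow_rate = granted / resolved          # career allow rate
tc_avg = allow_rate - 0.226              # baseline implied by the +22.6% delta

interview_lift = 0.023
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1%}")   # ~80.6%, displayed as 81%
print(f"Implied TC average: {tc_avg:.1%}")
print(f"With interview: {with_interview:.1%}")  # ~82.9%, displayed as 83%
```

Note that 324/402 is 80.6%, so the displayed 81% is a rounded value, and the 83% with-interview figure is consistent with adding the 2.3% lift before rounding.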

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 22.4% (-17.6% vs TC avg)
§112: 5.9% (-34.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 402 resolved cases.
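The per-statute deltas above are internally consistent: each rate minus its delta recovers the same baseline. A quick check (assuming delta = examiner rate minus Tech Center average, both in percent):

```python
# Sanity-check the statute-specific deltas against a common baseline.
# Assumption: delta = examiner rate - Tech Center average (percent).
rates  = {"101": 2.4, "103": 58.0, "102": 22.4, "112": 5.9}
deltas = {"101": -37.6, "103": 18.0, "102": -17.6, "112": -34.1}

implied_tc_avg = {s: rates[s] - deltas[s] for s in rates}
# Every statute implies the same ~40% Tech Center baseline.
print(implied_tc_avg)
```

All four statutes imply a 40% Tech Center baseline, which suggests the dashboard compares each statute against a single pooled average rather than per-statute averages.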

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1 and 18-19 have been amended. Claims 1-20 are pending for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/21/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Arguments

Applicant's arguments filed 11/07/2025 have been fully considered, but they are not persuasive. Applicant argues that Alshin is entirely silent on use of the weighted pixels to generate a prediction, and that those skilled in the art would understand that determining the displacement vector based on the weighted pixels is not equivalent to generating the prediction based on the weighted samples recited in claim 1.

Examiner respectfully disagrees. Alshin discloses obtaining a prediction block of the current block by performing block-unit motion compensation and pixel group unit motion compensation, where the pixel group unit motion-compensated value refers to a value generated by performing the pixel group unit motion compensation, and the block-unit motion-compensated value may be an average value or a weighted sum with respect to reference pixels [0188]. "The pixel group unit motion compensator 120 may calculate a weighted average value for the current pixel … the weight may be determined based on whether the pixel is located in the inside of the boundary or outside the boundary" [0200]. Therefore, the prediction block is generated based on weighted samples and also based on the location of the pixel; both block-unit motion compensation and pixel group unit motion compensation are used to generate a prediction.
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6, 10, 14-15, and 17-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Alshin (US 20200029090 A1).

Regarding claim 1, Alshin teaches a method of video processing, comprising: determining, during a conversion between a video unit of a video and a bitstream of the video, whether at least one of: a first set of samples or a second set of samples is outside a boundary associated with the video unit (Figs. 9H & 9I: "the video decoding apparatus 100 may determine a weight for pixels 982 to be 0, the pixels being located outside the boundary of the current block 980, and may determine a weight for pixels 983 to be 2, the pixels being immediately adjacent to the boundary of the current block 980. The video decoding apparatus 100 may determine a weight for other pixels 984 to be 1." [0471]); applying a weighting process to the first set of samples and the second set of samples based on the determining (Figs. 9H & 9I; [0471], quoted above); generating a prediction based on the weighted first and second sets of samples ([0472] "As described with reference to FIGS. 9G to 9I, the video decoding apparatus 100 may determine values (s1 to s6 of Equation 33) for determining a horizontal direction displacement vector and a vertical direction displacement vector with respect to each pixel by using a pixel value and a gradient value of pixel located in a reference block, without using a pixel value and a gradient value of a pixel outside the reference block corresponding to a current block, by allocating different weights to pixels in a window, according to locations of a current pixel."); and performing the conversion based on the prediction ([0215] "In operation S130, the video decoding apparatus 100 may reconstruct the current block based on the prediction block and the residual block.").

Regarding claim 2, Alshin teaches the method of claim 1, wherein if both motion compensated samples are outside the boundary, a final prediction value is generated without blending the motion compensated samples ([0471] "Referring to FIG. 9H, when a current pixel 981 is near a boundary of a current block 980 in the current block 980 (when the current pixel 981 is distant from the boundary by one pixel), the video decoding apparatus 100 may determine a weight for pixels 982 to be 0").

Regarding claim 3, Alshin teaches the method of claim 2, wherein if a non-outside boundary sample is closer to the boundary, the non-outside boundary sample is used to generate the final prediction value, or wherein the final prediction value is generated based on a non-outside boundary prediction sample inside a current blended block, according to a rule, or wherein the final prediction value is generated by weighted blending in a same way as blending samples inside the boundary (Fig. 9I; [0471], quoted above).

Regarding claim 4, Alshin teaches the method of claim 1, wherein a value of an outside boundary sample of a motion compensated block/subblock is set according to a predetermined rule ([0159] "The pixel group unit motion compensator 120 may generate the pixel group unit motion-compensated value by performing pixel group unit motion compensation on the current block, based on an optical flow of pixel groups of a first reference picture and a second reference picture.").

Regarding claim 5, Alshin teaches the method of claim 4, wherein the outside boundary sample refers to the outside boundary sample after a motion compensation process, or wherein the outside boundary sample refers to the outside boundary sample after a bi-directional optical flow (BDOF) and before the weighting process ([0159], quoted above).

Regarding claim 6, Alshin teaches the method of claim 4, wherein the predetermined rule is based on non-outside boundary motion compensated sample values inside the boundary (Fig. 9I; [0471], quoted above).

Regarding claim 10, Alshin teaches the method of claim 1, wherein if a prediction block or subblock pointed by a motion vector is outside the boundary, a new prediction block or subblock is generated according to a predetermined rule ([0452] "Referring to FIG. 9F, for a pixel located outside a boundary of a reference block 955, the video decoding apparatus 100 may adjust a location of the pixel to a location of an available pixel that is closest to the pixel and is from among pixels located within the boundary of the reference block 955, and may determine a pixel value and a gradient value of the pixel located outside the boundary to be a pixel value and a gradient value of the available pixel at the closest location. In this regard, the video decoding apparatus 100 may adjust the location of the pixel located outside the boundary of the reference block 955 to the location of the available pixel at the closest location according to an equation").

Regarding claim 14, Alshin teaches the method of claim 1, wherein an outside boundary check is based on motion vectors after a certain stage of a DMVR based motion refinement, or wherein an outside boundary check is based on motion vectors after all stages of a DMVR based motion refinement, or wherein an outside boundary check is based on motion vectors after TM based motion refinement ([0271] "The pixel group unit motion compensator 120 may calculate a weighted average value for the current pixel which is required to calculate a displacement vector per unit time in a horizontal direction, by using the value about the current pixel, the values about the corresponding neighboring pixels, and a weight. In this regard, the weight may be determined based on a distance between the current pixel and the neighboring pixel, a distance between a pixel and a boundary of a block, the number of pixels located outside the boundary, or whether the pixel is located in the inside of the boundary or outside the boundary.").

Regarding claim 15, Alshin teaches the method of claim 1, wherein blending weights for outside boundary samples are determined based on the outside boundary check (Figs. 9H & 9I).

Regarding claim 17, Alshin teaches the method of claim 1, wherein the conversion includes encoding the video unit into the bitstream, or wherein the conversion includes decoding the video unit from the bitstream ([0012] "According to various embodiments, a video decoding method may include obtaining, from a bitstream, motion prediction mode information about a current block in a current picture").
Regarding claim 18, Alshin teaches an apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon ([0009] "Provided is a computer-readable recording medium having recorded thereon a program for executing a method according to various embodiments."), wherein the instructions upon execution by the processor cause the processor to: determine, during a conversion between a video unit of a video and a bitstream of the video, whether at least one of: a first set of samples or a second set of samples is outside a boundary associated with the video unit; apply a weighting process to the first set of samples and the second set of samples based on the determining; generate a prediction based on the weighted first and second sets of samples; and perform the conversion based on the prediction. The mapping of the corresponding limitations of claim 1 applies (Figs. 9H & 9I; [0471]; [0472]; [0215]).

Regarding claim 19, Alshin teaches a non-transitory computer-readable storage medium storing instructions that cause a processor ([0009], quoted above) to: determine, during a conversion between a video unit of a video and a bitstream of the video, whether at least one of: a first set of samples or a second set of samples is outside a boundary associated with the video unit; apply a weighting process to the first set of samples and the second set of samples based on the determining; generate a prediction based on the weighted first and second sets of samples; and perform the conversion based on the prediction. The mapping of the corresponding limitations of claim 1 applies (Figs. 9H & 9I; [0471]; [0472]; [0215]).

Regarding claim 20, Alshin teaches a non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus. "A bitstream generated by a method, the method comprising…" is a product-by-process claim limitation, where the product is the bitstream and the process is the method steps that generate the bitstream. MPEP §2113 recites that "Product-by-Process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps". Thus, the scope of the claim is the storage medium storing the bitstream (with the structure implied by the method steps); the structure includes the information and samples manipulated by the steps. "To be given patentable weight, the printed matter and associated product must be in a functional relationship. A functional relationship can be found where the printed matter performs some function with respect to the product to which it is associated." MPEP §2111.05(I)(A). When a claimed computer-readable medium merely serves as a support for information or data, no functional relationship exists. MPEP §2111.05(III). The storage medium storing the claimed bitstream in claim 20 merely serves as a support for the storage of the bitstream and provides no functional relationship between the stored bitstream and the storage medium. Therefore the bitstream, whose scope is implied by the method steps, is non-functional descriptive material and is given no patentable weight. MPEP §2111.05(III). Thus, the claim scope is just a storage medium storing data and is anticipated by Alshin, which recites a storage medium storing a bitstream ([0009], [0012]). The recited method comprises: determining whether at least one of: a first set of samples or a second set of samples is outside a boundary associated with a video unit of the video; applying a weighting process to the first set of samples and the second set of samples based on the determining; generating a prediction based on the weighted first and second sets of samples; and generating a bitstream of the video unit based on the prediction. The mapping of the corresponding limitations of claim 1 applies (Figs. 9H & 9I; [0471]; [0472]; [0215]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7-9, 11-13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Alshin in view of Chen (US 20200029090 A1).

Regarding claim 7, Alshin teaches the method of claim 6.
Alshin does not explicitly teach the following limitations; however, in an analogous art, Chen teaches wherein the non-outside boundary motion compensated sample values located at a first row inside of the boundary are copied above for above outside boundary samples, or wherein the non-outside boundary motion compensated sample values located at a first column inside of the boundary are copied left for the left outside boundary samples, or wherein the non-outside boundary motion compensated sample values located at a top-left corner inside of the boundary are copied for top-left outside boundary samples ("For the remaining steps in the BDOF process, if any sample and gradient value outside the boundaries of CU 1008 are needed, they can be padded (or repeated) from their nearest neighbors." [0126]). It would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to take the teachings of Chen and apply them to Alshin. One would be motivated to do so because it improves the accuracy of affine motion compensated prediction by refining the sub-block based affine motion compensated prediction with optical flow.

Regarding claim 8, Alshin teaches the method of claim 4. Alshin does not explicitly teach the following limitations; however, in an analogous art, Chen teaches wherein the rule is based on non-outside boundary BDOF refined sample values inside the boundary ([0023] "FIG. 10 is a schematic diagram of an example of extended coding unit (CU) region used in bi-directional optical flow (BDOF), according to some embodiments of the present disclosure."). The same motivation to combine Alshin with Chen as set forth for claim 7 applies.

Regarding claim 9, Alshin in view of Chen teaches the method of claim 8. Chen teaches wherein the non-outside boundary BDOF refined sample values located at a first row inside of the boundary are copied above for above outside boundary samples, or wherein the non-outside boundary BDOF refined sample values located at a first column inside of the boundary are copied left for left outside boundary samples, or wherein the non-outside boundary BDOF refined sample values located at a top-left corner inside of the boundary are copied for top-left outside boundary samples ([0126], quoted above). The same motivation used to combine Alshin in view of Chen in claim 12 is applicable.

Regarding claim 11, Alshin teaches the method of claim 10. Alshin does not explicitly teach the following limitations; however, in an analogous art, Chen teaches wherein the new prediction block or subblock is generated based on a zero motion vector, or wherein the new prediction block or subblock is replaced by a collocated block, or wherein the new prediction block or subblock is replaced by a non-outside boundary prediction block or subblock that is nearest to the outside boundary block or subblock ("These extended sample values can be used in gradient calculation only. For the remaining steps in the BDOF process, if any sample and gradient value outside the boundaries of CU 1008 are needed, they can be padded (or repeated) from their nearest neighbors." [0126]). The same motivation to combine Alshin with Chen as set forth for claim 7 applies.

Regarding claim 12, Alshin teaches the method of claim 1. Alshin does not explicitly teach the following limitations; however, in an analogous art, Chen teaches wherein an outside boundary check is based on a motion vector before a decoder side motion refinement ("Process 500 can perform BM based DMVR to determine a first candidate reference block 518 in first reference picture 504, a second candidate reference block 520 in second reference picture 506, a first candidate MV 522 connecting current block 510 and first candidate reference block 518, and a second candidate MV 524 connecting current block 510 and second candidate reference block 516." [0100]). The same motivation to combine Alshin with Chen as set forth for claim 7 applies.

Regarding claim 13, Alshin in view of Chen teaches the method of claim 12. Chen teaches wherein the decoder side motion refinement comprises at least one of: a template matching based motion refinement, or a bilateral matching based motion refinement ([0100], quoted above). The same motivation used to combine Alshin in view of Chen in claim 12 is applicable.

Regarding claim 16, Alshin teaches the method of claim 1. Alshin does not explicitly teach the following limitations; however, in an analogous art, Chen teaches wherein whether to deblock is controlled at coding tree unit (CTU) or coding unit (CU) level ("An original MV (e.g., first initial MV 508 or second initial MV 514) can be used in a deblocking process and in spatial MV prediction for future CU coding within a current picture (e.g., current picture 502)." [0102]). The same motivation to combine Alshin with Chen as set forth for claim 7 applies.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HESHAM K ABOUZAHRA, whose telephone number is (571) 270-0425. The examiner can normally be reached M-F 8-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jamie Atala, can be reached at 57127227384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HESHAM K ABOUZAHRA/
Primary Examiner, Art Unit 2486
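The two prior-art mechanisms the rejection leans on can be sketched in a few lines. This is an illustrative reading of the cited paragraphs, not the references' actual implementations: Alshin [0471] assigns window weights of 0, 2, and 1 by a pixel's position relative to the block boundary, and Chen [0126] pads out-of-boundary samples from their nearest in-boundary neighbors, modeled here as coordinate clamping. Function names and the block representation are hypothetical.

```python
def alshin_window_weight(px, py, x0, y0, x1, y1):
    """Weight for a window pixel, per a reading of Alshin [0471]:
    0 outside the block boundary, 2 for pixels immediately adjacent
    to the boundary (inside), 1 elsewhere inside.
    The block covers [x0, x1] x [y0, y1] inclusive."""
    if px < x0 or px > x1 or py < y0 or py > y1:
        return 0                      # outside the boundary: excluded
    if px in (x0, x1) or py in (y0, y1):
        return 2                      # inside edge: doubled to compensate
    return 1                          # interior pixel: normal weight

def chen_pad_sample(ref, px, py):
    """Nearest-neighbor padding per a reading of Chen [0126]:
    an out-of-boundary coordinate is clamped to the closest
    available position inside the reference block."""
    h, w = len(ref), len(ref[0])
    cx = min(max(px, 0), w - 1)
    cy = min(max(py, 0), h - 1)
    return ref[cy][cx]
```

For an 8x8 block spanning (0, 0) to (7, 7), a window pixel at (-1, 3) gets weight 0, (0, 3) gets weight 2, and (3, 3) gets weight 1; this is the distinction the applicant's argument and the examiner's response both turn on, namely whether such weighted samples feed the prediction itself or only the displacement-vector derivation.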

Prosecution Timeline

Jul 19, 2024
Application Filed
Aug 05, 2025
Non-Final Rejection — §102, §103
Nov 07, 2025
Response Filed
Feb 15, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594889
VEHICLE DISPLAY INCLUDING AN OFFSET CAMERA VIEW
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12593034
METHODS AND DEVICES FOR DECODER-SIDE INTRA MODE DERIVATION
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12593048
METHODS AND APPARATUS ON PREDICTION REFINEMENT WITH OPTICAL FLOW
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12587654
DETECTION OF AMOUNT OF JUDDER IN VIDEOS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12581087
ENCODING METHOD, DECODING METHOD, BITSTREAM, ENCODER, DECODER AND STORAGE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 83% (+2.3%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 402 resolved cases by this examiner. Grant probability derived from career allow rate.