Prosecution Insights
Last updated: April 19, 2026
Application No. 19/056,938

Video Encoder, Video Decoder, and Corresponding Method

Non-Final OA: §102, §103, §DP
Filed: Feb 19, 2025
Examiner: XU, XIAOLAN
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 74% (above average), 247 granted / 334 resolved, +16.0% vs TC avg
Interview Lift: +13.3% (moderate), based on resolved cases with interview
Typical Timeline: 2y 11m average prosecution, 37 applications currently pending
Career History: 371 total applications across all art units

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Based on career data from 334 resolved cases; deltas are measured against the Tech Center average estimate.

Office Action

Grounds: §102, §103, double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 10 and 18 are objected to because of the following informalities: “or” before “when the plurality of preset conditions is not satisfied, skipping performing …” should be removed to make the scope of the claimed invention clear. Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-25 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-19 of U.S. Patent No. US 12028527 B2. Although the claims at issue are not identical, they are not patentably distinct from each other.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-9 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kadono et al. (Pub. No. US 2004/0076237 A1).

Regarding claims 1-9, Kadono discloses One or more memory or storage devices having stored thereon a program ([0247] recording a program implementing the steps of … method to a floppy disk or other computer-readable data recording medium; [0251]; [0257] The software for … can be stored to any computer-readable data recording medium (such as a CD-ROM disc, floppy disk, or hard disk drive)). See MPEP 2111.05 (III): when determining the scope of the claims, “a bitstream” is not given patentable weight, because “a bitstream” is non-functional descriptive material. It is merely static data that imparts no function (unlike an executable computer program which performs a function). It does not have any functional relationship with the intended computer system. Thus, the computer-readable data recording medium disclosed in Kadono meets claims 1-9.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-25 are rejected under 35 U.S.C. 103 as being unpatentable over PARK et al. (US 20230269392 A1) in view of Bross et al. (Versatile Video Coding (Draft 5)).

Regarding claim 1. PARK discloses A non-transitory computer-readable medium storing a bitstream that, when decoded by a coding device, is used by the coding device to generate a video (abstract, A video decoding method), the bitstream comprising information for use in decoding the video ([0057] the predictor may generate various information related to prediction, such as prediction mode information, and transmit the generated information to the entropy encoder 240. The information on the prediction may be encoded in the entropy encoder 240 and output in the form of a bitstream), wherein the information for use in decoding the video comprises: an information about a prediction mode of a current picture block and residual information of the current picture block (figure 2, [0057] the generated residual signal is transmitted to the transformer 232, the predictor may generate various information related to prediction, such as prediction mode information, and transmit the generated information to the entropy encoder 240. The information on the prediction may be encoded in the entropy encoder 240 and output in the form of a bitstream), wherein, when the prediction mode of the current picture block is inter prediction, an inter prediction operation is performed for the current picture block ([0002] performing an inter prediction based on a Decoder-side Motion Vector Refinement (DMVR) and/or Bi-directional optical flow (BDOF)), and wherein the inter prediction operation comprises: when a plurality of preset conditions is satisfied for the current picture block, performing bi-directional optical flow (BDOF) processing on the current picture block to obtain predicted sample values of the current picture block (abstract, applying BDOF to the current block based on BDOF flag information, the BDOF flag information is derived based on a predetermined BDOF application condition; [0015] various application conditions are proposed in applying a DMVR and/or BDOF; [0146] Table 3, When all conditions listed below are satisfied, the BDOF may be applied), wherein the plurality of preset conditions comprises: first prediction direction indication information (predFlagL0) corresponding to a first list (list1) is equal to 1 and second prediction direction indication information (predFlagL1) corresponding to a second list (list0) is equal to 1, wherein the predFlagL0 and the predFlagL1 being equal to 1 indicate that bi-directional prediction is applied to the current picture block ([0146] Table 3, predFlagL0 and predFlagL1 are both equal to 1: Bilateral prediction); a motion model index for motion compensation (MotionModelIdc) being equal to 0, wherein the MotionModelIdc being equal to 0 indicates that a motion model for a motion compensation of the current picture block is a translational motion ([0146] Table 3, MotionModelIdc is equal to 0: When not Affine); a merge_subblock_flag is equal to 0, wherein the merge_subblock_flag being equal to 0 indicates that a subblock merge mode is not applied to the current picture block ([0146] Table 3, merge_subblock_flag is equal to 0; [0151] when merge_subblock_flag is 0 (i.e., when the merge mode is not applied in units of the subblocks)); a bcwIdx is equal to 0, wherein the bcwIdx indicates a bi-directional prediction weight index for the current picture block ([0146] Table 3, GbiIdx is equal to 0: When GBi index is default; [0282] when the GBi index is not default (e.g., when GbiIdx is not 0), two reference blocks may have different weighting factors; [0285] when the value of the GBi index (e.g., GbiIdx) is not 0, different weights are applied to two reference blocks (i.e., a reference block referred for L0 prediction and a reference block referred for L1 prediction)); a cIdx is equal to 0, wherein the cIdx represents a color component index of the current picture block ([0146] Table 3, cIdx is equal to 0: Applied only to luma); a height (H) of the current picture block is greater than or equal to 8, wherein H is equal to 2^n, and wherein n is an integer ([0146] Table 3, h>=8); a width (W) of the current picture block is greater than or equal to 8, wherein W is equal to 2^n ([0146] Table 3, w>=8); a product of W and H is greater than 128 ([0146] Table 3, w>=8 && h>=8); and a luma_weight_l0_flag[refIdxL0] and a luma_weight_l1_flag[refIdxL1] are both equal to 0, wherein the luma_weight_l0_flag[refIdxL0] being equal to 0 indicates that first weighting factors for a first luma component of list0 prediction are not present, and wherein the luma_weight_l1_flag[refIdxL1] being equal to 0 indicates that second weighting factors for a second luma component of list 1 prediction are not present ([0299] when the GBi index is not default (e.g., when GbiIdx is not 0) and a weighting flag by an explicit weight prediction is not 0; [0308] a method for determining whether to apply the BDOF by considering the GBi index and the weighting flag of the explicit weight prediction; [0309] Table 35, luma_weight_l0_flag[ refIdxL0 ] and luma_weight_l1_flag[ refIdxL1 ] are equal to 0; [0310]-[0312] when the weight prediction is not explicitly applied to the L0 and L1 predictions, it may be determined that the BDOF is applied); or when the plurality of preset conditions is not satisfied, skipping performing of the BDOF processing on the current picture block, and obtaining predicted sample values of the current picture block through prediction based on reference sample values corresponding to the first list and reference sample values corresponding to the second list according to a decoder-side motion vector refinement (DMVR) technology ([0137] Table 2, merge_flag is equal to 1: Applied in MERGE/SKIP, predFlagL0[0][0]=1 and predFlagL0[1][1]=1: Bilateral prediction; [0139] Whether to apply the DMVR may be determined based on flag information (e.g., merge_flag) representing whether the inter prediction is performed by using the merge mode/skip mode; [0141] Whether to apply the DMVR may be determined based on whether the bilateral prediction (bi-prediction) is used).

However, PARK doesn’t explicitly disclose that the plurality of preset conditions comprises: a sym_mvd_flag is equal to 0, wherein the sym_mvd_flag being equal to 0 indicates that an mvd_coding syntax structure is present for the current picture block. Bross discloses that the plurality of preset conditions comprises: a sym_mvd_flag is equal to 0, wherein the sym_mvd_flag being equal to 0 indicates that an mvd_coding syntax structure is present for the current picture block (page 214 line 3 next to last - page 215 line 10, sym_mvd_flag[ xCb ][ yCb ] is equal to 0). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of PARK and Bross, to comprise a sym_mvd_flag that is equal to 0 in the plurality of preset conditions, in order to eliminate BDOF in SMVD to reduce computational complexity.

Regarding claim 2.
PARK discloses The non-transitory computer-readable medium of claim 1, wherein the information for use in decoding the video comprises an index of target candidate motion information ([0059] The motion information may include a motion vector and a reference picture index; [0337] The bi-predictive (B) slice may mean a slice decoded based on an inter prediction using one or more, e.g., two motion vectors and reference picture indexes), and wherein the target candidate motion information comprises: target candidate motion vectors comprising a first motion vector corresponding to a first list and a second motion vector corresponding to a second list ([0088] The motion information may further include L0 motion information and/or L1 motion information according to the inter-prediction type (L0 prediction, L1 prediction, Bi prediction, etc.). A L0-direction motion vector may be referred to as an L0 motion vector or MVL0 and an L1-direction motion vector may be referred to as an L1 motion vector or MVL1. A prediction based on the L0 motion vector may be referred to as an L0 prediction, a prediction based on the L1 motion vector may be referred to as an L1 prediction, and a prediction based on both the L0 motion vector and the L1 motion vector may be referred to as a bi-prediction); reference frame indices comprising a first reference frame index corresponding to the first list and a second reference frame index corresponding to the second list ([0337] The bi-predictive (B) slice may mean a slice decoded based on an inter prediction using one or more, e.g., two motion vectors and reference picture indexes; [0368] the motion information may include an L0 reference picture index and an L0 reference picture indicated by the L0 reference picture index in an L0 reference picture list and an L1 reference picture index and an L1 reference picture indicated by the L1 reference picture index in an L1 reference picture list); and prediction direction indication information comprising the predFlagL0 and the predFlagL1 and that indicates the bi-directional prediction is applied to the current picture block ([0146] Table 3, predFlagL0 and predFlagL1 are both equal to 1: Bilateral prediction).

Regarding claim 3. PARK discloses The non-transitory computer-readable medium of claim 1, wherein the information for use in decoding the video comprises: first indices indicating target candidate motion vector predictors, wherein the target candidate motion vector predictors comprise a first motion vector predictor corresponding to a first list and a second motion vector predictor corresponding to a second list ([0059] The motion information may include a motion vector and a reference picture index. The motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information, the inter predictor 221 may configure a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive a motion vector and/or a reference picture index of the current block; [0088] The motion information may further include L0 motion information and/or L1 motion information according to the inter-prediction type (L0 prediction, L1 prediction, Bi prediction, etc.). A L0-direction motion vector may be referred to as an L0 motion vector or MVL0 and an L1-direction motion vector may be referred to as an L1 motion vector or MVL1. A prediction based on the L0 motion vector may be referred to as an L0 prediction, a prediction based on the L1 motion vector may be referred to as an L1 prediction, and a prediction based on both the L0 motion vector and the L1 motion vector may be referred to as a bi-prediction; [0337] The bi-predictive (B) slice may mean a slice decoded based on an inter prediction using one or more, e.g., two motion vectors and reference picture indexes; [0368] When the bi-prediction is applied to the current block, the motion information may include an L0-direction motion vector (L0 motion vector) and an L1-direction motion vector (L1 motion vector)); a motion vector difference (MVD) comprising a first MVD corresponding to the first list or a second MVD corresponding to the second list ([0099] A motion vector difference (MVD) which is a difference obtained by subtracting the mvp from the motion vector of the current block may be derived. In this case, the information on the MVD may be signaled to the decoding apparatus); second indices indicating reference frames of the current picture block, wherein the reference frames comprise a first reference frame corresponding to the first list and a second reference frame corresponding to the second list ([0059] The motion information may include a motion vector and a reference picture index; [0337] The bi-predictive (B) slice may mean a slice decoded based on an inter prediction using one or more, e.g., two motion vectors and reference picture indexes; [0368] the motion information may include an L0 reference picture index and an L0 reference picture indicated by the L0 reference picture index in an L0 reference picture list and an L1 reference picture index and an L1 reference picture indicated by the L1 reference picture index in an L1 reference picture list); and prediction direction indication information comprising the first prediction direction indication information (predFlagL0) and the second prediction direction indication information (predFlagL1) indicating that the bi-directional prediction is applied to the current picture block ([0146] Table 3, predFlagL0 and predFlagL1 are both equal to 1: Bilateral prediction), and wherein a first motion vector corresponding to the first list is obtained based on the first motion vector predictor and the first MVD ([0099] A motion vector difference (MVD) which is a difference obtained by subtracting the mvp from the motion vector of the current block may be derived. In this case, the information on the MVD may be signaled to the decoding apparatus; [0228] BDOF and MVD are applied together), and wherein a second motion vector corresponding to the second list is obtained based on the second motion vector predictor and the second MVD ([0099] A motion vector difference (MVD) which is a difference obtained by subtracting the mvp from the motion vector of the current block may be derived. In this case, the information on the MVD may be signaled to the decoding apparatus; [0228] BDOF and MVD are applied together).

Regarding claim 4. PARK discloses The non-transitory computer-readable medium of claim 1, wherein the residual information of the current picture block comprises sample residuals based on sample values of the current picture block and the predicted sample values of the current picture block (figure 2, [0057] the generated residual signal is transmitted to the transformer 232; [0054] The residual processor 230 may further include a subtractor 231).

Regarding claim 5. PARK discloses The non-transitory computer-readable medium of claim 1, wherein the information for use in decoding the video comprises the merge_subblock_flag (see claim 1).

Regarding claim 6. Bross discloses The non-transitory computer-readable medium of claim 1, wherein the information for use in decoding the video comprises the sym_mvd_flag (see claim 1).

Regarding claim 7. PARK discloses The non-transitory computer-readable medium of claim 1, wherein the information for use in decoding the video comprises the bcwIdx (bcw_idx) (see claim 1).

Regarding claim 8. PARK discloses The non-transitory computer-readable medium of claim 1, wherein the information for use in decoding the video comprises the luma_weight_l0_flag[refIdxL0] or the luma_weight_l1_flag[refIdxL1] (see claim 1).

Regarding claim 9. PARK discloses The non-transitory computer-readable medium of claim 1, wherein when the plurality of preset conditions is not satisfied, skipping performing of the BDOF processing on the current picture block, and obtaining the predicted sample value of the current picture block through prediction based on the reference sample values corresponding to the first list and the reference sample values corresponding to the second list according to a decoder-side motion vector refinement (DMVR) technology comprises: obtaining the predicted sample values of the current picture block through prediction based on the reference sample values corresponding to the first list and the reference sample values corresponding to the second list according to the decoder-side motion vector refinement DMVR technology, when a size of the current picture block is a second preset size, wherein the second preset size is 8×8, 4×N, 8×16, or 16×8, wherein 8×8 indicates that the width of the current picture block is 8 samples and the height of the current picture block is 8 samples, wherein 4×N indicates that the width of the current picture block is 4 samples and the height of the current picture block is N samples, wherein 8×16 indicates that the width of the current picture block is 8 samples and the height of the current picture block is 16 samples, and wherein 16×8 indicates that the width of the current picture block is 16 samples and the height of the current picture block is 8 samples, and wherein N is a power of 2 and is greater than or equal to 8 ([0137] Table 2, CbHeight is greater than or equal to 8. : When the length (or size) of the block is larger than a threshold (e.g., 8) (here, the example of the threshold may be diversified) - CbHeight*CbWidth is greater than or equal to 64. : When the length (or size) of the block is larger than a threshold (e.g., 64) (here, the example of the threshold may be diversified); [0143]-[0144]).
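The "second preset size" recited in claim 9 can be expressed as a small predicate. This is an illustrative sketch of the claim language only, not code from any codec; the function name and structure are mine:

```python
def is_second_preset_size(width: int, height: int) -> bool:
    """True when a width x height block matches the 'second preset size'
    recited in claim 9: 8x8, 4xN (N a power of two, N >= 8), 8x16, or 16x8."""
    def pow2_ge8(n: int) -> bool:
        # a power of two has exactly one bit set, so n & (n - 1) == 0
        return n >= 8 and (n & (n - 1)) == 0

    if (width, height) in {(8, 8), (8, 16), (16, 8)}:
        return True
    return width == 4 and pow2_ge8(height)  # the 4xN family
```

Under this reading, an 8×8 or 4×32 block qualifies, while a 4×12 block does not (12 is not a power of two) and a 16×16 block falls outside the enumerated sizes.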
Regarding claims 10, 18, the same analysis has been stated in claims 1 and 4.
Regarding claims 11, 19, the same analysis has been stated in claim 2.
Regarding claims 12, 20, the same analysis has been stated in claim 3.
Regarding claims 13, 21, the same analysis has been stated in claim 5.
Regarding claims 14, 22, the same analysis has been stated in claim 6.
Regarding claims 15, 23, the same analysis has been stated in claim 7.
Regarding claims 16, 24, the same analysis has been stated in claim 8.
Regarding claims 17, 25, the same analysis has been stated in claim 9.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU whose telephone number is (571)270-7580. The examiner can normally be reached Mon. to Fri. 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH V. PERUNGAVOOR can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAOLAN XU/
Primary Examiner, Art Unit 2488
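Taken together, the preset conditions the rejection maps to PARK's Table 3 and to Bross's sym_mvd_flag amount to a single boolean gate on BDOF. The sketch below is a rough Python restatement of the claim-1 condition list, assuming the condition names used in the rejection; it is not drawn from any actual VVC implementation:

```python
def bdof_enabled(predFlagL0, predFlagL1, MotionModelIdc, merge_subblock_flag,
                 sym_mvd_flag, bcwIdx, cIdx, W, H,
                 luma_weight_l0_flag, luma_weight_l1_flag):
    """Evaluate the plurality of preset conditions from claim 1.

    When this returns False, the claim's alternative branch applies:
    skip BDOF and obtain the predicted samples via DMVR instead.
    """
    return (predFlagL0 == 1 and predFlagL1 == 1     # bi-directional prediction
            and MotionModelIdc == 0                 # translational (non-affine) motion
            and merge_subblock_flag == 0            # subblock merge mode not used
            and sym_mvd_flag == 0                   # no symmetric MVD (the Bross condition)
            and bcwIdx == 0                         # default bi-prediction weight
            and cIdx == 0                           # luma component only
            and H >= 8 and W >= 8 and W * H > 128   # minimum dimensions and area
            and luma_weight_l0_flag == 0            # no explicit L0 luma weighting
            and luma_weight_l1_flag == 0)           # no explicit L1 luma weighting
```

For example, a 16×16 bi-predicted translational luma block with all flags at their defaults passes the gate, while an otherwise identical 8×8 block fails the area condition (64 is not greater than 128), so BDOF is skipped.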

Prosecution Timeline

Feb 19, 2025: Application Filed
Apr 08, 2025: Response after Non-Final Action
Mar 06, 2026: Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598315: IMAGE ENCODING/DECODING METHOD AND DEVICE FOR DETERMINING SUB-LAYERS ON BASIS OF REQUIRED NUMBER OF SUB-LAYERS, AND BIT-STREAM TRANSMISSION METHOD (2y 5m to grant; granted Apr 07, 2026)
Patent 12586255: CONFIGURABLE POSITIONS FOR AUXILIARY INFORMATION INPUT INTO A PICTURE DATA PROCESSING NEURAL NETWORK (2y 5m to grant; granted Mar 24, 2026)
Patent 12587652: IMAGE CODING DEVICE AND METHOD (2y 5m to grant; granted Mar 24, 2026)
Patent 12581120: Method and Apparatus for Signaling Tile and Slice Partition Information in Image and Video Coding (2y 5m to grant; granted Mar 17, 2026)
Patent 12581092: TEMPORAL INITIALIZATION POINTS FOR CONTEXT-BASED ARITHMETIC CODING (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 87% (+13.3%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 334 resolved cases by this examiner. Grant probability derived from career allow rate.
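The headline figures follow from the career counts reported above. A quick check, assuming the dashboard rounds to the nearest point (an assumption on my part):

```python
# Recompute the dashboard's headline numbers from the raw career counts.
granted, resolved = 247, 334

allow_rate = granted / resolved              # career allow rate
print(round(allow_rate * 100))               # prints 74, the 74% grant probability

# Interview lift: the gap between the with-interview probability (87%)
# and the baseline allow rate. The page's +13.3% is presumably computed
# from the with/without-interview cohorts, so this simple difference
# (about 13.0 points) only approximates it.
print(round((0.87 - allow_rate) * 100, 1))
```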
