Prosecution Insights
Last updated: April 19, 2026
Application No. 18/610,191

IMAGE FORGERY DETECTION VIA HEADPOSE ESTIMATION

Non-Final OA (§103, §DP)
Filed: Mar 19, 2024
Examiner: LIU, XIAO
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: PayPal, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89%, above average (257 granted / 290 resolved; +26.6% vs TC avg)
Interview Lift: +11.5% among resolved cases with interview (moderate, roughly +12%)
Typical Timeline: 2y 9m average prosecution; 44 applications currently pending
Career History: 334 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 290 resolved cases

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/09/2024 has been considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 2, 7-9, 12-13, and 18-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, 8, and 15 of U.S. Patent No. 12067475 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 2, 7-9, 12-13, and 18-20 can be anticipated by claims 1, 3, 8, and 15 of U.S. Patent No. 12067475 B2, respectively.
Although claims 3-6 and 10-11 are system claims and claim 14 is a method claim of the instant application, the subject matter of these claims is disclosed in the computer program product claims 15, 18, and 20 of U.S. Patent No. 12067475 B2. Thus, claims 3-6, 10-11, and 14 are rejected as well on the ground of nonstatutory double patenting as being unpatentable over claims 15, 18, and 20 of U.S. Patent No. 12067475 B2.

Claim chart — Instant Application 18610191 vs. U.S. Patent No. 12067475 B2:

Instant claims 2, 7, and 9:

2. (New) A system comprising: a processor that executes computer-executable instructions stored in a computer-readable memory, which causes the processor to: determine, by executing at least one of a plurality of trained machine learning models, a pose of an object depicted in a document; and determine, by executing at least one of the plurality of trained machine learning models, whether the document is authentic or forged based on the pose of the object.

7. (New) The system of claim 2, wherein a determination of whether the document is authentic or forged is based on a single pose of the object as depicted in the document.

9. (New) The system of claim 2, wherein a determination of whether the document is authentic or forged is based solely on the pose of the object without comparing the object depicted in the document to another object.

Reference claim 1:

1. A system, comprising: a processor that executes computer-executable instructions stored in a computer-readable memory, which causes the processor to: receive a document from a client device; identify, by executing a first trained machine learning model, an object that is depicted in the document; determine, by executing a second trained machine learning model, a single pose of the object; and determine, by executing a third trained machine learning model, whether the document is authentic or forged based solely on the single pose of the object without comparing the object depicted in the document to another object.

Instant claim 8:

8. (New) The system of claim 2, wherein the computer-executable instructions are further executable to cause the processor to output, by executing at least one of the plurality of trained machine learning models, a cropped image of the object prior to a determination of the pose of the object.

Reference claim 3:

3. The system of claim 1, wherein the first trained machine learning model receives as input the document and produces as output a cropped image of the object.

Instant claims 12 and 13:

12. (New) A computer-implemented method, comprising: accessing or receiving, by a device operatively coupled to a processor, an image representative of a document provided by a computing device; detecting, by the device and via execution of at least one of a plurality of trained machine learning models, an object that is depicted in the image; determining, by the device and via execution of at least one of the plurality of trained machine learning models, a pose of the object; and determining, by the device and via execution of at least one of the plurality of trained machine learning models, whether the document is authentic or forged based on the pose of the object.

13. (New) The computer-implemented method of claim 12, wherein the determining whether the document is authentic or forged is based solely on the pose of the object without comparing the object depicted in the document to another object.

Reference claim 8:

8. A computer-implemented method, comprising: accessing, by a device operatively coupled to a processor, a document provided by a computing device; detecting, by the device and via execution of a first trained neural network, an object that is depicted in the document; computing, by the device and via execution of a second trained neural network, a single pose of the object; and labeling, by the device and via execution of a third trained neural network, the document as authentic or forged based solely on the single pose of the object without comparing the object depicted in the document to another object.

Instant claims 18, 19, and 20:

18. (New) A computer program product for facilitating image forgery detection via headpose estimation, the computer program product comprising a non-transitory computer-readable memory having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: determine, by executing at least one of a plurality of trained machine learning models, a headpose of a face of a person depicted in a document; and determine, by executing at least one of the plurality of trained machine learning models, whether the document is authentic or forged based on the headpose of the face depicted in the document.

19. (New) The computer program product of claim 18, wherein the document comprises a proof-of-identity document.

20. (New) The computer program product of claim 18, wherein a determination that the document is authentic or forged is made without comparison of the document to any other stored document.

Reference claim 15:

15. A computer program product for facilitating image forgery detection via headpose estimation, the computer program product comprising a non-transitory computer-readable memory having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: acquire a scanned image of a proof-of-identity document; recognize, by executing a first trained artificial intelligence algorithm, a face depicted in the scanned image of the proof-of-identity document; estimate, by executing a second trained artificial intelligence algorithm, a single headpose of the face; and classify, by executing a third trained artificial intelligence algorithm, an authenticity of the proof-of-identity document based solely on the single headpose of the face without comparing the face depicted in the scanned image to another face.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-21 are rejected under 35 U.S.C. 103 as being unpatentable over Huber (US 20210398135 A1) in view of Wang et al. (US 20200394392 A1), hereinafter Wang.

Regarding claim 2, Huber discloses a system comprising (Abstract; FIGS. 1-3; [0083]): a processor that executes computer-executable instructions stored in a computer-readable memory (FIG. 3; [0065]), which causes the processor to (FIG. 1): determine, by executing at least one of the plurality of trained machine learning models, whether the document is authentic or forged based on the pose of the object (FIG. 1, images 102, 115, 144, verification modules 150-1, 150-2, … 150-x; [0022], “one or more machine learning models that are trained to predict a likelihood … an image of a physical identification document depicts a counterfeit document”; [0023]; [0024], “a component of an anticounterfeiting architecture whose presence or absence in an image of a physical document can be detected by a machine learning model trained … detect an absence of a security feature where it should exist, presence of a security feature where it should not exist, incorrect security features … abnormal head pose, abnormal head rotation, non-frontal facial effects, presence of facial occlusions …”; [0043]-[0044]; [0046]).

Huber does not disclose determining the head pose of the object using at least one of the plurality of trained machine learning models. In the same field of endeavor, Wang teaches a method for detecting a face image (Wang: Abstract; FIGS. 1-6). Wang further teaches determining the head pose of the object using at least one of the plurality of trained machine learning models (Wang: FIG. 2, step 203; FIG. 3; FIG. 4, step 403; [0060], “the head pose estimation may be performed using a head pose estimation model based on a convolutional neural network”; [0081]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Huber with the teaching of Wang by determining the head pose of the person in the document in order to provide object features for document authentication.

Regarding claim 12, Huber discloses a computer-implemented method, comprising (Abstract; FIGS. 1-3; [0083]): accessing or receiving, by a device operatively coupled to a processor (FIG. 3; [0065]; FIG. 1), an image representative of a document provided by a computing device (FIG. 1); detecting, by the device and via execution of at least one of a plurality of trained machine learning models, an object that is depicted in the image (FIG. 1; [0023], “physical identification document verification module can be trained to detect the presence or absence of security features in an image of the physical identification document”; [0041], “extraction module 130 can extract different portions of the image 115 such as the first image portion 115a, a profile image 144, personal information 146”; [0043]; [0044], “… generated by … a machine learning model”); and determining, by executing at least one of the plurality of trained machine learning models, whether the document is authentic or forged based on the pose of the object (FIG. 1, images 102, 115, 144, verification modules 150-1, 150-2, … 150-x; [0022], “one or more machine learning models that are trained to predict a likelihood … an image of a physical identification document depicts a counterfeit document”; [0023]; [0024], “a component of an anticounterfeiting architecture whose presence or absence in an image of a physical document can be detected by a machine learning model trained … detect an absence of a security feature where it should exist, presence of a security feature where it should not exist, incorrect security features … abnormal head pose, abnormal head rotation, non-frontal facial effects, presence of facial occlusions …”; [0043]-[0044]; [0046]).

Huber does not disclose determining the head pose of the object using at least one of the plurality of trained machine learning models. In the same field of endeavor, Wang teaches a method for detecting a face image (Wang: Abstract; FIGS. 1-6). Wang further teaches determining the head pose of the object using at least one of the plurality of trained machine learning models (Wang: FIG. 2, step 203; FIG. 3; FIG. 4, step 403; [0060], “the head pose estimation may be performed using a head pose estimation model based on a convolutional neural network”; [0081]). Wang also teaches detecting a face image using a neural network (Wang: FIG. 2, steps 201-202; FIG. 4, steps 401-402; FIG. 5; [0041]; [0045]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Huber with the teaching of Wang by determining the head pose of the person in the document in order to provide object features for document authentication.
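The rejection maps the claims onto a three-stage pipeline: one trained model detects and crops the face in the document image, a second estimates a single head pose, and a third classifies the document as authentic or forged from that pose alone. The following is a minimal illustrative sketch of that flow, not the applicant's or either reference's implementation; every class, function, and the 15-degree threshold are hypothetical stand-ins.

```python
# Hedged sketch of the claimed three-model pipeline. All names and the
# angle threshold are hypothetical; the claims do not specify them.
from dataclasses import dataclass


@dataclass
class HeadPose:
    yaw: float    # degrees, left/right head rotation
    pitch: float  # degrees, up/down head rotation
    roll: float   # degrees, in-plane head tilt


def detect_face(document_image):
    """Stand-in for the first trained model: returns a cropped face image."""
    ...


def estimate_pose(face_crop) -> HeadPose:
    """Stand-in for the second trained model: returns a single head pose."""
    ...


def classify_document(pose: HeadPose, max_angle: float = 15.0) -> str:
    """Stand-in for the third trained model: labels the document based
    solely on the pose, without comparing the face to any other face."""
    frontal = all(abs(a) <= max_angle for a in (pose.yaw, pose.pitch, pose.roll))
    return "authentic" if frontal else "forged"
```

The key structural point the chart turns on is visible here: the classification step consumes only the pose, with no comparison of the depicted face to another stored face or document.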
Regarding claim 18, Huber discloses a computer program product for facilitating image forgery detection via headpose estimation, the computer program product comprising a non-transitory computer-readable memory having program instructions embodied therewith, the program instructions executable by a processor (FIG. 3) to cause the processor to (Abstract; FIGS. 1-3; [0083]): determine, by executing at least one of the plurality of trained machine learning models, whether the document is authentic or forged based on the pose of the object (FIG. 1, images 102, 115, 144, verification modules 150-1, 150-2, … 150-x; [0022], “one or more machine learning models that are trained to predict a likelihood … an image of a physical identification document depicts a counterfeit document”; [0023]; [0024], “a component of an anticounterfeiting architecture whose presence or absence in an image of a physical document can be detected by a machine learning model trained … detect an absence of a security feature where it should exist, presence of a security feature where it should not exist, incorrect security features … abnormal head pose, abnormal head rotation, non-frontal facial effects, presence of facial occlusions …”; [0043]-[0044]; [0046]).

Huber does not disclose determining the head pose of the face of a person using at least one of the plurality of trained machine learning models. In the same field of endeavor, Wang teaches a method for detecting a face image (Wang: Abstract; FIGS. 1-6). Wang further teaches determining the head pose of the face of a person using at least one of the plurality of trained machine learning models (Wang: FIG. 2, step 203; FIG. 3; FIG. 4, step 403; [0060], “the head pose estimation may be performed using a head pose estimation model based on a convolutional neural network”; [0081]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Huber with the teaching of Wang by determining the head pose of the person in the document in order to provide object features for document authentication.

Regarding claim 3, Huber in view of Wang teaches the system of claim 2. The combination further teaches wherein the object comprises a face of a person and wherein the pose comprises a headpose of the person (Huber: FIG. 1; [0024]; see also Wang: FIGS. 2-5).

Regarding claim 4, Huber in view of Wang teaches the system of claim 3. Huber does not disclose determining at least one orientation angle of the face of the person. In the same field of endeavor, Wang teaches a method for detecting a face image (Wang: Abstract; FIGS. 1-6). Wang further teaches determining at least one orientation angle of the face of the person (Wang: FIG. 3; [0058]; [0063]; [0079]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Huber with the teaching of Wang by determining the head pose of the person in the document in order to provide object features for document authentication.

Regarding claim 5, Huber in view of Wang teaches the system of claim 4. Huber does not disclose wherein the at least one orientation angle comprises one or more of a yaw angle, a roll angle, or a pitch angle that collectively define a headpose of the face. In the same field of endeavor, Wang teaches a method for detecting a face image (Wang: Abstract; FIGS. 1-6). Wang further teaches wherein the at least one orientation angle comprises one or more of a yaw angle, a roll angle, or a pitch angle that collectively define a headpose of the face (Wang: FIG. 3; [0079]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Huber with the teaching of Wang by determining the head pose of the person in the document in order to provide object features for document authentication.

Regarding claim 6, Huber in view of Wang teaches the system of claim 4. Huber does not disclose wherein a determination of whether the document is authentic or forged is further based on the at least one orientation angle of the face of the person. In the same field of endeavor, Wang teaches a method for detecting a face image (Wang: Abstract; FIGS. 1-6). Wang further teaches wherein a determination of whether the document is authentic or forged is further based on the at least one orientation angle of the face of the person (Wang: FIGS. 2-4; [0064]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Huber with the teaching of Wang by determining the head pose of the person in the document in order to provide object features for document authentication.

Regarding claim 7, Huber in view of Wang teaches the system of claim 2. The combination further teaches wherein a determination of whether the document is authentic or forged is based on a single pose of the object as depicted in the document (Huber: [0024], “abnormal head pose”; see also Wang: FIGS. 2-5).

Regarding claim 8, Huber in view of Wang teaches the system of claim 2. The combination further teaches wherein the computer-executable instructions are further executable to cause the processor to output, by executing at least one of the plurality of trained machine learning models, a cropped image of the object prior to a determination of the pose of the object (Huber: FIG. 1; [0041]; [0043], “The image portions 142, 144, 146, 148 extracted from the image 115, by the extraction module 130, can be provided to the input module 140 as an input …”; FIG. 2, step 220; [0062], “trained to generate output data indicating … a portion of an input image”; [0022]; it is a common practice to generate cropped images using one or more machine learning models before further processing images. See also Wilf et al (US 20190005359 A1): FIG. 3, step 310; [0141]; FIG. 4; [0171]; FIG. 15).

Regarding claims 9, 13, and 20, Huber in view of Wang teaches the system of claim 2, the method of claim 12, and the computer program product of claim 18. The combination further teaches wherein a determination of whether the document is authentic or forged is based solely on the pose of the object without comparing the object depicted in the document to another object (Huber: [0024], “abnormal head pose”; see also Wang: FIGS. 2-5).

Regarding claims 10 and 19, Huber in view of Wang teaches the system of claim 2 and the computer program product of claim 18. The combination further teaches wherein the document comprises a proof-of-identity document (Huber: [0022], “a physical identification document such as a driver's license, passport, birth certificate, or the like”).

Regarding claim 11, Huber in view of Wang teaches the system of claim 2. The combination further teaches wherein the object depicted in the document comprises a static image of a person depicted in the document (Huber: FIG. 1).

Regarding claim 14, Huber in view of Wang teaches the method of claim 12. The combination further teaches wherein the object comprises a face of a person and wherein a determination that the document is forged comprises determining that the pose of the face is not a forward-facing headpose (Huber: FIG. 1; [0024], “abnormal head pose”; FIG. 1 shows a forward-facing image. It is also a common practice to use a left-facing, right-facing, or forward-facing head pose, etc., for authentication purposes. See Nechyba et al (US 20140016837 A1): FIG. 9).

Regarding claim 15, Huber in view of Wang teaches the method of claim 12. Huber does not disclose wherein the determining that the pose of the face is not the forward-facing headpose comprises determining at least one of: (i) that the face has eyes that are not horizontally aligned, (ii) that the face is rolled to a right or left side more than a first threshold amount, (iii) that the face is pitched upward or downward more than a second threshold amount, (iv) that the face is yawed to the right or left by more than a third threshold amount. In the same field of endeavor, Wang teaches a method for detecting a face image (Wang: Abstract; FIGS. 1-6). Wang further teaches wherein the determining that the pose of the face is not the forward-facing headpose comprises determining at least one of: (i) that the face has eyes that are not horizontally aligned, (ii) that the face is rolled to a right or left side more than a first threshold amount, (iii) that the face is pitched upward or downward more than a second threshold amount, (iv) that the face is yawed to the right or left by more than a third threshold amount (Wang: FIGS. 2-4; [0064]; [0084]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Huber with the teaching of Wang by determining the head pose of the person in the document in order to provide object features for document authentication.

Regarding claim 16, Huber in view of Wang teaches the method of claim 12. Huber does not disclose wherein the object comprises a face of a person and wherein the determining of the pose of the object further comprises identifying at least one facial landmark on the face. In the same field of endeavor, Wang teaches a method for detecting a face image (Wang: Abstract; FIGS. 1-6). Wang further teaches wherein the object comprises a face of a person and wherein the determining of the pose of the object further comprises identifying at least one facial landmark on the face (Wang: FIG. 2, steps 201-202; FIG. 4, steps 401-402; FIG. 5, steps 501-502). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Huber with the teaching of Wang by determining the head pose of the person in the document in order to provide object features for document authentication.

Regarding claim 17, Huber in view of Wang teaches the method of claim 16. Huber does not disclose wherein the at least one facial landmark on the face comprises at least one of a nose, an eye, an ear, a chin, or a cheek. In the same field of endeavor, Wang teaches a method for detecting a face image (Wang: Abstract; FIGS. 1-6). Wang further teaches wherein the at least one facial landmark on the face comprises at least one of a nose, an eye, an ear, a chin, or a cheek (Wang: FIGS. 2, 4-5; [0044], “facial keypoint may include points on … eyes, nose, lips, and eyebrows”; [0050]; [0053]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Huber with the teaching of Wang by determining the head pose of the person in the document in order to provide object features for document authentication.

Regarding claim 21, Huber in view of Wang teaches the method of claim 12. The combination further teaches wherein a determination that the document is authentic or forged is made without comparison of the document to any other stored document (Huber: FIG. 1; [0024], “abnormal head pose”; FIG. 1 shows a forward-facing image. It is also a common practice to use a left-facing, right-facing, or forward-facing head pose, etc., for authentication purposes. See Nechyba et al (US 20140016837 A1): FIG. 9).
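Claim 15 recites a disjunctive four-prong test for a non-forward-facing pose (misaligned eyes, or roll, pitch, or yaw beyond a threshold). That test can be sketched as a simple predicate; the threshold values below are hypothetical, since the claim only requires "more than a first/second/third threshold amount" without fixing any numbers.

```python
# Hedged sketch of claim 15's disjunctive test. All thresholds are
# hypothetical placeholders; the claim does not specify values.
def is_not_forward_facing(eye_y_left: float, eye_y_right: float,
                          roll: float, pitch: float, yaw: float,
                          eye_align_tol: float = 2.0,
                          roll_thresh: float = 10.0,
                          pitch_thresh: float = 15.0,
                          yaw_thresh: float = 20.0) -> bool:
    return (
        abs(eye_y_left - eye_y_right) > eye_align_tol  # (i) eyes not horizontally aligned
        or abs(roll) > roll_thresh                     # (ii) rolled right/left too far
        or abs(pitch) > pitch_thresh                   # (iii) pitched up/down too far
        or abs(yaw) > yaw_thresh                       # (iv) yawed right/left too far
    )
```

Because the prongs are joined by "at least one of," satisfying any single prong suffices to deem the pose non-forward-facing, which is why the rejection only needs to show each prong somewhere in Wang.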
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO LIU, whose telephone number is (571) 272-4539. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:30-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAO LIU/
Primary Examiner, Art Unit 2664

Prosecution Timeline

Mar 19, 2024 — Application Filed
Jan 23, 2026 — Non-Final Rejection (§103, §DP)
Feb 25, 2026 — Examiner Interview Summary
Feb 25, 2026 — Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603972 — WIRELESS TRANSMITTER IDENTIFICATION IN VISUAL SCENES — granted Apr 14, 2026 (2y 5m to grant)
Patent 12592069 — OBJECT RECOGNITION METHOD AND APPARATUS, AND DEVICE AND MEDIUM — granted Mar 31, 2026 (2y 5m to grant)
Patent 12579834 — Information Extraction Method and Apparatus for Text With Layout — granted Mar 17, 2026 (2y 5m to grant)
Patent 12576873 — SYSTEM AND METHOD OF CAPTIONS FOR TRIGGERS — granted Mar 17, 2026 (2y 5m to grant)
Patent 12573175 — TARGET TRACKING METHOD, TARGET TRACKING SYSTEM AND ELECTRONIC DEVICE — granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 99% (+11.5%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 290 resolved cases by this examiner. Grant probability derived from career allow rate.
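The headline figures are internally consistent and can be checked with simple arithmetic: the 89% career allow rate is 257 grants over 290 resolved cases, and the 99% with-interview figure follows from applying the +11.5% interview lift as a relative (multiplicative) adjustment to the base rate. Treating the lift as multiplicative is an assumption inferred from the numbers shown, not a documented formula.

```python
# Reproduce the dashboard's headline numbers from its underlying counts.
granted, resolved = 257, 290
allow_rate = granted / resolved            # ~0.886, displayed as 89%
interview_lift = 0.115                     # +11.5% relative lift (assumed multiplicative)
with_interview = allow_rate * (1 + interview_lift)

print(round(allow_rate * 100))       # 89
print(round(with_interview * 100))   # 99
```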
