Prosecution Insights
Last updated: April 19, 2026
Application No. 18/982,186

INFORMATION DISPLAY DEVICE FOR VEHICLE

Status: Non-Final OA (§103)
Filed: Dec 16, 2024
Examiner: SARWAR, BABAR
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (893 granted / 1043 resolved; above average, +33.6% vs TC avg)
Interview Lift: +20.0% on resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 27 currently pending
Career History: 1,070 total applications across all art units

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 1043 resolved cases.
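The headline figures above follow from simple ratios over the examiner's resolved cases. A minimal sketch of the likely arithmetic (the rounding and delta conventions are my assumptions about the tool's methodology, not something the dashboard documents):

```python
# Reconstructing the dashboard's headline stats from the raw counts shown above.
# Rounding and "vs TC avg" conventions are assumed, not confirmed.

granted = 893
resolved = 1043

allow_rate = granted / resolved                  # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")    # ~85.6%, displayed as 86%

# The "+33.6% vs TC avg" delta implies a Tech Center average of roughly:
tc_avg = allow_rate - 0.336
print(f"Implied TC 3600 average: {tc_avg:.1%}")  # ~52.0%
```

If the per-statute percentages are computed the same way (rejections citing each statute over resolved cases), the same ratio-plus-delta pattern would apply to the §101/§102/§103/§112 rows.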

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-3 are presented for examination. Claims 1-3 are rejected.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims 1-3 in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “unit”.

A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA), sixth paragraph limitations: “As illustrated in FIG. 1, the vehicle information display device 100 includes an ECU (Electronic Control Unit) 10. ECU 10 is an electronic control unit that comprehensively manages the information display device 100 of vehicles. ECU 10 includes a CPU (Central Processing Unit) and a storage unit. The storage unit includes, for example, ROM (Read Only Memory), RAM (Random Access Memory), and EEPROM (Electrically Erasable Programmable Read-Only Memory). In ECU 10, for example, various functions are realized by executing a program stored in a storage unit by a CPU. ECU 10 may be composed of a plurality of electronic units. ECU 10 is connected to an external camera 1 (external sensor), a radar sensor 2 (external sensor), a user operation accepting unit 3, and a display 4…”, as disclosed in ¶ [0014]-¶ [0020] of the specification.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-3 is/are rejected under 35 U.S.C. 103 as being unpatentable over PHAN et al. (US Pub. No.: 2024/0087215 A1: hereinafter “PHAN”) in view of Secord et al. (US Pub. No.: 2015/0234580 A1: hereinafter “Secord”).

Consider claim 1: PHAN teaches an information display device for a vehicle (See PHAN, e.g., “…A system for generating an advanced surround view for a vehicle includes a plurality of cameras to generate image data. Sensors are configured to determine kinematic data of the vehicle…process image data captured by the plurality of cameras to generate a 360-degree surround view layer representative of a surrounding area…construct an improved bicycle kinematic model for processing kinematic data to generate an under-chassis layer representative of an area under the vehicle…generate a 3D vehicle model layer and overlay objects layer…display a GUI of combined scene of the 360-degree surround view layer, under-chassis layer, 3D vehicle model layer and overlay objects layer as would be viewed from a virtual camera viewpoint”, of ¶ [0012]-¶ [0050], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713), displaying an image representing a surrounding environment of an own vehicle on a display based on detection information of an external sensor of the own vehicle (See PHAN, e.g., “…A system for generating an advanced surround view for a vehicle includes a plurality of cameras to generate image data…generate a 3D vehicle model layer and overlay objects layer…display a GUI of combined scene of the 360-degree surround view layer, under-chassis layer, 3D vehicle model layer and overlay objects layer as would be viewed from a virtual camera viewpoint”, of ¶ [0012]-¶ [0050], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713), the information display device comprising:

a provision information acquisition unit for acquiring provision information that is information provided to a user of the own vehicle with respect to vehicle component of the own vehicle (See PHAN, e.g., “…The 360-degree surround view layer 101 shows a 360-degree background image around the vehicle, wherein the 360-degree background image is constructed from stitching images of a plurality of cameras…shows the vehicle's position relative to the surround landscape, wherein a user can adjust properties of the 3D vehicle model…portion or all of the 3D vehicle model layer 103 is rendered as displayed to be transparent to enable viewing at display screens a portion of the 360-degree surround view layer 101 and a portion of the under-chassis layer 102 and a portion of the overlay objects layer 104…”, of ¶ [0012]-¶ [0050], ¶ [0072]-¶ [0086], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713); and

an image display unit for displaying an own vehicle icon corresponding to the own vehicle, on the display (See PHAN, e.g., “…The overlay objects layer 104 may include, but not limited to, wheels guidelines, turning curves, distance levels indicator and/or collision warning such as icons indicating objects at risk of collision to provide the user with safety information…a head up display (HUD) or the like, and may be operable to display various information (as discrete characters, icons or the like, or in a multi-pixel manner) to the driver of the vehicle, such as passenger side inflatable restraint (PSIR) information, tire pressure status…”, of ¶ [0012]-¶ [0050], ¶ [0072]-¶ [0086], ¶ [0095]-¶ [0118], ¶ [0132]-¶ [0148], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713),

wherein, when the provision information is acquired, the image display unit displays an information provision display representing the provision information (See PHAN, e.g., “…creating a 3D surround mesh as a geometry of bowl-shape around the vehicle, wherein an under-chassis area of the vehicle is cropped to make the under-chassis layer below visible…creating a 360-degree texture unwrapping on the 3D surround mesh…for each camera in the plurality of cameras, generating 360-degree texture mapping data based on the 3D surround mesh, intrinsic parameters, extrinsic parameters and the virtual camera model, wherein the 360-degree texture mapping data is matched between pixels of cameras images and pixels of the 360-degree texture…”, of ¶ [0012]-¶ [0050], ¶ [0072]-¶ [0086], ¶ [0095]-¶ [0118], ¶ [0132]-¶ [0148], ¶ [0210], ¶ [0240], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713), superimposed on the own vehicle icon, so as to correspond to a position of the vehicle component related to the provision information in the own vehicle (See PHAN, e.g., “…generating an overlay objects layer showing at least one guideline of wheels, at least one distance level map and/or at least one collision warning icon…generating a virtual camera model to provide a point of observation of the user…constructively combining the 360-degree surround view layer, the under-chassis layer, the 3D vehicle model layer, and the overlay objects layer to provide the user with a 360-degree comprehensive and intuitive perspective scene without blind spots…”, of ¶ [0012]-¶ [0050], ¶ [0072]-¶ [0086], ¶ [0095]-¶ [0118], ¶ [0132]-¶ [0148], ¶ [0210], ¶ [0240], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713).
PHAN further teaches “…The overlay objects layer 104 may include, but not limited to, wheels guidelines, turning curves, distance levels indicator and/or collision warning such as icons indicating objects at risk of collision to provide the user with safety information…a head up display (HUD) or the like, and may be operable to display various information (as discrete characters, icons or the like, or in a multi-pixel manner) to the driver of the vehicle, such as passenger side inflatable restraint (PSIR) information, tire pressure status…”, of ¶ [0012]-¶ [0050], ¶ [0072]-¶ [0086], ¶ [0095]-¶ [0118], ¶ [0132]-¶ [0148], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713. However, PHAN does not explicitly teach an in-vehicle component of the own vehicle.

In an analogous field of endeavor, Secord teaches an in-vehicle component of the own vehicle (See Secord, e.g., “…displays a representation of the tire pressure for each tire within vehicle status information section 600, where the larger icon for the vehicle includes four smaller icons for the tires, each tire icon shaded or colored to indicate normal tire pressure, in this example, also marked by giving the value of 32 psi per tire…displays a representation of the tire pressure for each tire within vehicle status information section 602…two of the four smaller icons for the tires have a different shade meant to represent an alert condition for low tire pressure at that location…the displayed value for the tire pressure, 20 psi, for each of the tires experiencing lower pressure than the recommended value are shaded or highlighted to alert the driver to a low pressure condition…”, of Abstract, ¶ [0032]-¶ [0033], Figs. 1-6 elements 100-602).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine “…A system for generating an advanced surround view for a vehicle includes a plurality of cameras to generate image data. Sensors are configured to determine kinematic data of the vehicle…process image data captured by the plurality of cameras to generate a 360-degree surround view layer representative of a surrounding area…construct an improved bicycle kinematic model for processing kinematic data to generate an under-chassis layer representative of an area under the vehicle…generate a 3D vehicle model layer and overlay objects layer…display a GUI of combined scene of the 360-degree surround view layer, under-chassis layer, 3D vehicle model layer and overlay objects layer as would be viewed from a virtual camera viewpoint…”, as disclosed in PHAN with “an in-vehicle component of the own vehicle.”, as taught in Secord with a reasonable expectation of success to yield a system, method for efficiently, robustly, and seamlessly “…allows flexibility in terms of displaying more than one type of vehicle status at a time to the driver...”, as taught in ¶ [0001].

Consider claim 2: The combination of PHAN, Secord teaches everything claimed as implemented above in the rejection of claim 1. In addition, PHAN teaches further comprising a virtual space generating unit for generating virtual space (e.g., “…creating a 3D surround mesh as a geometry of bowl-shape around the vehicle…” of Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713) corresponding to the surrounding environment of the own vehicle, based on the detection information of the external sensor (See PHAN, e.g., “…generating an overlay objects layer showing at least one guideline of wheels, at least one distance level map and/or at least one collision warning icon…generating a virtual camera model to provide a point of observation of the user…constructively combining the 360-degree surround view layer, the under-chassis layer, the 3D vehicle model layer, and the overlay objects layer to provide the user with a 360-degree comprehensive and intuitive perspective scene without blind spots…”, of ¶ [0012]-¶ [0050], ¶ [0072]-¶ [0086], ¶ [0095]-¶ [0118], ¶ [0132]-¶ [0148], ¶ [0210], ¶ [0240], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713), wherein the image display unit displays an image, in the virtual space, as viewed from a virtual viewpoint that is operated by the user of the own vehicle, on the display (See PHAN, e.g., “…The system 200 further comprises display devices 204 to display a graphic user interface. The display devices 204 are viewable through the reflective element when the display is activated to display information…”, of ¶ [0012]-¶ [0050], ¶ [0072]-¶ [0086], ¶ [0095]-¶ [0118], ¶ [0132]-¶ [0148], ¶ [0210], ¶ [0240], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713), and the own vehicle icon is a three-dimensional icon corresponding to the own vehicle that is placed in the virtual space (See PHAN, e.g., “…creating a 3D surround mesh as a geometry of bowl-shape around the vehicle, wherein an under-chassis area of the vehicle is cropped to make the under-chassis layer below visible…creating a 360-degree texture unwrapping on the 3D surround mesh…for each camera in the plurality of cameras, generating 360-degree texture mapping data based on the 3D surround mesh, intrinsic parameters, extrinsic parameters and the virtual camera model, wherein the 360-degree texture mapping data is matched between pixels of cameras images and pixels of the 360-degree texture…”, of ¶ [0012]-¶ [0050], ¶ [0072]-¶ [0086], ¶ [0095]-¶ [0118], ¶ [0132]-¶ [0148], ¶ [0210], ¶ [0240], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713).

Consider claim 3: The combination of PHAN, Secord teaches everything claimed as implemented above in the rejection of claim 1. In addition, PHAN teaches wherein, when the provision information with respect to the vehicle component, other than an exterior of the own vehicle, is acquired (See PHAN, e.g., “…A system for generating an advanced surround view for a vehicle includes a plurality of cameras to generate image data…generate a 3D vehicle model layer and overlay objects layer…display a GUI of combined scene of the 360-degree surround view layer, under-chassis layer, 3D vehicle model layer and overlay objects layer as would be viewed from a virtual camera viewpoint”, of ¶ [0012]-¶ [0050], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713), the image display unit deletes an exterior image of the own vehicle icon, displays an internal structure image corresponding to an internal structure of the own vehicle on the display (See PHAN, e.g., “…generate a 3D vehicle model layer and overlay objects layer…display a GUI of combined scene of the 360-degree surround view layer, under-chassis layer, 3D vehicle model layer and overlay objects layer as would be viewed from a virtual camera viewpoint”, of ¶ [0012]-¶ [0050], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713), and displays the information provision display, superimposed on the internal structure image (See PHAN, e.g., “…A system for generating an advanced surround view for a vehicle includes a plurality of cameras to generate image data…generate a 3D vehicle model layer and overlay objects layer…display a GUI of combined scene of the 360-degree surround view layer, under-chassis layer, 3D vehicle model layer and overlay objects layer as would be viewed from a virtual camera viewpoint…The overlay objects layer 104 may include, but not limited to, wheels guidelines, turning curves, distance levels indicator and/or collision warning such as icons indicating objects at risk of collision to provide the user with safety information…”, of ¶ [0012]-¶ [0050], Fig. 1 elements 100-104, Figs. 2-3 elements 200-318, Figs. 6-9 steps 601-713).
Secord teaches an in-vehicle component of the own vehicle (See Secord, e.g., “…displays a representation of the tire pressure for each tire within vehicle status information section 600, where the larger icon for the vehicle includes four smaller icons for the tires, each tire icon shaded or colored to indicate normal tire pressure, in this example, also marked by giving the value of 32 psi per tire…displays a representation of the tire pressure for each tire within vehicle status information section 602…two of the four smaller icons for the tires have a different shade meant to represent an alert condition for low tire pressure at that location…the displayed value for the tire pressure, 20 psi, for each of the tires experiencing lower pressure than the recommended value are shaded or highlighted to alert the driver to a low pressure condition…”, of Abstract, ¶ [0032]-¶ [0033], Figs. 1-6 elements 100-602). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify PHAN with the teachings of Secord so as, with a reasonable expectation of success, to yield a system, method for efficiently, robustly, and seamlessly “…allows flexibility in terms of displaying more than one type of vehicle status at a time to the driver...”, as taught in ¶ [0001].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

GULATI et al. (US Pub. No.: 2020/0198466 A1) teaches “Systems, methods, and devices of the various embodiments enable dynamic configuration of a number of display regions of interest (ROIs) associated with safety critical content presented on a display, such as a vehicle display. Various embodiments may enable verification of data integrity for ROIs on a display. Various embodiments may enable the selection of different sets of display ROIs from a plurality of independent sets of display ROIs each associated with its own set of stored integrity check values (ICVs). Various embodiments may enable stored ICVs to be used to verify the data integrity of ROIs on a display. Various embodiments may enable the set of display ROIs and the associated ICVs for each display ROI to be changed after a number of frames have been displayed.”

Watanabe et al. (US Pub. No.: 2020/0167996 A1) teaches “A periphery monitoring device includes: an acquisition unit configured to acquire a captured image from an imaging unit that captures an image of a periphery of a vehicle; a generation unit configured to generate a vehicle surrounding image indicating a situation around the vehicle in a virtual space based on the captured image; and a processing unit configured to display, on a display device, an image in which an own vehicle image is overlapped on the vehicle surrounding image, the own vehicle image indicating the vehicle in which a transmissive state of a constituent plane representing a plane constituting the vehicle is determined according to a direction of the constituent plane, and the vehicle surrounding image being represented based on a virtual viewpoint facing the vehicle in the virtual space.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BABAR SARWAR whose telephone number is (571)270-5584. The examiner can normally be reached on Mon-Fri 9:00 AM-5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris S. Almatrahi can be reached on (313)446-4821.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BABAR SARWAR/
Primary Examiner, Art Unit 3667

Prosecution Timeline

Dec 16, 2024: Application Filed
Feb 21, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600370: VEHICULAR CONTROL SYSTEM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602800: TIRE STATE ESTIMATION METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602933: VEHICULAR SENSING SYSTEM WITH OCCLUSION ESTIMATION FOR USE IN CONTROL OF VEHICLE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12594947: DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586465: METHOD AND APPARATUS FOR ASSISTING RIGHT TURN OF AUTONOMOUS VEHICLE BASED ON UWB COMMUNICATION AND V2X COMMUNICATION AT INTERSECTION (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+20.0%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 1043 resolved cases by this examiner. Grant probability derived from career allow rate.
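The projections above are consistent with the simplest combination of the two displayed inputs: add the 20-point interview lift to the base grant probability and cap the result at a ceiling. This is a guess at the dashboard's methodology (the cap value in particular is an assumption), sketched here only to show how 86% and +20.0% could produce the displayed 99%:

```python
# Hypothetical reconstruction of the "With Interview" figure.
# Assumption: additive lift with a 99% ceiling; not documented by the tool.
def with_interview(base: float, lift: float, cap: float = 0.99) -> float:
    """Combine a base grant probability with an interview lift, capped."""
    return min(base + lift, cap)

print(with_interview(0.86, 0.20))  # 0.99 (86% + 20 points, capped at 99%)
```

Under this assumption, any base rate of 79% or higher hits the 99% ceiling, which would explain why the interview-adjusted figure is displayed as a flat 99% rather than 106%.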
