Prosecution Insights
Last updated: April 19, 2026
Application No. 17/925,108

VEHICLE DEVELOPMENT SUPPORT SYSTEM

Status: Non-Final OA (§103, §112)
Filed: Nov 14, 2022
Examiner: PIERRE LOUIS, ANDRE
Art Unit: 2187
Tech Center: 2100 — Computer Architecture & Software
Assignee: Subaru Corporation
OA Round: 1 (Non-Final)

Grant Probability: 68% (Favorable); 82% with interview
OA Rounds: 1-2
To Grant: 3y 7m

Examiner Intelligence

Career Allow Rate: 68% (439 granted / 646 resolved), +13.0% vs TC avg (above average)
Interview Lift: +14.3% (moderate), comparing resolved cases with an interview against those without
Typical Timeline: 3y 7m average prosecution; 29 applications currently pending
Career History: 675 total applications across all art units

Statute-Specific Performance

§101: 28.5% (-11.5% vs TC avg)
§103: 38.6% (-1.4% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 646 resolved cases.
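
To make the arithmetic behind these dashboard figures concrete, here is a minimal Python sketch of how examiner statistics like the career allow rate, interview lift, and per-statute rates could be derived from resolved-case records. The record schema, field names, and the simple with/without-interview lift definition are illustrative assumptions, not the analytics vendor's actual methodology.

```python
# Illustrative only: one plausible way to compute examiner statistics like
# those shown above. The schema and metric definitions are assumptions.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # did the application issue as a patent?
    had_interview: bool  # was an examiner interview held?
    rejections: dict     # statute -> overcome?, e.g. {"103": True}

def allow_rate(cases):
    """Share of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate difference: cases with an interview vs. without."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

def statute_rate(cases, statute):
    """Share of cases that overcame a rejection under the given statute."""
    hit = [c for c in cases if statute in c.rejections]
    return sum(c.rejections[statute] for c in hit) / len(hit)

# Toy data; the real dashboard aggregates 646 resolved cases.
cases = [
    ResolvedCase(True, True, {"103": True}),
    ResolvedCase(False, False, {"103": False, "112": False}),
    ResolvedCase(True, False, {"101": True, "103": True}),
]
print(f"allow rate: {allow_rate(cases):.0%}")           # 67%
print(f"interview lift: {interview_lift(cases):+.0%}")  # +50%
print(f"§103 rate: {statute_rate(cases, '103'):.0%}")   # 67%
# Sanity check against the headline figure: 439 grants / 646 resolved.
print(f"career allow rate: {439 / 646:.0%}")            # 68%
```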

Office Action

Grounds: §103 obviousness rejections (claims 1-20); §112(f) claim interpretation
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1-20 are presented for examination.

Claim Rejections - 35 USC § 103

3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3.0 Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nassar (US PG-Pub No. 2021/0294944) in view of Higuchi et al. (US PG-Pub No. 2009/0312850).

3.1 In considering claim 1, Nassar teaches a vehicle development support system comprising:

a visualization apparatus configured to generate a video including an operation apparatus mounted in a vehicle (see para [0127]: in some examples, one or more of the camera(s) may be used to perform advanced driver assistance systems (ADAS) functions (e.g., as part of a redundant or fail-safe design); for example, a Multi-Function Mono Camera may be installed to provide functions including lane departure warning, traffic sign assist and intelligent headlamp control; one or more of the camera(s) (e.g., all of the cameras) may record and provide image data (e.g., video) simultaneously; para [0128]: one or more of the cameras may be mounted in a mounting assembly, such as a custom designed (3-D printed) assembly, in order to cut out stray light and reflections from within the car (e.g., reflections from the dashboard reflected in the windshield mirrors) which may interfere with the camera's image data capture abilities; with reference to wing-mirror mounting assemblies, the wing-mirror assemblies may be custom 3-D printed so that the camera mounting plate matches the shape of the wing-mirror; in some examples, the camera(s) may be integrated into the wing-mirror, and for side-view cameras, the camera(s) may also be integrated within the four pillars at each corner of the cabin; see also [0215]);

a virtual operation apparatus configured to display the video including the operation apparatus and generated by the visualization apparatus (see para [0215]: the vehicle 102 may further include the infotainment SoC 830 (e.g., an in-vehicle infotainment system (IVI)); the infotainment SoC 830 may include a combination of hardware and software that may be used to provide audio and video (e.g., TV, movies, streaming, etc.); for example, the infotainment SoC 830 may include navigation systems, video players, an HMI display 834, a telematics device, a control panel (e.g., for controlling and/or interacting with various components, features, and/or systems), and/or other components; the infotainment SoC 830 may further be used to provide information (e.g., visual and/or audible) to a user(s) of the vehicle, such as information from the ADAS system 838, autonomous driving information such as planned vehicle maneuvers, trajectories, surrounding environment information (e.g., intersection information, vehicle information, road information, etc.), and/or other information), and to output an operation signal corresponding to a pseudo operation input by an operator on the displayed video of the operation apparatus (see para [0123]: one or more of the controller(s) 836 may receive inputs (e.g., represented by input data) from an instrument cluster 832 of the vehicle 102 and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface (HMI) display 834, an audible annunciator, a loudspeaker, and/or via other components of the vehicle 102; the outputs may include information about objects and status of objects as perceived by the controller(s) 836; for example, the HMI display 834 may display information about driving maneuvers the vehicle has made, is making, or will make (e.g., changing lanes now, taking exit 34B in two miles, etc.); para [0084]: this data may be used to generate an instance of the simulated environment corresponding to the field of view of a remote operator of the virtual vehicle controlled by the remote operator, and the portion of the simulated environment may be projected on a display (e.g., a display of a VR headset, a computer or television display, etc.) for assisting the remote operator in controlling the virtual vehicle through the simulated environment 610; the controls generated or input by the remote operator using the vehicle simulator component(s) 622 may be transmitted to the simulator component(s) 602 for updating a state of the virtual vehicle within the simulated environment 610; see further para [0121]: for example, the controller(s) may send signals to operate the vehicle brakes via one or more brake actuators 848 and to operate the steering system 854 via one or more steering actuators 856);

an electronic control unit configured to output a control signal for controlling an in-vehicle device in response to the operation signal (see para [0121]: controller(s) 836 may provide signals (e.g., representative of commands) to one or more components and/or systems of the vehicle 102; for example, the controller(s) may send signals to operate the vehicle brakes via one or more brake actuators 848, to operate the steering system 854 via one or more steering actuators 856, and to operate the propulsion system 850 via one or more throttle/accelerators 852; the controller(s) 836 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals and output operation commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving the vehicle 102; para [0122]: the controller 836 may provide the signals for controlling one or more components and/or systems of the vehicle 102 in response to sensor data received from one or more sensors (e.g., sensor inputs));

a real-time simulator configured to simulate a motion of the in-vehicle device in response to the control signal and to output a simulation result to the visualization apparatus and the electronic control unit (see para [0076]-[0078]: the simulator component(s) 602 of the simulation system 600 may communicate with vehicle simulator component(s) 606 over a wired and/or wireless connection; the simulator component(s) 602 may include one or more GPUs 604 configured to simulate steering of one or more virtual vehicles 102 (e.g., along a desired path or route) when the propulsion system 850 is operating (e.g., when the vehicle is in motion); the steering system 854 may receive signals from a steering actuator 856; see further [0119]-[0121]);

and a synchronization apparatus configured to synchronize communication (see para [0091]: by using the vehicle hardware 104, the other vehicle simulator component(s) 606 within the simulation environment 600 may be configured for communication with the vehicle hardware 104, which may be an ECU; for example, because the vehicle hardware 104 may be configured for installation within a physical vehicle (e.g., the vehicle 102), the vehicle hardware 104 may be configured to communicate over one or more connection types and/or communication protocols that are not standard in computing environments (e.g., in server-based platforms, in general-purpose computers, etc.); para [0096]: the vehicle simulator component(s) 606 may include one or more GPUs 652 that may provide, in an example, video streams that may be synchronized using sync component(s) 654), in which the simulation result in the real-time simulator is input into the electronic control unit with communication in which the control signal is input into the real-time simulator (see para [0083]: for example, the vehicle simulator component(s) 622 may receive (e.g., retrieve, obtain, etc.), from the global simulation (e.g., represented by the simulated environment 610) hosted by the simulator component(s) 602, data that corresponds to, is associated with, and/or is required by the vehicle simulator component(s) 622 to perform one or more operations for the PIL object; in such an example, data corresponding to each sensor of the PIL object may be received from the simulator component(s) 602 and may be used to generate an instance of the simulated environment corresponding to the field of view of a remote operator of the virtual vehicle controlled by the remote operator, and the portion of the simulated environment may be projected on a display for assisting the remote operator in controlling the virtual vehicle through the simulated environment 610; the controls generated or input by the remote operator using the vehicle simulator component(s) 622 may be transmitted to the simulator component(s) 602; para [0123]: one or more of the controller(s) 836 may receive inputs (e.g., represented by input data) from an instrument cluster 832 of the vehicle 102 and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface (HMI) display 834, an audible annunciator, a loudspeaker, and/or via other components of the vehicle 102; see further [0217]),

wherein the visualization apparatus updates the video including the operation apparatus in accordance with the simulation result in the real-time simulator (see para [0084]: this data may be used to generate an instance of the simulated environment corresponding to the field of view of a remote operator of the virtual vehicle controlled by the remote operator, and the portion of the simulated environment may be projected on a display for assisting the remote operator in controlling the virtual vehicle through the simulated environment 610; the controls generated or input by the remote operator using the vehicle simulator component(s) 622 may be transmitted to the simulator component(s) 602 for updating a state of the virtual vehicle within the simulated environment 610).

It is submitted that the synchronization of data between the simulator and the ECU (see [0217]) is taught or suggested by Nassar, in which the vehicle hardware 104 may also be used as an ECU in communication with the simulator used to simulate the vehicle 102 (see further Figs. 6-8), and thus would have been obvious to a person of ordinary skill in the art. Nevertheless, Higuchi et al. teaches a simulation apparatus comprising a plurality of ECUs 50A-50C that simulate other controllers, and a simulation management unit 2 that manages the execution of an input and output process to a real ECU 300 (see abstract; para [0008]: to evaluate operations of the plurality of ECUs connected to each other via the network, it is considered that the HILS is provided for each ECU and simulation is performed while the HILSs are synchronizing with each other; see further para [0079]: data exchange between the simulator and the ECUs). Nassar and Higuchi et al. are analogous art because they are from the same field of endeavor and the model analyzed by Higuchi et al. is similar to that of Nassar. Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the applicant's invention, to combine the method of Higuchi et al. with that of Nassar because Higuchi et al. teaches the improvement of model efficiency (see [0056]).

3.2 Regarding claim 2, the combined teachings of Nassar and Higuchi et al. teach a system wherein a position in a virtual space of the operation apparatus displayed in the virtual operation apparatus corresponds to a position of the operation apparatus which the operator can touch in the pseudo operation input by the operator (see Nassar para [0235]: the computing device 900 may include depth cameras, such as touchscreen technology, and combinations of these, for gesture detection and recognition; para [0237]: the presentation component(s) 918 may include a display (e.g., a monitor, a touch screen, a television screen, a heads-up-display (HUD), other display types, or a combination thereof), speakers, and/or other presentation components; the presentation component(s) 918 may receive data from other components (e.g., the GPU(s) 908, the CPU(s) 906, etc.) and output the data (e.g., as an image, video, sound, etc.)). Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the applicant's invention, to combine the method of Higuchi et al. with that of Nassar because Higuchi et al. teaches the improvement of model efficiency (see [0056]).

3.3 As per claims 3 and 8, the combined teachings of Nassar and Higuchi et al. teach a system wherein the virtual operation apparatus includes a motion detector that detects a motion of the operator (see Nassar para [0161]: many applications for Level 3-5 autonomous driving require motion estimation/stereo matching on-the-fly (e.g., structure from motion, pedestrian recognition, lane detection, etc.); para [0235]: additionally, the computing device 900 may include accelerometers or gyroscopes (e.g., as part of an inertia measurement unit (IMU)) that enable detection of motion), wherein the pseudo operation input by the operator is detected with the motion detector (see Nassar para [0235]: the I/O components 914 may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user; in some instances, inputs may be transmitted to an appropriate network element for further processing; an NUI may implement any combination of speech recognition, stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with a display of the computing device 900; the computing device 900 may include depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, touchscreen technology, and combinations of these, for gesture detection and recognition; additionally, the computing device 900 may include accelerometers or gyroscopes (e.g., as part of an inertia measurement unit (IMU)) that enable detection of motion), and wherein the motion detector detects a direction of a line of sight of the operator and displays a vehicle interior image corresponding to the detected direction of the line of sight (see Nassar para [0083]: this data may be used to generate an instance of the simulated environment corresponding to the field of view of a remote operator of the virtual vehicle controlled by the remote operator, and the portion of the simulated environment may be projected on a display (e.g., a display of a VR headset, a computer or television display, etc.) for assisting the remote operator in controlling the virtual vehicle through the simulated environment 610; para [0235]: in some examples, the output of the accelerometers or gyroscopes may be used by the computing device 900 to render immersive augmented reality or virtual reality; para [0237]: the presentation component(s) 918 may include a display (e.g., a monitor, a touch screen, a television screen, a heads-up-display (HUD), other display types, or a combination thereof), speakers, and/or other presentation components, and may receive data from other components (e.g., the GPU(s) 908, the CPU(s) 906, etc.) and output the data (e.g., as an image, video, sound, etc.)). Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the applicant's invention, to combine the method of Higuchi et al. with that of Nassar because Higuchi et al. teaches the improvement of model efficiency (see [0056]).

3.4 With regard to claims 4 and 9-11, the combined teachings of Nassar and Higuchi et al. teach the display apparatus configured to cause multiple persons to visually recognize a video of the simulation result, which is combined with the video displayed by the virtual operation apparatus (see Nassar para [0033]: in particular, an evaluator can be used to fetch a state of an autonomous vehicle 102 (or the virtual representation thereof) when a simulation is running and can then search for the objective of a user-specified declarative description based on expected behavior; in some embodiments, multiple evaluators can run at the same time and, based on the analysis of each of these evaluators, the search space can be optimized (e.g., to help with debugging, because other problems that are not the root cause can be discounted); para [0086]: the simulation system 600C may include any number of HIL objects (e.g., each including its own vehicle simulator component(s) 606), any number of SIL objects (e.g., each including its own vehicle simulator component(s) 620), any number of PIL objects (e.g., each including its own vehicle simulator component(s) 622), and/or any number of AI objects (not shown, but which may be hosted by the simulation component(s) 602 and/or separate compute nodes, depending on the embodiment)). Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the applicant's invention, to combine the method of Higuchi et al. with that of Nassar because Higuchi et al. teaches the improvement of model efficiency (see [0056]).

3.5 As per claims 5 and 12-14, the combined teachings of Nassar and Higuchi et al. teach a system wherein the real-time simulator calculates an outside environment influencing a vehicle behavior to reflect the calculated outside environment in the simulation result (see Nassar para [0083]: this data may be used to generate an instance of the simulated environment corresponding to the field of view of a remote operator of the virtual vehicle controlled by the remote operator, and the portion of the simulated environment may be projected on a display (e.g., a display of a VR headset, a computer or television display, etc.) for assisting the remote operator in controlling the virtual vehicle through the simulated environment 610; para [0114]: the simulated environment may further include physics, traffic simulation, weather simulation, and/or other features and simulations for the simulated environment; the GI engine 734 may calculate global illumination in the environment once and share the calculation with each of the nodes 718(1)-718(N) and 720(1)-720(N) (e.g., the calculation of GI may be view independent)). Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the applicant's invention, to combine the method of Higuchi et al. with that of Nassar because Higuchi et al. teaches the improvement of model efficiency (see [0056]).

3.6 Regarding claims 6 and 15-17, the combined teachings of Nassar and Higuchi et al. teach a system wherein the real-time simulator creates an event for the outside environment to reflect the created event in the simulation result (see Nassar para [0114]: the GI engine 734 may calculate global illumination in the environment once and share the calculation with each of the nodes 718(1)-718(N) and 720(1)-720(N) (e.g., the calculation of GI may be view independent); the simulated environment 728 may include an AI universe 722 that provides data to GPU platforms 724 (e.g., GPU servers) that may create renderings for each sensor of the vehicle (e.g., at the virtual sensor/codec(s) 718 for a first virtual object and at the virtual sensor/codec(s) 720 for a second virtual object)). Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the applicant's invention, to combine the method of Higuchi et al. with that of Nassar because Higuchi et al. teaches the improvement of model efficiency (see [0056]).

3.7 As per claims 7 and 18-20, the combined teachings of Nassar and Higuchi et al. teach a system wherein the electronic control unit is disposed in a frame body including a cockpit in which the operator sits (see Nassar, ECU 832 disposed within the vehicle 102; Fig. 6 further shows a vehicle simulator component having a driver sitting in the cockpit; see further para [0123]: one or more of the controller(s) 836 may receive inputs (e.g., represented by input data) from an instrument cluster 832 of the vehicle 102 and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface (HMI) display 834). Therefore, it would have been obvious to a person of ordinary skill in the art, at the time of filing of the applicant's invention, to combine the method of Higuchi et al. with that of Nassar because Higuchi et al. teaches the improvement of model efficiency (see [0056]).

Claim Interpretation

4. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

4.1 The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and (C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

4.2 This application includes one or more claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "visualization apparatus configured to…"; "virtual operation apparatus configured to…"; "electronic control unit configured to…"; "real-time simulator configured to…"; "synchronization apparatus configured to…"; and "display apparatus configured to…" in claims 1, 4, and 9-11.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Conclusion

5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

5.1 Frick (US PG-Pub No. 2008/0077370) teaches a system and method for integrating a process control system for a technical installation or a technical process into a training simulator, where the training simulator interacts with a virtual PC.

5.2 Sheridan (US PG-Pub No. 2007/0136041) teaches a vehicle operations simulator with augmented reality.

6. Claims 1-20 are rejected and this action is non-final.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRE PIERRE-LOUIS, whose telephone number is (571) 272-8636. The examiner can normally be reached M-F, 9:00 AM-5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, EMERSON C PUENTE, can be reached at 571-272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDRE PIERRE LOUIS/
Primary Patent Examiner, Art Unit 2187
February 4, 2026
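
Stepping back from the legal mapping: the claim 1 architecture that the examiner reads onto Nassar is essentially a closed hardware-in-the-loop (HIL) control loop. The operator's pseudo input becomes an operation signal, the ECU converts it to a control signal, the real-time simulator computes the in-vehicle device's motion, and the simulation result feeds back to both the ECU and the visualization in synchronized steps (the role Higuchi's synchronized HILS plays in the combination). The following Python sketch shows one minimal lockstep version of such a loop; every class and method name is a hypothetical illustration, not the applicant's, Nassar's, or Higuchi's implementation.

```python
# Hypothetical sketch of the claimed closed loop: operator input -> operation
# signal -> ECU control signal -> real-time simulator -> simulation result fed
# back to the ECU and the visualization, advanced in lockstep time steps.
# Names are illustrative; this is not any party's actual implementation.

class VirtualOperationApparatus:
    """Displays the rendered operation apparatus and emits operation signals."""
    def read_operator_input(self, t: float) -> float:
        # Stand-in for a detected pseudo operation (e.g. a gesture on a
        # displayed switch); here, a fixed pedal position after t = 1 s.
        return 0.3 if t >= 1.0 else 0.0

class ElectronicControlUnit:
    """Maps operation signals to control signals, reacting to feedback."""
    def control(self, operation_signal: float, feedback: float) -> float:
        # Simple proportional controller toward the requested position.
        return 2.0 * (operation_signal - feedback)

class RealTimeSimulator:
    """Simulates in-vehicle device motion in response to control signals."""
    def __init__(self) -> None:
        self.state = 0.0  # e.g. throttle-plate position
    def step(self, control_signal: float, dt: float) -> float:
        self.state += control_signal * dt  # first-order plant model
        return self.state  # the "simulation result"

def run_lockstep(steps: int = 50, dt: float = 0.05) -> None:
    operator = VirtualOperationApparatus()
    ecu = ElectronicControlUnit()
    sim = RealTimeSimulator()
    result = 0.0
    for i in range(steps):
        t = i * dt
        # Synchronization: each tick, the control signal into the simulator
        # and the simulation result back into the ECU advance together.
        op = operator.read_operator_input(t)
        ctrl = ecu.control(op, result)
        result = sim.step(ctrl, dt)
        # A visualization apparatus would update its video from `result` here.
    print(f"final simulated position: {result:.3f}")

run_lockstep()
```

The design point the loop illustrates is the one the rejection turns on: the control signal into the simulator and the simulation result back into the ECU must advance under a shared clock, which is exactly the synchronization function the examiner attributes to Nassar's sync components and Higuchi's synchronized HILS.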

Prosecution Timeline

Nov 14, 2022
Application Filed
Feb 05, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602523: RACK-BASED DESIGN VERIFICATION AND MANAGEMENT (2y 5m to grant; granted Apr 14, 2026)
Patent 12561218: Automatic Functional Test Pattern Generation based on DUT Reference Model and Unique Scripts (2y 5m to grant; granted Feb 24, 2026)
Patent 12546217: Machine-Learning based Rig-Site On-Demand Drilling Mud Characterization, Property Prediction, and Optimization (2y 5m to grant; granted Feb 10, 2026)
Patent 12541626: VIRTUAL REVIEW SYSTEM FOR LAND DEVELOPMENT (2y 5m to grant; granted Feb 03, 2026)
Patent 12518866: Method and System for Measuring, Predicting and Optimizing Human Alertness (2y 5m to grant; granted Jan 06, 2026)
Study what changed in these cases to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview: 82% (+14.3%)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 646 resolved cases by this examiner. Grant probability is derived from the career allow rate; the sketch below shows the assumed arithmetic.
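
The projection appears to compose the two headline numbers additively. A minimal sketch, assuming that additive model (an assumption about the dashboard's arithmetic, not a documented formula):

```python
# Assumed additive model: grant probability = career allow rate, and the
# with-interview figure adds the +14.3-point interview lift, capped at 100%.
base = 439 / 646                      # career allow rate -> ~68%
with_interview = min(base + 0.143, 1.0)
print(f"base: {base:.0%}, with interview: {with_interview:.0%}")  # 68%, 82%
```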
