Prosecution Insights
Last updated: April 19, 2026
Application No. 18/183,214

RESOURCE ALLOCATION USING VIRTUALIZED ENHANCED RESOURCES

Status: Non-Final Office Action (§103), OA Round 3
Filed: Mar 14, 2023
Examiner: CHU JOY, JORGE A
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
Grant Probability: 77% (Favorable); 99% with interview
Expected OA Rounds: 3-4
Median Time to Grant: 3y 1m

Examiner Intelligence

Career allow rate: 77% (above average; 314 granted / 408 resolved; +22.0% vs TC avg)
Interview lift: +37.3% on resolved cases with an interview (strong)
Typical timeline: 3y 1m average prosecution; 41 applications currently pending
Career history: 449 total applications across all art units
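
These headline numbers are internally consistent; a minimal arithmetic check (assuming the "+22.0% vs TC avg" figure is a simple percentage-point difference, which the panel does not state):

```python
# Reported career figures for this examiner (from the panel above).
granted, resolved = 314, 408
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")           # -> 77.0%

# Implied Tech Center average, assuming "+22.0% vs TC avg" is a
# percentage-point delta (an assumption, not stated by the panel).
print(f"Implied TC average: {allow_rate - 0.220:.1%}")  # -> 55.0%
```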

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 3.2% (-36.8% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 408 resolved cases.
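
One consistency check worth noting: if the "vs TC avg" figures are percentage-point deltas (an assumption), all four statutes imply the same Tech Center baseline:

```python
# Statute-specific rates and their reported deltas vs the TC average.
rates = {
    "§101": (11.0, -29.0),
    "§103": (55.3, +15.3),
    "§102": (3.2, -36.8),
    "§112": (19.6, -20.4),
}
# Each implied baseline (rate - delta) comes out to exactly 40.0%,
# consistent with a single Tech Center average estimate across statutes.
for statute, (rate, delta) in rates.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
```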

Office Action

§103
DETAILED ACTION

Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/01/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 2018/0121222 A1) in view of Lonappan (US 2014/0298345 A1). Sharma and Lonappan were cited in the previous Office Action for different claims.

Regarding claim 1, Sharma teaches a computer system comprising:

a central processing unit (CPU) associated with a host computer, wherein the CPU comprises CPU functionality and on-chip enhanced CPU functionality, wherein the CPU functionality and the on-chip enhanced CPU functionality are present in the CPU concurrently ([0014] Hardware processor 120 may be one or more central processing units (CPUs), semiconductor-based microprocessors, and/or other hardware devices suitable for retrieval and execution of instructions stored in machine-readable storage medium 130. Hardware processor 120 may fetch, decode, and execute instructions, such as 132-138, to control processes for determining virtual network function configurations. As an alternative or in addition to retrieving and executing instructions, hardware processor 120 may include one or more electronic circuits that include electronic components for performing the functionality of one or more instructions, e.g., a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC).; wherein the FPGA and ASIC correspond to the enhanced CPU functionality);

a virtualized first instance of the CPU ([0021] The virtualized hardware resources may include a variety of hardware resources to be allocated for performing the particular VNF, such as a number of virtual machines, a number of data processors, a speed of the data processors, an amount of memory, a speed of the memory, and data storage space available, to name a few.; Fig. 2, Infrastructure configuration A 216A) comprising: a configurable virtualized first instance of the CPU functionality, and a configurable virtualized first instance of the on-chip enhanced CPU functionality (Fig. 2, Infrastructure configuration 216A shows Option A: enabled, Option B: enabled, Option C: disabled, Option D: enabled, and Option E: disabled; [0023] Many different values may be available for each of virtualized hardware resources that comprise the resource configuration. For example, there may be many options for varying the number of virtual machines, processors, and/or processor cores, or for varying the values for processor speeds, memory capacity, memory speed, and/or data storage capacity.; [0032] The example data flow 200 depicts three different infrastructure configurations, 216A, 216B, and 216C, though more infrastructure configurations 216 may be tested. Each example infrastructure configuration includes a default resource configuration 215 that specifies default virtualized hardware resource allocation for the virtual machine(s) 220. For example, the default resource configuration 215 may specify that each infrastructure configuration 216 is to be tested on one virtual machine that includes 1 processor, 1 GB of memory, and 1 GB of data storage. Each example infrastructure configuration 216 depicts multiple infrastructure options and their status for each iteration, e.g., enabled or disabled.); and

a virtualized second instance of the CPU ([0021], quoted above; Fig. 2, Infrastructure configuration B 216B) comprising: a configurable virtualized second instance of the CPU functionality; and a configurable virtualized second instance of the on-chip enhanced CPU functionality (Fig. 2, Infrastructure configuration 216B shows Option A: disabled, Option B: enabled, Option C: enabled, Option D: enabled, and Option E: disabled; [0023] and [0032], quoted above).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to understand the selectable processor cores to encompass the additional ASICs and FPGAs. Therefore, Sharma's varying configurations of options/cores (ASICs or FPGAs) encompass the limitations of claim 1.
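
For readers less familiar with Sharma's data flow, the mapped teaching amounts to iterating over infrastructure configurations with per-option enable/disable flags on a default resource allocation. A minimal sketch, with all names and the test criterion as illustrative assumptions rather than code from the reference:

```python
from dataclasses import dataclass, field

@dataclass
class InfraConfig:
    """One infrastructure configuration (cf. Sharma Fig. 2, 216A-216C):
    per-option enable/disable flags tested on a default resource allocation."""
    name: str
    options: dict  # e.g. {"A": True, "B": True, ...}
    resources: dict = field(default_factory=lambda: {
        "vcpus": 1, "memory_gb": 1, "storage_gb": 1})  # default config (cf. [0032])

def meets_threshold(config):
    """Stand-in for the per-iteration performance test of [0008]: deploy the
    VNF under `config` and compare measured performance against thresholds."""
    return config.options.get("C", False)  # illustrative criterion only

# Iterate candidate configurations until one meets the performance threshold.
candidates = [
    InfraConfig("216A", {"A": True,  "B": True, "C": False, "D": True, "E": False}),
    InfraConfig("216B", {"A": False, "B": True, "C": True,  "D": True, "E": False}),
]
chosen = next((c for c in candidates if meets_threshold(c)), None)
print(chosen.name if chosen else "no configuration met the threshold")  # -> 216B
```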
Sharma does not teach a virtualized-resource matching (VRM) algorithm configured to, for each resource request, responsive to a resource request and status information associated with the configurable virtualized first instance of the CPU functionality, the configurable virtualized first instance of the on-chip enhanced CPU functionality, the configurable virtualized second instance of the CPU functionality, and the configurable virtualized second instance of the on-chip enhanced CPU functionality: classify the resource request to a task type, select the virtualized first instance of the CPU, enable or disable the configurable virtualized first instance of the CPU functionality, and enable or disable the configurable virtualized first instance of the on-chip enhanced CPU functionality based at least in part on the task type.

However, Lonappan teaches this limitation (Abstract: Initially, a value representing a number of processor cores to be enabled within the computer system is received. The computer system includes multiple processors, and each of the processors includes multiple processor cores. Next, a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks; [0004] IBM's Capacity on Demand service, a service that allows cores of a processor to be activated on an as-needed basis by the user buying or leasing the additional resources.; [0006] Performance can be improved when the characteristics of a task to be performed on the additional cores are known, and corresponding processor cores are activated accordingly. In accordance with a preferred embodiment of the present invention, a value representing a number of processor cores to be enabled within a computer system is received. The computer system includes multiple processors, and each of the processors includes multiple processor cores. Next, a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks.; [0012]; [0014]; [0015] A human system manager can use management console 18 to specify the above-mentioned task requirement to direct hypervisor 11 to enable the appropriate processor cores among processors 13a-13d within computer system 10. Wherein the hypervisor executes the functions of enabling and disabling based on task type, these functions are interpreted to correspond to the VRM algorithm; [0016] Starting at block 20, a management console, such as management console 18 from FIG. 1, awaits for a user, such as a human system manager, to enter a desired number of processors cores to be enabled, as shown in block 21. Next, management console 18 awaits for the user to enter a scale variable related to a desired task optimization, as depicted in block 22. The user can set the scale variable based on the task type. The scale variable can be set to a value ranging from 1 to the maximum number of processor cores within a processor. For example, if the user intends to optimize computer system 10 for processor-centric tasks, the scale variable should be set to a maximum value (i.e., maximum number of processor cores within one processor). On the other hand, if the user intends to optimize computer system 10 for I/O-centric tasks, the scale variable is set to a minimum value (i.e., 1). Otherwise, the scale variable can be set to a value at somewhere in the middle of its range.; [0017] Alternatively, instead of obtaining from a user for information such as a desired number of processors cores to be enabled and/or a scale variable related to a desired task optimization, those information can be automatically obtained from a historical database (not shown). The historical database keeps track of the number of processors cores enabled as well as the scale variable for each task that was being optimized in the past.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lonappan's teaching of enabling/disabling cores based on task type with the teachings of Sharma, so that execution of a task is optimized using a scaling factor derived from the task type. The modification would have been motivated by the desire to tailor the computing supply to the requirements of the workload.
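
The contrast the rejection draws can be made concrete: Lonappan's scale variable maps a task type to how enabled cores are packed onto, or spread across, processors. A minimal sketch of that control flow, with the names and packing rule as illustrative assumptions rather than anything disclosed verbatim:

```python
MAX_CORES_PER_PROCESSOR = 8  # illustrative; Lonappan leaves this system-specific

def scale_for_task(task_type):
    """Lonappan's scale variable ([0016]): processor-centric tasks get the
    maximum (pack cores onto one processor), I/O-centric tasks get 1
    (spread across processors), anything else lands mid-range."""
    if task_type == "processor-centric":
        return MAX_CORES_PER_PROCESSOR
    if task_type == "io-centric":
        return 1
    return MAX_CORES_PER_PROCESSOR // 2

def enable_cores(task_type, cores_requested, processors):
    """Hypervisor-style selection (cf. [0015], [0018]): enable up to `scale`
    cores per processor until the requested count is reached; all other
    cores stay disabled."""
    scale = scale_for_task(task_type)
    enabled = []
    for proc in processors:
        for core in proc[:scale]:
            if len(enabled) == cores_requested:
                return enabled
            enabled.append(core)
    return enabled

# Four processors with eight cores each: an I/O-centric request for four
# cores is spread one-per-processor, as the scale variable intends.
procs = [[f"p{i}c{j}" for j in range(8)] for i in range(4)]
print(enable_cores("io-centric", 4, procs))  # ['p0c0', 'p1c0', 'p2c0', 'p3c0']
```

A processor-centric request for the same four cores would instead pack them all onto the first processor, which is the distinction the Applicant's argument turns on.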
Regarding claim 2, Sharma teaches processing a resource request comprising a request to access one or more resources associated with the host computer, wherein the one or more resources associated with the host computer include the CPU, the virtualized first instance of the CPU, and the virtualized second instance of the CPU ([0008] To identify an infrastructure configuration that specifies which software and hardware features are to be used/enabled for virtual machines that perform a particular VNF, a device may iterate through various infrastructure configurations using a default resource configuration to identify an infrastructure configuration that meets certain performance thresholds and/or demonstrates improvement in performance. Similarly, a resource configuration that specifies the virtualized hardware resources to be used for performing a particular VNF may be identified by iterating through various resource configurations using the previously identified infrastructure configuration. The identified infrastructure and resource configurations may be associated with the particular VNF, in a manner designed to ensure that future deployments of the VNF use the identified infrastructure and resource configurations.; [0012]; [0022]). In addition, Lonappan teaches a virtual machine manager (VMM) configured to execute the VRM algorithm to process a resource request comprising a request to access one or more resources associated with the host computer ([0012] The management of processors 13a-13d, memory devices 15 and storage subsystem 16 may be performed via a software tool known as a hypervisor 11. In addition, hypervisor 11 enables multiple operating systems to share the hardware resources within computer system 10 by allowing each operating system to "think" it has exclusive control of all the hardware resources within computer system 10. Hypervisor 11 allocates resources to each operating system and ensures the operating systems cannot interfere with each other. As part of its resource allocation function, hypervisor 11 can also enable and disable various resources within computer system 10.; [0017], quoted above; [0018] After the number of processor cores and the scale variable have been defined and entered into management console 18 by the user, management console 18 passes those parameters to a hypervisor, such as hypervisor 11 from FIG. 1, as shown in block 23. In turn, hypervisor 11 determines which processor cores in which processors to be enabled, as depicted in block 24, and then enables those processor cores accordingly, as shown in block 25.; [0025], hypervisor used for handling tasks).

Regarding claim 3, Lonappan teaches setting one or more flags of the virtualized first instance of the CPU ([0015] Once the allocation has been completed, hypervisor 11 saves that configuration in a non-volatile memory (not shown) so computer system 10 will have the correct configuration on subsequent boots until the next configuration change by the human system manager.). Further, Lonappan teaches wherein the VMM is configured to enable or disable the configurable virtualized first instance of the CPU functionality and the configurable virtualized first instance of the on-chip enhanced CPU functionality by setting one or more flags of the virtualized first instance of the CPU ([0013] Another software tool known as a management console 18 enables a human system manager to configure various system components as appropriate to various tasks computer system 10 is expected to perform. Management console 18 communicates system configuration changes to hypervisor 11 that performs the necessary component activation and deactivation.).

Regarding claim 4, Lonappan teaches wherein the VRM is further configured to fulfill the resource request (Abstract: a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks.).

Regarding claim 5, Lonappan teaches wherein the VRM is further configured to record a usage of the virtualized first instance of the CPU while fulfilling the resource request ([0017] The historical database keeps track of the number of processors cores enabled as well as the scale variable for each task that was being optimized in the past.).

Regarding claim 6, Sharma teaches wherein the virtualized first instance of the on-chip enhanced CPU functionality comprises multiple types of the on-chip enhanced CPU functionality ([0014], [0021], [0023], [0032]). In addition, Lonappan teaches that the VRM algorithm is further configured to enable one of multiple types of the virtualized first instance of the on-chip enhanced CPU functionality based at least in part on the task associated with the resource request ([0006], [0012], [0014]-[0017]).

Regarding claims 8-13, they are method claims having similar limitations as claims 1-6, respectively. Therefore, they are rejected under the same rationale. Regarding claims 15-19, they are media/product claims having similar limitations as claims 1, 2, 3, 4-5, and 6, respectively. Therefore, they are rejected under the same rationale.

Claims 7, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma and Lonappan as applied to claim 1, and further in view of Durham et al. (US 2019/0004843 A1). Durham was cited in the previous Office Action.

Regarding claim 7, neither Sharma nor Lonappan explicitly teaches wherein the multiple types of the virtualized first instance of the on-chip enhanced CPU functionality comprise an accelerator on-chip enhanced CPU functionality and a compression on-chip enhanced CPU functionality. However, Durham teaches this limitation ([0017] In addition, the CPU 120 also includes a compression/encryption support block 122. The compression/encryption support block 122 may be embodied as any functional block, digital logic, microcode, or other component capable of performing memory replay prevention techniques using compressive encryption as described herein.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Durham with those of Sharma and Lonappan to further define the types of additional processing/accelerating capabilities. The modification would have been motivated by the desire to combine known elements to yield predictable results.

Regarding claims 14 and 20, they are a method claim and a media/product claim, respectively, having similar limitations as claim 7 above. Therefore, they are rejected under the same rationale.
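
Claim 7's two recited functionality types suggest a per-task dispatch; a hedged sketch of what the claimed enable/disable decision could look like (the task names and mapping are hypothetical, not from Durham or the claims):

```python
from enum import Enum, auto

class Enhanced(Enum):
    """The two on-chip enhanced functionality types recited in claim 7."""
    ACCELERATOR = auto()
    COMPRESSION = auto()

# Hypothetical task-type -> functionality map (not from any cited reference):
# the VRM-style decision enables one functionality type per classified request.
TASK_MAP = {
    "inference": {Enhanced.ACCELERATOR},
    "archival":  {Enhanced.COMPRESSION},
}

def configure_instance(task_type):
    """Return enable/disable flags for a virtualized CPU instance, keyed on
    the task type the resource request was classified to."""
    wanted = TASK_MAP.get(task_type, set())
    return {f.name: (f in wanted) for f in Enhanced}

print(configure_instance("archival"))
# -> {'ACCELERATOR': False, 'COMPRESSION': True}
```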
Response to Arguments

Applicant's arguments filed 01/05/2026 have been fully considered but they are not persuasive. In the Remarks, Applicant argues:

(I) It is respectfully submitted that the obviousness rejections of at least the independent claims are in error and should be withdrawn. The Examiner asserts that Sharma combined with Schuette and further in view of Lonappan and Durham renders obvious the claimed virtualized resource matching operation that selects and enables or disables on-chip enhanced CPU functionality based on a task associated with a resource request. The Examiner relies on Sharma for virtualized hardware resources with options enabled or disabled and for the presence of FPGA or ASIC hardware alongside a CPU. The Examiner relies on Schuette for switching CPU modes based on workload efficiency. The Examiner adds Lonappan for a hypervisor executing resource allocation and Durham for identifying compression or encryption accelerator functionality. Even under the broadest reasonable interpretation, this combination does not teach or suggest the claimed per-request task classification mapped to enabling or disabling specific on-chip accelerator functionality within virtualized CPU instances. Lonappan teaches that a hypervisor can enable a selected subset of processor cores based on a user-specified number of cores and a scale variable related to optimization type, and provides a mathematical allocation across processors. This is an activation for processor-intensive versus I/O-intensive tasks and does not disclose classification of a resource request by task and enabling or disabling on-chip accelerator functionality such as AI compression or cryptography for a virtualized CPU instance. See Lonappan, Paragraphs [0012], [0018], and [0019]. Considering the combination, Sharma provides offline configuration discovery and generic enable or disable of test options. Schuette provides a CPU mode toggle between hyper-threading states based on spot checks. Lonappan provides hypervisor-controlled core allocation based on a scale variable. Durham identifies an example of a compression or encryption support block. None of these references teaches or suggests a VRM algorithm that, responsive to a resource request and associated status across multiple virtualized instances, selects a particular virtualized CPU instance and classifies the resource request to a task type and then enables or disables the specific on-chip enhanced CPU functionality in that instance based on that task to fulfill the request. The references do not disclose the claimed per-request, task-aware matching of accelerator capability with enable or disable control of virtualized on-chip enhanced portions of the CPU. The Examiner's rationale treats Schuette's hyper-threading toggle as an equivalent to enabling or disabling on-chip accelerator functionality in a virtualized environment, which is an error. Hyper-threading mode selection is a CPU execution mode decision and is neither virtualization of heterogeneous accelerator resources nor per-request task classification. The Examiner also stretches Sharma's iterative test framework into a runtime scheduler that reacts to each resource request, which Sharma does not teach. Even under BRI, the claim terms "on-chip enhanced CPU functionality" and "VRM algorithm configured to select and enable or disable based at least in part on a task" require that accelerator functionality decisions be tied to each resource request. Reading hyper-threading mode toggling or generic core activation into on-chip enhanced CPU functionality would be unreasonable and inconsistent with the claim context and the dependent claims that reference multiple enhanced functionality types such as accelerator and compression.

In view of the above, Examiner respectfully submits the following.

As to point (I): In response to Applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which Applicant relies (see highlighted sections above) are not recited in rejected claim 1. Claim 1 recites "on-chip enhanced CPU functionality" but does not recite on-chip accelerator capability/functionality. Functionality such as accelerator and compression is recited in claim 7, and the applied art explicitly teaches this. The claimed limitation in claim 1 is taught by at least Sharma's ASICs and FPGAs, which are well-known accelerators (see pertinent prior art section). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). While the new grounds of rejection presented above render the arguments regarding Schuette moot, the references used were made of record for different claims in the previous Office Action and the mapping has been updated accordingly.

Examiner respectfully disagrees with Applicant for at least the following reasons. Upon further consideration of the amendments and previously cited art, Examiner finds the combination of Sharma and Lonappan to teach the claims as amended. Lonappan teaches an improvement on IBM's Capacity on Demand service, a service that allows cores of a processor to be activated on an as-needed basis by the user buying or leasing the additional resources. Lonappan states in at least [0006]: "Performance [of IBM Capacity on Demand] can be improved when the characteristics of a task to be performed on the additional cores are known, and corresponding processor cores are activated accordingly... Next, a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks" (emphasis added). As such, the combination of Sharma and Lonappan teaches the new limitations, as cores on a processor chip are enabled or disabled depending on the type of task being handled. Accordingly, Applicant's argument is not persuasive.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. Asaad et al. (US 2015/0046478 A1) teaches in [0025]: "The accelerator 180 may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or other suitable device that is configured to perform specific processing tasks."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CHU JOY-DAVILA, whose telephone number is (571) 270-0692. The examiner can normally be reached Monday-Friday, 6:00am-5:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee J Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JORGE A CHU JOY-DAVILA/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Mar 14, 2023: Application Filed
Jul 14, 2025: Non-Final Rejection (§103)
Oct 16, 2025: Response Filed
Oct 30, 2025: Final Rejection (§103)
Jan 05, 2026: Response after Non-Final Action
Jan 22, 2026: Request for Continued Examination
Jan 29, 2026: Response after Non-Final Action
Feb 13, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602244: OFFLOADING PROCESSING TASKS TO DECOUPLED ACCELERATORS FOR INCREASING PERFORMANCE IN A SYSTEM ON A CHIP (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596565: USER ASSIGNED NETWORK INTERFACE QUEUES (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591821: DYNAMIC ADJUSTMENT OF WELL PLAN SCHEDULES ON DIFFERENT HIERARCHICAL LEVELS BASED ON SUBSYSTEMS ACHIEVING A DESIRED STATE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585490: MIGRATING VIRTUAL MACHINES WHILE PERFORMING MIDDLEBOX SERVICE OPERATIONS AT A PNIC (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579065: LIGHTWEIGHT KERNEL DRIVER FOR VIRTUALIZED STORAGE (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 99% (+37.3% lift)
Median Time to Grant: 3y 1m
PTA Risk: High

Based on 408 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month