Prosecution Insights
Last updated: April 19, 2026
Application No. 18/243,387

AUTOMATION WITH COMPOSABLE ASYNCHRONOUS TASKS

Non-Final OA: §101, §102
Filed: Sep 07, 2023
Examiner: GILLS, KURTIS
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Salesforce Inc.
OA Round: 1 (Non-Final)

Grant Probability: 57% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 57% (307 granted / 536 resolved; +5.3% vs TC avg)
Interview Lift: +29.4% for resolved cases with interview
Typical Timeline: 3y 4m avg prosecution; 44 currently pending
Career History: 580 total applications across all art units

Statute-Specific Performance

§101: 37.5% (-2.5% vs TC avg)
§103: 42.7% (+2.7% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 536 resolved cases.
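The card's headline numbers are simple ratios that can be checked directly. The with/without-interview split below is inferred by treating the +29.4% lift as percentage points, which is an assumption; the card does not define the metric.

```python
# Reported card figures (assumption: "lift" is in percentage points).
granted, resolved = 307, 536
allow_rate = 100 * granted / resolved      # career allow rate, percent
print(round(allow_rate, 1))                # 57.3, shown on the card as 57%

with_interview = 87.0                       # grant rate with an interview (%)
lift = 29.4                                 # reported interview lift (points)
implied_without = with_interview - lift     # implied rate without an interview
print(round(implied_without, 1))            # 57.6
```

The implied without-interview rate (~57.6%) sits close to the overall career allow rate, which is consistent with interviews being the exception rather than the rule in this examiner's docket.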

Office Action

§101, §102
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Notice to Applicant

In response to the communication received on 09/07/2023, the following is a Non-Final Office Action for Application No. 18/243,387.

Status of Claims

Claims 1-20 are pending.

Drawings

The applicant's drawings submitted on 09/07/2023 are acceptable for examination purposes.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims fall within the statutory classes of process or machine; hence, the claims satisfy Step 1 of the eligibility analysis. Step 2 is the two-part analysis from Alice Corp. (also called the Mayo test). The 2019 PEG sets forth a revised procedure for Step 2A ("revised Step 2A") under which a claim is not "directed to" a judicial exception unless the claim satisfies a two-prong inquiry. The two-prong inquiry is as follows: Prong One: evaluate whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon). If the claim recites an exception, then Prong Two: evaluate whether the claim recites additional elements that integrate the exception into a practical application of the exception.
The claim(s) recite(s) the following abstract idea indicated by non-boldface font and additional limitations indicated by boldface font:

A computer-implemented method comprising: receiving, at a computing device, a prompt; determining, by the computing device using a first large language model (LLM), from the prompt, at least two composable asynchronous tasks, wherein a first composable asynchronous task of the at least two composable asynchronous tasks uses a second LLM; and performing, by the computing device, the at least two composable asynchronous tasks, wherein performing the first composable asynchronous task of the at least two composable asynchronous tasks comprises: generating a first output with the second LLM based on the prompt, and validating the first output of the LLM, and wherein performing a second composable asynchronous task of the at least two composable asynchronous tasks comprises: generating a second output using the first output.

[or]

A computer-implemented system comprising: a storage; and one or more processors that receive a prompt; determine using a first large language model (LLM), from the prompt, at least two composable asynchronous tasks, wherein a first composable asynchronous task of the at least two composable asynchronous tasks uses a second LLM; and perform the at least two composable asynchronous tasks, wherein the first composable asynchronous task of the at least two composable asynchronous tasks is performed by: generating a first output with the second LLM based on the prompt, and validating the first output of the LLM, and wherein performing a second composable asynchronous task of the at least two composable asynchronous tasks comprises: generating a second output using the first output.
[or]

A system comprising: one or more computers and one or more non-transitory storage devices storing instructions which are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: receiving, at a computing device, a prompt; determining, by the computing device using a first large language model (LLM), from the prompt, at least two composable asynchronous tasks, wherein a first composable asynchronous task of the at least two composable asynchronous tasks uses a second LLM; and performing, by the computing device, the at least two composable asynchronous tasks, wherein performing the first composable asynchronous task of the at least two composable asynchronous tasks comprises: generating a first output with the second LLM based on the prompt, and validating the first output of the LLM, and wherein performing a second composable asynchronous task of the at least two composable asynchronous tasks comprises: generating a second output using the first output.

The claim(s) recite(s) the following summarization of the abstract idea, which includes generating a first output with a second LLM based on a prompt and validating the first output of the LLM, via the additional element(s) of computing device, storage, processor, non-transitory storage device, and/or computer. This falls into at least the Abstract Idea Grouping of Mental Processes, since the information can be analyzed by an abstract evaluation/judgment process. Thus, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of: Mathematical Concepts, Mental Processes, or Certain Methods of Organizing Human Activity, since the identified recitation falls within the Mental Processes, including concepts performed in the human mind (including an observation, evaluation, judgment, opinion).
Per Prong One of Step 2A, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of: Mathematical Concepts, Mental Processes, or Certain Methods of Organizing Human Activity. Particularly, the identified recitation falls within the Mental Processes, including concepts performed in the human mind (including an observation, evaluation, judgment, opinion).

Per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The computing device, storage, processor, non-transitory storage device, and/or computer is recited at a high level of generality, i.e., as a generic processor performing a generic computer function of processing/transmitting data. This generic computing device, storage, processor, non-transitory storage device, and/or computer limitation is no more than mere instructions to apply the exception using a generic computer component. Further, generating a first output with the second LLM by a computing device, storage, processor, non-transitory storage device, and/or computer is mere instruction to apply an exception using a generic computer component, which cannot integrate a judicial exception into a practical application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, since the claims are directed to the determined judicial exception in view of the two prongs of Step 2A, the 2019 PEG flowchart proceeds to Step 2B.

Per Step 2B, the additional elements and combinations therewith are examined in the claims to determine whether the claims as a whole amount to significantly more than the judicial exception.
It is noted here that the additional elements are to be considered both individually and as an ordered combination. In this case, the claims each at most comprise additional elements of: computing device, storage, processor, non-transitory storage device, and computer. Taken individually, the additional limitations are each generically recited and thus do not add significantly more to the respective limitations. Further, generating a first output with the second LLM by a computing device, storage, processor, non-transitory storage device, and/or computer is mere instruction to apply an exception using a generic computer component, which cannot provide an inventive concept in Step 2B (or, looking back to Step 2A, cannot integrate a judicial exception into a practical application). For further support, the Applicant's specification supports the claims being directed to use of a generic computer/memory type structure at ¶58, wherein "Implementations may be implemented using hardware that may include a processor, such as a general purpose microprocessor and/or an Application Specific Integrated Circuit (ASIC)."

Taken as an ordered combination, the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because the limitations are among those referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, including, as non-limiting, non-exclusive examples:

i. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));

ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 134 S. Ct. at 2359-60, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));

iii. Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or

iv. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook.

The courts have recognized the following computer functions, inter alia, to be well-understood, routine, and conventional functions when they are claimed in a merely generic manner: performing repetitive calculations; receiving, processing, and storing data (e.g., the present claims); electronically scanning or extracting data; electronic recordkeeping; automating mental tasks (e.g., process/machine/manufacture for performing the present claims); and receiving or transmitting data (e.g., the present claims).
The dependent claims do not cure the above-stated deficiencies; in particular, the dependent claims further narrow the abstract idea without reciting additional elements that integrate the exception into a practical application of the exception or providing significantly more than the abstract idea. Since there are no elements or ordered combination of elements that amount to significantly more than the judicial exception, the claims are not eligible subject matter under 35 USC § 101. Thus, viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amount(s) to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Vidan et al. (US 20250013437 A1) hereinafter referred to as Vidan. Vidan teaches: Claim 1. A computer-implemented method comprising: receiving, at a computing device, a prompt (¶0050 The integration of Flow-Based Programming with large language models within the Just-In-Time Programming framework offers several benefits that enhance task-time development: (a) Modularity and Reusability: Flow-Based Programming's component-based design fosters modularity and code reusability. Components can be easily connected and combined, allowing users to create flexible and scalable solutions. This modularity also enables incremental development and iterative improvements, aligning well with the Just-In-Time Programming approach ¶0083 AI assistant 102 may predict that the user is most probable to begin with a Data Ingestion block, and may suggest a filtered list of block templates 112 to select. In the alternative, the user can issue a prompt to AI assistant 102 specifying a task to be performed, and AI assistant 102 may generate code to be plugged into the program under developed. In either event, in FIG. 2b, the user has selected “ODBC Database Query Functional block” 222 to ingest data from a database. ¶¶0123-0129 Visually, a flow-based program generated by a purpose-built LLM based on the prompt given in 0069 is shown as a Flow-Based Program in FIG. 9B...To summarize, a powerful Just-In-Time Computing Framework may use trained LLM combined with a Flow-Based Programming model, may work as follows: 1. 
End-user (non-technical or technical user) defines a flow-based execution structure (Flow-Based Program) utilizing pre-built components (functional blocks or Modules) 2. As part of the Flow-Based Program, end-user inserts one or more prompts for specific tasks or subtasks to be completed 3. Purpose-built LLM generates a Flow-Based Program for each prompt 4. Flow-Based Programs are visually represented (and available for inspection even by a non-technical user) 5. Child Flow-Based Programs are executed according to the defined Flow-Based Program); determining, by the computing device using a first large language model (LLM), from the prompt, at least two composable asynchronous tasks, wherein a first composable asynchronous task of the at least two composable asynchronous tasks uses a second LLM (¶0041 A Large Language Model Integration component may use one or more pre-trained LLMs to generate task instructions and perform language-based tasks. Large Language Model Integration may integrate a pre-trained LLM (e.g., OpenAI's GPT-4) into the framework. A suitable model may be selected based on its ability to understand and generate human-like text that instructs instructions based on input data, predefined templates, and context. Contextual understanding may be achieved through fine-tuning the models on domain-specific data.. ¶0044 Monitoring may include task completion time, resource utilization, and error rates. Feedback data may be used to retrain the LLMs and optimize the Flow-Based Programming graph. Retraining may improve the accuracy and relevance of task instructions. Optimization may involve refining node connections and data flows to enhance system performance. Anomaly detection algorithms may identify and address deviations in task execution, and initiate corrective actions to maintain system reliability ¶¶0123-0129 Visually, a flow-based program generated by a purpose-built LLM based on the prompt given in 0069 is shown as a Flow-Based Program in FIG. 
9B...To summarize, a powerful Just-In-Time Computing Framework may use trained LLM combined with a Flow-Based Programming model, may work as follows: 1. End-user (non-technical or technical user) defines a flow-based execution structure (Flow-Based Program) utilizing pre-built components (functional blocks or Modules) 2. As part of the Flow-Based Program, end-user inserts one or more prompts for specific tasks or subtasks to be completed 3. Purpose-built LLM generates a Flow-Based Program for each prompt 4. Flow-Based Programs are visually represented (and available for inspection even by a non-technical user) 5. Child Flow-Based Programs are executed according to the defined Flow-Based Program ¶0131 we outline the steps involved in integrating LLMs with Flow-Based Programming to develop an effective Just-in-Time Programming framework. 1. Identify Task-Specific LLMs: Begin by identifying the LLMs that are most relevant to the specific task domain. Select LLMs that align with the programming language or task requirements to enhance the Just-in-Time Programming capabilities. (See § III.C.6 below.) 2. Define LLM Components: Next, define LLM components within the Flow-Based Programming framework. These components encapsulate the interactions with LLMs, such as sending input text, retrieving generated code or responses, and managing the LLM state…); and performing, by the computing device, the at least two composable asynchronous tasks, wherein performing the first composable asynchronous task of the at least two composable asynchronous tasks comprises: generating a first output with the second LLM based on the prompt, and validating the first output of the LLM, and wherein performing a second composable asynchronous task of the at least two composable asynchronous tasks comprises: generating a second output using the first output (¶0048 Flow-Based Programming may have the following advantages. 
Flow-Based Programming encourages breaking down a system into smaller, self-contained components. These components have well-defined inputs and outputs, facilitating modularity, code reuse, and easy maintenance. Flow-Based Programming emphasizes the flow of data streams between components. ¶0049 Components can receive input data, process it, and produce output data that is then passed to downstream components. The connections between components define the flow of data, allowing for flexible and reactive execution. Flow-Based Programming may promote an asynchronous and reactive execution model. Components react to incoming data, processing it as soon as it becomes available, enabling real-time responsiveness and dynamic task adaptation. ¶¶0196-0197 The model's performance may be validated using a separate validation set, and specifically verify its ability to generate accurate and contextually relevant workflow code. Hyperparameters can be adjusted, and model retraining can be performed as necessary. Based on validation results, fine-tuning may be performed in an iterative fashion, by refining the training data and model configurations. This may involve additional rounds of data collection, annotation, and cleaning). Vidan teaches: Claim 2. The computer-implemented method of claim 1, wherein the first output comprises computer code for a component of an application, and wherein the second output comprises the component of the application (¶0048 Flow-Based Programming may have the following advantages. Flow-Based Programming encourages breaking down a system into smaller, self-contained components. These components have well-defined inputs and outputs, facilitating modularity, code reuse, and easy maintenance. Flow-Based Programming emphasizes the flow of data streams between components. ¶0049 Components can receive input data, process it, and produce output data that is then passed to downstream components. 
The connections between components define the flow of data, allowing for flexible and reactive execution. Flow-Based Programming may promote an asynchronous and reactive execution model. Components react to incoming data, processing it as soon as it becomes available, enabling real-time responsiveness and dynamic task adaptation.). Vidan teaches: Claim 3. The computer-implemented method of claim 1, wherein generating the second output using the first output further comprises combining the first output with at least one additional output of at least one additional composable asynchronous task of the at least two composable asynchronous tasks (¶¶0140-0141 FIG. 7A shows a program that receives input from two sources, aggregates the two sources to join them, applies a filter, and generates some form of output for storage or dissemination. As shown in FIG. 7B, a Module takes in zero or more inputs, and produces one or many outputs. These outputs can then be connected to any number of other Module inputs.). Vidan teaches: Claim 4. The computer-implemented method of claim 1, wherein validating the first output of the LLM comprises:generating tests for the first output, and performing the tests on the first output (¶0094 FIG. 4A shows a Flow-Based Program showing a Just-in-Time program to test whether an input integer is prime. In this example, we request a just-in-time algorithm for determining whether an input number is prime. Here, the Just-in-Time system is supplemented with a more generalized Python Scripter and Executor Modules that can generate and accept any Python script and any given number of inputs ¶ 0169 Also, as we saw in the above examples, as we move from simple algorithms (for arithmetic operations), to more complex algorithms (primality test), to complex data manipulation (finding duplicates), the generated code from the LLM becomes more complex. This requires an expert programmer to read the code, check it for accuracy, and test it.). 
Vidan teaches: Claim 5. The computer-implemented method of claim 4, further comprising:determining that the first output fails at least one of the tests, and regenerating the first output with the second LLM based on the at least one of the tests failed by the first output (¶¶0171-0173 As with any other software development process, Just-in-Time Programming requires trust that the software performs its intended functions correctly and predictably, and that the resulting end-to-end system delivers accurate results, responds to inputs appropriately, and operates without unexpected failures or errors. To improve trust, the LLM may generate not just a block of text to be used as executable code, but rather generate a complete, visual, flow-based program, a visual algorithm that includes pre-defined functional blocks (Modules), to ensure consistency, accuracy and reliability. Our approach leverages two key features of Flow-Based Programming: (a) Strongly Typed Modules: the Flow-Based Programming framework may enforce strong typing of modules, ensuring that data types are explicitly defined and consistent throughout the DataFlow. (b) Loose Coupling: the Flow-Based Programming framework may promote loose coupling between modules, meaning that modules are decoupled from each other and communicate through well-defined data interfaces). Vidan teaches: Claim 6. The computer-implemented method of claim 1, wherein performing the first composable asynchronous task of the at least two composable asynchronous tasks further comprises:generating a sub-prompt based on the prompt and the first composable asynchronous task; and inputting the sub-prompt to the first composable asynchronous task of the at least two composable asynchronous tasks (¶0035 AI assistant 102 may assist by recommending specific edges to the graph, to connect the blocks. 
Likewise, a user may issue a prompt to AI assistant 102 specifying a function to be performed, and AI assistant 102 may return code to be plugged into the program under development. AI assistant 102 may be implemented as a trained large language model ¶0083 AI assistant 102 may predict that the user is most probable to begin with a Data Ingestion block, and may suggest a filtered list of block templates 112 to select. In the alternative, the user can issue a prompt to AI assistant 102 specifying a task to be performed, and AI assistant 102 may generate code to be plugged into the program under developed. In either event, in FIG. 2b, the user has selected “ODBC Database Query Functional block” 222 to ingest data from a database. ¶0093 FIG. 3C shows the new output given the slightly altered prompt requesting subtraction rather than addition, showing the different code being generated just in time, based on the new request.).

Vidan teaches: Claim 7. The computer-implemented method of claim 1, wherein the composable asynchronous composable asynchronous task has a defined input structure and a defined output structure (¶0045 Just-In-Time Programming may provide a framework that provides a structured approach to building software applications that is responsive to any user input. A Just-in-Time Programming framework may be based on integration of Flow-Based Programming techniques and Large Language Models ¶0165 The generated text from the LLC may be reformed into a structured format that the Composable DataFlow engine can understand and execute as code. This may involve parsing JSON or another structured output format.).

As per claims 8-14 and 15-20, the system claims track the methods of claims 1-7 and 1-6, respectively, resulting in substantially similar limitations. The same cited prior art and rationale of claims 1-7 and 1-6 are applied to claims 8-14 and 15-20, respectively. Vidan discloses that the embodiment may be found as a system (Fig. 1a and ¶0004).
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

- US 20240354567 A1 (DUGGAL, Dave M. et al.): KNOWLEDGE-DRIVEN AUTOMATION PLATFORM TO CONNECT, CONTEXTUALIZE, AND CONTROL ARTIFICIAL INTELLIGENCE TECHNOLOGIES INCLUDING GENERATIVE AI REPRESENTING A PRACTICAL IMPLEMENTATION OF NEURO-SYMBOLIC AI
- WO 2024175935 A1 (DIETRICK, ELISE et al.): METHOD AND SYSTEM FOR PROVIDING A COMPUTER-IMPLEMENTED TEACHING ENVIRONMENT
- US 10860373 B2 (Akella, Jahnavi et al.): Enhanced governance for asynchronous compute jobs
- NPL: Lukasz Ziarek, KC Sivaramakrishnan, Suresh Jagannathan, "Composable Asynchronous Events"

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KURTIS GILLS, whose telephone number is (571) 270-3315. The examiner can normally be reached M-F 8-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jerry O'Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KURTIS GILLS/
Primary Examiner, Art Unit 3624
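The independent claims quoted in the office action describe a concrete pipeline: a first ("planner") LLM decomposes a prompt into at least two composable asynchronous tasks, a second LLM generates an output that is then validated, and a downstream task consumes that output. A minimal sketch of that flow, with stub functions standing in for the LLMs (all names here are illustrative, not from the application):

```python
import asyncio

# Stub stand-ins for the two LLMs recited in claim 1; a real system would
# call actual model endpoints. All names here are hypothetical.
def planner_llm(prompt):
    """First LLM: determine at least two composable asynchronous tasks."""
    return ["generate_and_validate", "postprocess"]

def worker_llm(prompt):
    """Second LLM: generate a first output based on the prompt."""
    return f"draft for: {prompt}"

def validate(output):
    """Validation step recited in claim 1 (here, a trivial sanity check)."""
    return isinstance(output, str) and len(output) > 0

async def first_task(prompt):
    # Generate the first output with the second LLM, then validate it.
    out = worker_llm(prompt)
    if not validate(out):
        raise ValueError("validation failed")
    return out

async def second_task(first_output):
    # Generate a second output using the first output.
    return first_output.upper()

async def run(prompt):
    tasks = planner_llm(prompt)   # determine the composable async tasks
    assert len(tasks) >= 2        # the claim requires at least two
    first = await first_task(prompt)
    return await second_task(first)

result = asyncio.run(run("write a greeting"))
print(result)  # prints "DRAFT FOR: WRITE A GREETING"
```

The second task awaits the first because the claim makes the second output depend on the first; independent tasks could instead run concurrently with `asyncio.gather`.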
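Dependent claims 4-5 (generate tests for the first output, run them, and regenerate on failure) amount to a test-and-retry loop around the second LLM. A hedged sketch under that reading, with a toy stand-in for the model (`toy_llm` and the test names are invented for illustration):

```python
def generate_with_retry(prompt, llm, tests, max_attempts=3):
    """Claims 4-5 pattern: run tests on the LLM output and, on failure,
    regenerate conditioned on which tests failed. Illustrative only."""
    feedback = None
    for _ in range(max_attempts):
        out = llm(prompt, feedback)
        failed = [name for name, test in tests if not test(out)]
        if not failed:
            return out
        feedback = failed  # regenerate "based on the at least one of the tests failed"
    raise RuntimeError("output never passed its tests")

# Toy stand-in: produces a bad draft until it receives failure feedback.
def toy_llm(prompt, feedback):
    return "valid output" if feedback else "oops"

tests = [
    ("non-empty", lambda o: bool(o)),
    ("no error marker", lambda o: "oops" not in o),
]
result = generate_with_retry("demo", toy_llm, tests)
print(result)  # prints "valid output" on the second attempt
```

The `max_attempts` bound is a practical safeguard, not a claim limitation; the claims recite only the regeneration step itself.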

Prosecution Timeline

Sep 07, 2023
Application Filed
Jan 08, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602664: INTELLIGENT MEETING TIMESLOT ANALYSIS AND RECOMMENDATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12572864: AVOIDING PROHIBITED SEQUENCES OF MATERIALS PROCESSING AT A CRUSHER USING PREDICTIVE ANALYTICS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572872: Mine Management System (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567013: METHOD AND SYSTEM FOR SOLVING SUBSET SUM MATCHING PROBLEM USING DYNAMIC PROGRAMMING APPROACH (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561703: SYSTEM AND METHOD FOR PERSONA GENERATION (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 57%
With Interview: 87% (+29.4%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 536 resolved cases by this examiner. Grant probability derived from career allow rate.
