Prosecution Insights
Last updated: April 19, 2026
Application No. 18/674,211

TECHNIQUES FOR TRAINING AND VALIDATING AN OPTIMIZED MACHINE LEARNING MODEL

Status: Non-Final OA (§103)
Filed: May 24, 2024
Examiner: WEAVER, ADAM MICHAEL
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: Layer Health Inc.
OA Round: 1 (Non-Final)

Grant Probability: 92% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (above average; 11 granted / 12 resolved; +29.7% vs TC avg)
Interview Lift: +20.0% (strong; resolved cases with vs. without interview)
Avg Prosecution: 2y 9m (typical timeline)
Currently Pending: 27
Total Applications: 39 (career history, across all art units)

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 44.7% (+4.7% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 12 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/19/2024 and 08/13/2025 have been considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 8-10, 12-13, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. ("Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models", 10/20/2023), hereinafter referred to as Wang, in view of Thompson et al. ("Large Language Models with Retrieval-Augmented Generation for Zero-Shot Disease Phenotyping", 2023), hereinafter referred to as Thompson.

Regarding claim 1, Wang discloses in response to a heuristic based on the responses returned by the LLM indicating a readiness for optimization ("Similarly, we assume that the process of attaining the seed data set (denoted as {S_i}_{i=1}^{n_1}), where n_1 is the number of seed data points, is to draw n_1 i.i.d. samples from our seed data distribution P_LLM^(0)," Wang pg. 5); training a new model on the queries and respective responses (Wang Figure 2 shows training a small model on a dataset that is created by prompting an LLM for rationales on queries), the new model being more computationally efficient than the LLM ("S3 has high practical potential in many real-world applications because it can effectively (i.e., with better performance) and efficiently (i.e., with improved data efficiency) transfer knowledge in an extremely large model (e.g., GPT 3.5) to a small model (e.g., DistilBert), achieving data efficiency and computation efficiency at the same time," Wang pg. 9); performing a validation operation on additional responses returned by the new model ("Train the small model with the synthesized dataset, then validate the small model on real-world validation data, and attain misclassified data of the small model, use them as errors," Wang Methodology pg. 3); and in response to the validation operation indicating that the new model has satisfied a condition, sending future queries to the new model instead of the LLM ("Combine the additional dataset and original dataset as a new synthesized train dataset for the small model, then repeat steps 2 and 3 for multiple rounds until the performance of the small model converges," Wang Methodology pg. 3; it would be obvious to begin using the small model once its performance converges, as that means it is performing above a certain threshold and thereby more optimized to perform the task it has been trained to do).

However, Wang fails to disclose a method performed by a computing system of responding to questions about documents, the method comprising: initially sending queries to a large language model (LLM), each query including a request for any snippets within a corresponding document that provide evidence in response to a question, the LLM returning a respective response to each query. Thompson discloses a retrieval-augmented generation (RAG) method for disease phenotyping.
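The train/validate/switch flow the examiner reads onto claim 1 from Wang can be sketched roughly as follows. This is an illustrative sketch only: the class name, the `min_examples` heuristic, the `threshold`, and the trainer/validator callables are all hypothetical, not drawn from either reference.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class QueryRouter:
    """Hypothetical sketch: log LLM query/response pairs, train a cheaper
    model once a readiness heuristic fires, and reroute future queries to
    the new model after a validation condition is satisfied."""
    llm: Callable[[str], str]                # the large model
    train: Callable[[list], Callable]        # returns a trained small model
    validate: Callable[[Callable], float]    # returns a validation score
    min_examples: int = 3                    # readiness heuristic: enough data
    threshold: float = 0.9                   # validation condition
    pairs: List[Tuple[str, str]] = field(default_factory=list)
    small: Optional[Callable[[str], str]] = None

    def answer(self, query: str) -> str:
        if self.small is not None:           # condition already satisfied:
            return self.small(query)         # future queries go to new model
        response = self.llm(query)           # otherwise keep using the LLM
        self.pairs.append((query, response))
        if len(self.pairs) >= self.min_examples:   # heuristic indicates readiness
            candidate = self.train(self.pairs)
            if self.validate(candidate) >= self.threshold:
                self.small = candidate       # switch routing going forward
        return response
```

If validation never clears the threshold, the router simply keeps sending queries to the LLM, which mirrors the claim 9 limitation discussed below.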
Thompson teaches a method performed by a computing system of responding to questions about documents, the method comprising: initially sending queries to a large language model (LLM) (Thompson Querying pg. 3), each query including a request for any snippets within a corresponding document that provide evidence in response to a question ("To address this challenge, we implemented a RAG approach based on regular expressions (Regex) to identify relevant snippets from the text," Thompson Retrieval pg. 2; "To address this, we employed a MapReduce approach where each snippet is concurrently presented as context to the LLM, along with a set of instructions to facilitate decision-making," Thompson Querying pg. 3), the LLM returning a respective response to each query ("Generated outputs, one per snippet, are then aggregated to form the final decision," Thompson pg. 3).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang's teaching of training a small model from synthesized prompts of larger models by including Thompson's teaching of using RAG to query an LLM. RAG allows LLMs and other language models to reduce hallucinations and provide citations, increasing the models' accuracy and efficacy. Using this sort of process to train a smaller model would allow the smaller model to learn from these citation processes and carry the increased accuracy over to the newly trained small model.

Regarding claim 2, Wang, in view of Thompson, discloses all of the limitations of claim 1. Wang further discloses wherein the heuristic ensures that there are at least a threshold number of queries with respective responses prior to indicating readiness for optimization ("Similarly, we assume that the process of attaining the seed data set (denoted as {S_i}_{i=1}^{n_1}), where n_1 is the number of seed data points, is to draw n_1 i.i.d. samples from our seed data distribution P_LLM^(0)," Wang pg. 5, and "For instance, for the IMDb dataset, a sentiment analysis dataset on movie reviews, T_ration(y_i = positive/negative) is: 'What is the reason that may lead to a positive/negative movie review.' and the T_query(r_curr, positive) is: 'Now imagine that you just watched a movie that has great acting, intriguing plot, and beautiful cinematography. Now you should write a positive review about this movie.' We use the prompt as an input to the LLM and obtain the target output as the synthesized pseudo example. This 'explain-then-generate' approach enables us to generate more diverse, informative, and realistic examples," Wang pg. 3; this implies that the seed dataset is generated from each input of a corresponding dataset, i.e., the threshold number of queries is equal to the number of points in a used dataset).

Regarding claim 8, Wang, in view of Thompson, discloses all of the limitations of claim 1. Wang further discloses wherein the new model is a sequence classification model (Wang Figure 2 shows training a small model on a dataset that is created by prompting an LLM for rationales on queries).

Regarding claim 9, Wang, in view of Thompson, discloses all of the limitations of claim 1. Wang further discloses wherein the method further comprises, in response to the validation operation indicating that the new model has not satisfied the condition, continuing to send future queries to the LLM until the validation operation indicates that the new model has satisfied the condition ("Combine the additional dataset and original dataset as a new synthesized train dataset for the small model, then repeat steps 2 and 3 for multiple rounds until the performance of the small model converges," Wang Methodology pg. 3; it would be obvious to continue to train the small model before its performance converges, as that means it is not yet fully optimized to perform the tasks it is being trained to perform).
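The regex-based snippet retrieval and MapReduce aggregation that the examiner quotes from Thompson can be sketched, purely illustratively, as below. The function name, the sentence-splitting heuristic, and the any-vote aggregation are hypothetical simplifications; Thompson's actual pipeline may aggregate differently.

```python
import re
from typing import Callable, List

def phenotype_decision(document: str, pattern: str,
                       ask_llm: Callable[[str], bool]) -> bool:
    """Hypothetical sketch of a regex-RAG + MapReduce flow: pull candidate
    snippets with a regular expression, present each snippet to the LLM
    independently ("map"), then aggregate per-snippet outputs ("reduce")."""
    # Retrieval: keep only sentences matching the regular expression.
    snippets: List[str] = [s.strip() for s in document.split(".")
                           if re.search(pattern, s, re.IGNORECASE)]
    # Map: one LLM call per snippet, each with the snippet as context.
    votes = [ask_llm(snippet) for snippet in snippets]
    # Reduce: aggregate the per-snippet outputs into one final decision
    # (here: any snippet providing positive evidence).
    return any(votes)
```

Note that when the regex retrieves no snippets, the decision defaults to negative without any LLM calls, which is one way the retrieval step bounds query cost.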
Regarding claim 10, Wang, in view of Thompson, discloses all of the limitations of claim 1. However, Wang fails to disclose wherein each query relates to a medical question and each corresponding document includes one or more medical notes. Thompson teaches wherein each query relates to a medical question and each corresponding document includes one or more medical notes (Thompson Fig. 1 shows both the query and the retrieval of the document relating to medical questions and medical documents, respectively). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang's teaching of training a small model from synthesized prompts of larger models by including Thompson's teaching of using RAG to query an LLM to formulate answers to medical questions. Using this process for medical queries allows for evidence-based answering of those questions through the utilization of health records and clinical literature. This provides more trustworthy answers and responses to these queries since the model is able to cite its sources.

As to claims 12-13, computer program product claims 12-13 and method claims 1-2 are related as method and computer program product of using same, with each claimed element's function corresponding to the method step, respectively. Accordingly, claims 12-13 are similarly rejected under the same rationale as applied above with respect to the method claims.

As to claim 19, computer program product claim 19 and method claim 8 are related as method and computer program product of using same, with each claimed element's function corresponding to the method step. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to the method claim.

As to claim 20, system claim 20 and method claim 1 are related as method and system of using same, with each claimed element's function corresponding to the method step. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to the method claim.

Claim(s) 3-5, 11, and 14-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of Thompson, and further in view of Stremmel et al. (US Patent Application Publication No. 2025/0348708), hereinafter referred to as Stremmel.

Regarding claim 3, Wang, in view of Thompson, discloses all of the limitations of claim 2. Wang further discloses and the heuristic further ensures that each classification of the plurality of different classifications has been determined at least a threshold number of times prior to indicating readiness for optimization ("Similarly, we assume that the process of attaining the seed data set (denoted as {S_i}_{i=1}^{n_1}), where n_1 is the number of seed data points, is to draw n_1 i.i.d. samples from our seed data distribution P_LLM^(0)," Wang pg. 5, and "For instance, for the IMDb dataset, a sentiment analysis dataset on movie reviews, T_ration(y_i = positive/negative) is: 'What is the reason that may lead to a positive/negative movie review.' and the T_query(r_curr, positive) is: 'Now imagine that you just watched a movie that has great acting, intriguing plot, and beautiful cinematography. Now you should write a positive review about this movie.' We use the prompt as an input to the LLM and obtain the target output as the synthesized pseudo example. This 'explain-then-generate' approach enables us to generate more diverse, informative, and realistic examples," Wang pg. 3; this implies that the seed dataset is generated from each input of a corresponding dataset, i.e., the threshold number of queries is equal to the number of points in a used dataset).
However, Wang fails to disclose each response having one or more snippets is sent to a second LLM requesting a determination of a classification of a plurality of different classifications, the determined classification answering the question based on the one or more snippets. Stremmel discloses methods of prompt engineering and extractive LLM techniques. Stremmel teaches each response having one or more snippets is sent to a second LLM requesting a determination of a classification of a plurality of different classifications, the determined classification answering the question based on the one or more snippets (Stremmel paras [0151]-[0152]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang's teaching of training a small model from synthesized prompts of larger models by including Stremmel's teaching of using a second LLM to classify the outputs of multiple prior LLMs. Utilizing another LLM to classify the output of prior LLMs, as well as to answer the original query, can help reduce error and increase the efficiency of training the new model. Reviewing the outputs from both the original LLM and the actively training new model will help to reduce the error in the new model.

Regarding claim 4, Wang, in view of Thompson, discloses all of the limitations of claim 1. Wang further discloses and the heuristic ensures that each classification of the plurality of different classifications has been determined at least a threshold number of times prior to indicating readiness for optimization ("Similarly, we assume that the process of attaining the seed data set (denoted as {S_i}_{i=1}^{n_1}), where n_1 is the number of seed data points, is to draw n_1 i.i.d. samples from our seed data distribution P_LLM^(0)," Wang pg. 5, and "For instance, for the IMDb dataset, a sentiment analysis dataset on movie reviews, T_ration(y_i = positive/negative) is: 'What is the reason that may lead to a positive/negative movie review.' and the T_query(r_curr, positive) is: 'Now imagine that you just watched a movie that has great acting, intriguing plot, and beautiful cinematography. Now you should write a positive review about this movie.' We use the prompt as an input to the LLM and obtain the target output as the synthesized pseudo example. This 'explain-then-generate' approach enables us to generate more diverse, informative, and realistic examples," Wang pg. 3; this implies that the seed dataset is generated from each input of a corresponding dataset, i.e., the threshold number of queries is equal to the number of points in a used dataset).

However, Wang fails to disclose each response having one or more snippets is sent to a second LLM requesting a determination of a classification of a plurality of different classifications, the determined classification answering the question based on the one or more snippets. Stremmel teaches each response having one or more snippets is sent to a second LLM requesting a determination of a classification of a plurality of different classifications, the determined classification answering the question based on the one or more snippets (Stremmel paras [0151]-[0152]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang's teaching of training a small model from synthesized prompts of larger models by including Stremmel's teaching of using a second LLM to classify the outputs of multiple prior LLMs. Utilizing another LLM to classify the output of prior LLMs, as well as to answer the original query, can help reduce error and increase the efficiency of training the new model.
Reviewing the outputs from both the original LLM and the actively training new model will help to reduce the error in the new model.

Regarding claim 5, Wang, in view of Thompson, discloses all of the limitations of claim 1. However, Wang fails to disclose the method further comprises sending each response having one or more snippets and each additional response having one or more snippets to another LLM, the other LLM returning a respective structured result in response, each returned structured result answering the question based on the corresponding one or more snippets; performing the validation operation includes: comparing (i) the structured results (hereinafter the initial structured results) returned by the other LLM in response to the responses to (ii) the structured results (hereinafter the new structured results) returned by the other LLM in response to the additional responses on a per query basis; and for each initial structured result that differs in substance from the new structured result for the same query, performing an adjudication operation, the adjudication operation assigning either the LLM or the new model as producing a superior response for that query; and the validation operation indicates that the new model has satisfied the condition when a ratio of assignments by the adjudication operation to the new model against assignments by the adjudication operation to the LLM exceeds a threshold ratio.

Stremmel teaches the method further comprises sending each response having one or more snippets and each additional response having one or more snippets to another LLM, the other LLM returning a respective structured result in response, each returned structured result answering the question based on the corresponding one or more snippets (Stremmel paras [0151]-[0152]); performing the validation operation includes: comparing (i) the structured results (hereinafter the initial structured results) returned by the other LLM in response to the responses to (ii) the structured results (hereinafter the new structured results) returned by the other LLM in response to the additional responses on a per query basis ("In some embodiments, the term 'resolution capability iterations' may refer to a computer-implemented process for providing feedback to an LLM based on a machine learning classification model. A resolution capability iteration may include comparing the resolution capability classification output of the machine learning classification model for an input question to the predictive output classification of the LLM for the input question in order to determine whether the outputs converge or diverge," Stremmel para [0092]); and for each initial structured result that differs in substance from the new structured result for the same query, performing an adjudication operation, the adjudication operation assigning either the LLM or the new model as producing a superior response for that query (Stremmel para [0090], and Stremmel Fig. 8 shows generating a resolution capability classification 804 to compare to a generated predictive output 806; if it diverges 808, then the prompt is updated, indicating that in this case the initial model's response of the resolution capability classification is superior); and the validation operation indicates that the new model has satisfied the condition when a ratio of assignments by the adjudication operation to the new model against assignments by the adjudication operation to the LLM exceeds a threshold ratio ("In some examples, the LLM agent model may be configured to perform one or more resolution capability iterations until a resolution capability stop condition is satisfied and provide feedback to the LLM for each resolution capability iteration. In some examples, the resolution capability stop condition may be a threshold number of one or more resolution capability iterations (e.g., N maximum iterations, where N is an integer) or a classification model output convergence," Stremmel para [0091]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang's teaching of training a small model from synthesized prompts of larger models by including Stremmel's teaching of comparing the outputs of the LLMs. Comparing the respective outputs of both the LLM and the new model that is actively being trained is a quick and efficient way to determine the quality of their outputs. This would help to increase the accuracy of the new model, as well as reduce common LLM pitfalls such as hallucination and bias.

Regarding claim 11, Wang, in view of Thompson, discloses all of the limitations of claim 1.
Wang further discloses in response to another heuristic based on the structured results returned by the other LLM indicating a readiness for optimization ("Similarly, we assume that the process of attaining the seed data set (denoted as {S_i}_{i=1}^{n_1}), where n_1 is the number of seed data points, is to draw n_1 i.i.d. samples from our seed data distribution P_LLM^(0)," Wang pg. 5); training another new model on the responses (Wang Figure 2 shows training a small model on a dataset that is created by prompting an LLM for rationales on queries), the other new model being more computationally efficient than the other LLM ("S3 has high practical potential in many real-world applications because it can effectively (i.e., with better performance) and efficiently (i.e., with improved data efficiency) transfer knowledge in an extremely large model (e.g., GPT 3.5) to a small model (e.g., DistilBert), achieving data efficiency and computation efficiency at the same time," Wang pg. 9); performing another validation operation on additional structured results returned by the other new model ("Train the small model with the synthesized dataset, then validate the small model on real-world validation data, and attain misclassified data of the small model, use them as errors," Wang Methodology pg. 3); and in response to the validation operation indicating that the other new model has satisfied another condition, sending future responses to the other new model instead of the other LLM ("Combine the additional dataset and original dataset as a new synthesized train dataset for the small model, then repeat steps 2 and 3 for multiple rounds until the performance of the small model converges," Wang Methodology pg. 3; it would be obvious to begin using the small model once its performance converges, as that means it is performing above a certain threshold and thereby more optimized to perform the task it has been trained to do).
However, Wang fails to disclose initially sending each response having one or more snippets to another LLM requesting a determination of a structured result, the determined structured result answering the question based on the one or more snippets. Stremmel teaches initially sending each response having one or more snippets to another LLM requesting a determination of a structured result, the determined structured result answering the question based on the one or more snippets (Stremmel paras [0151]-[0152]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang's teaching of training a small model from synthesized prompts of larger models by including Stremmel's teaching of using a second LLM to produce a structured result, such as a classification, based upon the outputs of multiple prior LLMs. Utilizing another LLM to further answer the original query or classify the outputs of prior LLMs can help reduce error and increase the efficiency of training the new model. Reviewing the outputs from both the original LLM and the actively training new model will help to reduce the error in the new model.

As to claims 14-16, computer program product claims 14-16 and method claims 3-5 are related as method and computer program product of using same, with each claimed element's function corresponding to the method step, respectively. Accordingly, claims 14-16 are similarly rejected under the same rationale as applied above with respect to the method claims.

Claim(s) 6-7 and 17-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of Thompson, further in view of Stremmel, and further in view of Liusie et al. ("LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise Comparisons using Large Language Models", 02/06/2024), hereinafter referred to as Liusie.
Regarding claim 6, Wang, in view of Thompson, and further in view of Stremmel, discloses all of the limitations of claim 5. However, Wang fails to disclose displaying that query and its corresponding initial structured result and new structured result to a user; and receiving an indication from the user of whether the initial structured result or the new structured result is superior. Liusie discloses a method for comparative assessment of LLM outputs. Liusie teaches displaying that query and its corresponding initial structured result and new structured result to a user ("Human evaluation, where annotators critically assess the quality of the outputs of natural language generation (NLG) systems, has been the gold standard approach," Liusie Introduction pg. 1, and Liusie Fig. 1 LLM Comparative Assessment shows an LLM comparing outputs from two LLMs and choosing which one is better; since human annotation has been the gold standard, replacing the LLM with a human in this scenario is an obvious conclusion); and receiving an indication from the user of whether the initial structured result or the new structured result is superior (same citations, Liusie Introduction pg. 1 and Liusie Fig. 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang's teaching of training a small model from synthesized prompts of larger models by including Liusie's teaching of comparing LLM outputs to decide which one answers the original query more effectively. This allows the user to facilitate training of the new model, as well as to evaluate the performance of both models, improve accuracy, reduce cost, and identify common LLM pitfalls such as hallucinations or biases.

Regarding claim 7, Wang, in view of Thompson, and further in view of Stremmel, discloses all of the limitations of claim 5. However, Wang fails to disclose sending that query and its corresponding initial structured result and new structured result to another LLM; and receiving an indication from the other LLM of whether the initial structured result or the new structured result is superior. Liusie teaches sending that query and its corresponding initial structured result and new structured result to another LLM (Liusie Fig. 1 LLM Comparative Assessment shows an LLM comparing outputs from two LLMs and choosing which one is better); and receiving an indication from the other LLM of whether the initial structured result or the new structured result is superior (Liusie Fig. 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang's teaching of training a small model from synthesized prompts of larger models by including Liusie's teaching of using another LLM to compare other LLM outputs to decide which one answers the original query more effectively. This allows the user to automate the comparison to facilitate training of the new model, as well as allowing the LLM to evaluate the performance of both models, improve accuracy, reduce cost, and identify common LLM pitfalls such as hallucinations or biases. This would further reduce costs and increase the speed at which the new model is trained and prepared accurately.
As to claims 17-18, computer program product claims 17-18 and method claims 6-7 are related as method and computer program product of using same, with each claimed element's function corresponding to the method step, respectively. Accordingly, claims 17-18 are similarly rejected under the same rationale as applied above with respect to the method claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US Patent Application Publication No. 2025/0259011
US Patent Application Publication No. 2024/0202539

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM MICHAEL WEAVER, whose telephone number is (571) 272-7062. The examiner can normally be reached Monday-Friday, 8AM-5PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADAM MICHAEL WEAVER/
Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658
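For technical readers mapping the claim language to an implementation, the claim-5 validation scheme at issue (per-query comparison of structured results, adjudication of disagreements, and a ratio threshold) might look like the following sketch. The function name, the dictionary representation, and the "llm"/"new" labels are hypothetical illustrations, not drawn from the application or the cited art.

```python
from typing import Any, Callable, Dict

def new_model_satisfies_condition(
    initial: Dict[str, Any],        # structured results via the original LLM
    new: Dict[str, Any],            # structured results via the new model
    adjudicate: Callable[[str, Any, Any], str],  # returns "llm" or "new"
    threshold_ratio: float = 1.0,
) -> bool:
    """Hypothetical sketch: compare structured results per query, adjudicate
    only the disagreements, and pass when new-model wins outnumber LLM wins
    by more than a threshold ratio."""
    wins = {"llm": 0, "new": 0}
    for query, old_result in initial.items():
        new_result = new[query]                 # assumes matching query keys
        if old_result != new_result:            # differs in substance:
            wins[adjudicate(query, old_result, new_result)] += 1
    if wins["llm"] == 0:                        # no LLM wins: any new-model
        return wins["new"] > 0                  # win clears the ratio
    return wins["new"] / wins["llm"] > threshold_ratio
```

The adjudicator here is just a callable, so it can be backed by a human reviewer (as in claim 6) or by another LLM performing pairwise comparison (as in claim 7) without changing the validation logic.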

Prosecution Timeline

May 24, 2024: Application Filed
Mar 16, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591752: ZERO-SHOT DOMAIN TRANSFER WITH A TEXT-TO-TEXT MODEL (2y 5m to grant; granted Mar 31, 2026)
Patent 12585765: SYSTEM AND METHOD FOR ROBUST NATURAL LANGUAGE CLASSIFICATION UNDER CHARACTER ENCODING (2y 5m to grant; granted Mar 24, 2026)
Patent 12579375: IMPLEMENTING ACTIVE LEARNING IN NATURAL LANGUAGE GENERATION TASKS (2y 5m to grant; granted Mar 17, 2026)
Patent 12562077: METHOD, COMPUTING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM TO TRANSLATE AUDIO OF VIDEO INTO SIGN LANGUAGE THROUGH AVATAR (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 92%
With Interview: 99% (+20.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
