Prosecution Insights
Last updated: April 19, 2026
Application No. 18/602,071

DOMAIN-ADAPTIVE PRE-TRAINING OF INSTRUCTION-TUNED LLMS FOR RADIOLOGY REPORT IMPRESSION GENERATION

Non-Final OA: §101, §103
Filed: Mar 12, 2024
Examiner: HAYNES, DAWN TRINAH
Art Unit: 3686
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Siemens Healthineers AG
OA Round: 2 (Non-Final)
Grant Probability: 2% (At Risk)
OA Rounds: 2-3
To Grant: 4y 7m
With Interview: 5%

Examiner Intelligence

Career Allow Rate: 2% (1 granted / 67 resolved; -50.5% vs TC avg)
Interview Lift: +3.5% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 4y 7m typical timeline; 32 applications currently pending
Total Applications: 99 (career history; across all art units)

Statute-Specific Performance

§101: 38.6% (-1.4% vs TC avg)
§103: 36.2% (-3.8% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)
Tech Center averages shown for comparison. Based on career data from 67 resolved cases.

Office Action

§101 §103
DETAILED ACTION

The present office action represents a second non-final action on the merits.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application claims the priority date of foreign application IN202311034814 of May 18, 2023.

Status of Claims

Claims 1-20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claims 1-8 are drawn to a computer-implemented method, which is within the four statutory categories (i.e., process). Claims 9-13 are drawn to an apparatus, which is within the four statutory categories (i.e., machine). Claims 14-17 are drawn to a non-transitory computer-readable storage medium, which is within the four statutory categories (i.e., machine). Claims 18-20 are drawn to a computer-implemented method, which is within the four statutory categories (i.e., process).

Claims 1-8 recite a computer-implemented method comprising: receiving input medical data associated with a medical domain; performing a clinical task based on the input medical data using a trained language model; and outputting results of the clinical task, wherein the trained language model is trained by: receiving domain-specific training data associated with the medical domain, and training a pretrained, instruction-tuned language model for the medical domain using the domain-specific training data.

Claim 9 recites an apparatus in addition to the same abstract idea that is recited in claim 1.
Claim 14 recites a non-transitory computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out operations in addition to the same abstract idea that is recited in claim 1. Claim 18 recites a computer-implemented method in addition to the same abstract idea that is recited in claim 1.

The bolded limitations, given the broadest reasonable interpretation, cover a certain method of organizing human activity and/or mathematical concepts, but for the recitation of generic computer components (in this case, a computer and an apparatus). The underlined limitations are not part of the identified abstract idea (the method of organizing human activity or mathematical concepts), are deemed “additional elements,” and will be discussed in further detail below.

Dependent claims 2-8, 10-13, 15-17, and 19-20 are similarly rejected because they either further define/narrow the abstract idea and/or do not further limit the claim to a practical application or provide an inventive concept such that the claims are subject matter eligible, even when considered individually or as an ordered combination. These limitations only serve to further limit the abstract idea (or contain the same additional elements found in the independent claim), and hence are nonetheless directed towards fundamentally the same abstract idea as independent claims 1, 9, 14, and 18.

The additional elements from claims 1, 9, 14, and 18 include: receiving input medical data (apply it, MPEP 2106.05(f); insignificant extra-solution activity, MPEP 2106.05(g)); and receiving domain-specific training data (apply it, MPEP 2106.05(f); insignificant extra-solution activity, MPEP 2106.05(g)). The additional elements from claim 9 include: an apparatus (apply it, MPEP 2106.05(f)).
The additional elements from claim 14 include: a non-transitory computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out operations (apply it, MPEP 2106.05(f)).

These additional elements in the independent claims are not integrated into a practical application because the additional elements (i.e., the limitations not identified as part of the abstract idea) amount to no more than limitations which: amount to mere instructions to apply an exception (for example, the recitation of an apparatus and computer; see Specification paragraphs [0107]-[0109]; see MPEP 2106.05(f)); or amount to insignificant extra-solution activity (for example, the recitation of receiving input medical data; see Specification paragraph [0027]; see MPEP 2106.05(g)).

Furthermore, the claims do not include additional elements that are sufficient to amount to “significantly more” than the judicial exception because the additional elements (i.e., the elements other than the abstract idea) amount to no more than elements that have been recognized as well-understood, routine, and conventional activity in particular fields, as demonstrated by: the Specification, which discloses that the additional elements are well-understood, routine, and conventional in nature (i.e., paragraphs [0107]-[0109] disclose that the additional elements (an apparatus and computer) comprise a plurality of different types of generic computing systems configured to perform generic computer functions (receiving and transmitting data) that are well-understood, routine, and conventional activities previously known to the pertinent industry, i.e., healthcare and domain-adaptive pre-training of instruction-tuned LLMs for radiology report impression generation); and relevant court decisions demonstrating well-understood, routine, and conventional activities, e.g., see
MPEP 2106.05(d)(II): Receiving input medical data and domain-specific training data, e.g., see Intellectual Ventures v. Symantec.

Dependent claims 2-8, 10-13, 15-17, and 19-20 include other limitations, but none of these functions are deemed significantly more than the abstract idea. Thus, taken alone, the additional elements do not amount to “significantly more” than the above-identified abstract idea. Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually; there is no indication that the combination of elements improves any other technology, and their collective functions merely provide conventional computer implementation. The application is an attempt to organize human activity or relates to mathematical concepts, using domain-specific training data. The inventive concept is the domain-adaptive pre-training of instruction-tuned LLMs for radiology report impression generation, which is not patentable. Therefore, whether taken individually or as an ordered combination, claims 1-20 are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8, 10-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chang (U.S. Pub. No. 2021/0082561 A1) in view of Dinu (U.S. Pub. No. 2024/0354319 A1).

Regarding claim 1, Chang discloses a computer-implemented method comprising: receiving input medical data associated with a medical domain (Paragraphs [0002] and [0013] discuss receiving a set of findings inputs in the radiology field.); performing a clinical task based on the input medical data using a trained language model (Paragraphs [0009] and [0011]-[0013] discuss inputs and outputs in a system/method for the automated generation of impression text in a radiology report using a trained machine learning model.); and outputting results of the clinical task, wherein the trained language model is trained by (Paragraphs [0009] and [0011]-[0013] discuss inputs and outputs in a system/method for the automated generation of impression text in a radiology report using a trained machine learning model.): receiving domain-specific training data associated with the medical domain, and training a pretrained, tuned language model for the medical domain using the domain-specific training data (Paragraphs [0025], [0028]-[0029], [0035], and [0061] discuss models that are trained, pretrained, fine-tuned, or use other forms of transfer learning (e.g., based on a pretrained model), combined with one or more ontologies (e.g.,
radiological or other clinical ontology database), and/or any combination of these.).

Chang does not explicitly disclose: an instruction-tuned language model. Dinu teaches: an instruction-tuned language model (Paragraphs [0004] and [0040] discuss one or more dialog flows that can be used to provide natural language instructions to a language model, such as to an instruction-tuned LLM.). Therefore, it would have been obvious to one of ordinary skill in the art to modify Chang to include an instruction-tuned language model, as taught by Dinu, in order to provide more effective techniques for constraining language models to generate desired outputs (Dinu Paragraph [0005]).

Regarding claims 2, 10, and 19, Chang discloses wherein the pretrained, tuned language model is trained by: performing general pretraining of a language model using non-domain-specific training data (Paragraph [0035] discusses that the model(s) can be any or all of: trained, pretrained, fine-tuned, or using other forms of transfer learning (e.g., based on a pretrained model), combined with one or more ontologies (e.g., radiological or other clinical ontology database), and/or any combination of these.); and performing instruction tuning on the general pretrained language model using labeled training data (Paragraphs [0035] and [0061] discuss that the model(s) can be any or all of: trained, pretrained, fine-tuned, or using other forms of transfer learning (e.g., based on a pretrained model), combined with one or more ontologies (e.g., radiological or other clinical ontology database), and/or any combination of these.).

Chang does not explicitly disclose: an instruction-tuned language model. Dinu teaches: an instruction-tuned language model (Paragraphs [0004] and [0040] discuss one or more dialog flows that can be used to provide natural language instructions to a language model, such as to an instruction-tuned LLM.).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Chang to include an instruction-tuned language model, as taught by Dinu, in order to provide more effective techniques for constraining language models to generate desired outputs (Dinu Paragraph [0005]).

Regarding claims 3, 11, and 20, Chang discloses wherein a same loss function is used for performing the general pretraining, performing the tuning, and the training (Paragraphs [0062] and [0065] discuss that preprocessing can optionally additionally or alternatively include upweighting or upsampling sets of cases in the training data, which can function to help the model(s) properly handle complex and/or particularly important (e.g., critical, life-threatening, etc.) cases, and includes implementing a loss function to upweight cases so that the model pays closer attention to them; for example, cancer cases, which are complicated to interpret and determine accurate impressions for, are upweighted through a loss function. Preprocessing can include training separate models based on a set of preferences (e.g., preferred and/or prescribed recommendations, radiology group preferences, radiologist preferences, healthcare facility preferences, preferred follow-up treatments, etc.); for instance, models are trained separately to be able to determine particular recommendations based on the patient's condition.).

Chang does not explicitly disclose: instruction tuning. Dinu teaches: instruction tuning (Paragraphs [0004] and [0040] discuss one or more dialog flows that can be used to provide natural language instructions to a language model, such as to an instruction-tuned LLM.). Therefore, it would have been obvious to one of ordinary skill in the art to modify Chang to include instruction tuning, as taught by Dinu, in order to provide more effective techniques for constraining language models to generate desired outputs (Dinu Paragraph [0005]).
Regarding claims 4 and 12, Chang discloses wherein training a pretrained, tuned language model for the medical domain using the domain-specific training data comprises: updating only parameters of certain layers of the pretrained, tuned language model at each iteration (Paragraphs [0032], [0065], and [0106]-[0108] discuss that the models can be trained to reflect recommendations which often get updated; for instance, the recommendation data being trained on is historical, wherein the method includes tagging that recommendation and updating it and/or flagging it so that it can be updated. In specific examples, a token is used to tag a portion of the impression (e.g., corresponding to outdated and/or potentially outdated information) when training the model, wherein in post-processing, logic adjusts the language corresponding to the token to reflect the up-to-date language. Further, the set of models preferably includes one or more deep learning models configured for natural language processing (NLP) (e.g., models configured to handle sequential data), such as one or more deep learning models with attention mechanisms, such as any or all of: a sequence-to-sequence architecture; one or more attention layers (e.g., in one or more encoders, in one or more decoders, etc.); one or more self-attention layers (e.g., in one or more encoders, in one or more decoders, etc.); and/or any other tools, features, and/or architecture.).

Chang does not explicitly disclose: an instruction-tuned language model. Dinu teaches: an instruction-tuned language model (Paragraphs [0004] and [0040] discuss one or more dialog flows that can be used to provide natural language instructions to a language model, such as to an instruction-tuned LLM.).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Chang to include an instruction-tuned language model, as taught by Dinu, in order to provide more effective techniques for constraining language models to generate desired outputs (Dinu Paragraph [0005]).

Regarding claims 5 and 13, Chang discloses wherein training a pretrained, tuned language model for the medical domain using the domain-specific training data comprises: adding domain-specific vocabulary for the medical domain to the pretrained, tuned language model (Paragraphs [0061] and [0106] discuss that preprocessing can optionally additionally or alternatively include adding and/or removing items from training data (e.g., using a syntheticator), which can include any or all of: training the model(s) to add phrasing and/or use particular language which is recommended and/or prescribed (e.g., based on standards of a radiology society and/or group, based on preferences of a radiology group, based on preferences of a particular radiologist, based on preferences of a healthcare facility for coding and/or billing optimization and/or to satisfy coding and/or billing requirements or standards, etc.).).

Chang does not explicitly disclose: an instruction-tuned language model. Dinu teaches: an instruction-tuned language model (Paragraphs [0004] and [0040] discuss one or more dialog flows that can be used to provide natural language instructions to a language model, such as to an instruction-tuned LLM.). Therefore, it would have been obvious to one of ordinary skill in the art to modify Chang to include an instruction-tuned language model, as taught by Dinu, in order to provide more effective techniques for constraining language models to generate desired outputs (Dinu Paragraph [0005]).
Regarding claims 6 and 15, Chang discloses wherein the input medical data comprises a findings section of a radiology report and the clinical task comprises generation of an impressions section of the radiology report (Paragraph [0013] and FIG. 2 discuss a method for automatically generating impression text (and/or any other suitable fields of a radiology report, such as comparisons, contrast amounts, specific measurements, etc.) that includes: receiving a radiologist identifier (radiologist ID); receiving a set of findings inputs and optionally other inputs; determining a context of each of the set of inputs; determining an impression based on the context and the radiologist style; and inserting the impression text (and/or any other suitable text) into the report.).

Regarding claims 7 and 16, Chang discloses wherein the medical domain is radiology (Paragraphs [0002] and [0013] discuss receiving a set of findings inputs in the radiology field.).

Regarding claims 8 and 17, Chang discloses wherein the trained language model is a trained large language model (Paragraphs [0011] and [0027] discuss that generating a field (e.g., impression field) of a radiology report includes a set of one or more models that includes one or more machine learning models, further preferably one or more deep learning models or a transformer machine learning model. Additionally or alternatively, the set of models can include any or all of: algorithms, equations, rules and/or rulesets, databases, lookup tables, and/or any other suitable tools for generating, checking, editing, and/or otherwise processing language information in a radiology report.).

Regarding claim 18, Chang discloses a computer-implemented method (Paragraph [0013] discusses a method for automatically generating impression text of a radiology report.). Claim 18 includes substantially the same limitations discussed above in claim 1 and is similarly rejected.

Claims 9 and 14 are rejected under 35 U.S.C.
103 as being unpatentable over Chang in view of Dinu, and in further view of Rahbar (U.S. Pub. No. 2022/0059200 A1).

Regarding claim 9, Chang discloses the same limitations discussed above in claim 1, with the addition of an apparatus, and is similarly rejected. Chang does not explicitly disclose: an apparatus. Rahbar teaches: an apparatus (Paragraph [0084] discusses an apparatus.). Therefore, it would have been obvious to one of ordinary skill in the art to modify Chang to include an apparatus, as taught by Rahbar, in order to provide detailed descriptions of the imaging findings, anatomical considerations that may impact operative planning, and recommendations for additional work-up (Rahbar Paragraph [0007]).

Regarding claim 14, Chang discloses the same limitations discussed above in claim 1, with the addition of a non-transitory computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out operations, and is similarly rejected. Chang does not explicitly disclose: a non-transitory computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out operations. Rahbar teaches: a non-transitory computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out operations (Paragraph [0081] discusses that the computer systems may include or be implemented via computer-executable instructions stored on non-transitory computer-readable media.).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Chang to include a non-transitory computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out operations, as taught by Rahbar, in order to provide detailed descriptions of the imaging findings, anatomical considerations that may impact operative planning, and recommendations for additional work-up (Rahbar Paragraph [0007]).

Response to Arguments

Applicant's arguments filed 11/6/2025 have been fully considered.

Rejections under 35 U.S.C. 101:

With respect to claim 1 and the 35 U.S.C. 101 rejection, Applicant's arguments fail to overcome the previous rejection. Claim 1 recites an abstract idea, a method of organizing human activity or mathematical concepts. See MPEP 2106.04(a)(2)(II)(C) (Managing Personal Behavior or Relationships or Interactions Between People) and MPEP 2106.04(a)(2)(I)(A) (Mathematical Concepts).

Step 2A, Prong One

Applicant states, “The claims generally relate to an innovative general-pretrain-prompt-tune-and-special-pretrain approach for training a language model for performing a clinical task. The claims are not directed to a fundamental economical principle or practice, a commercial or legal interaction, or managing personal behavior or relationships or interactions between people.” (Remarks, page 6). Applicant further states, “the claims are not similar to any of the examples listed in MPEP 2106.04(a)(2)(II)(C) and are not directed to managing personal behavior, relationships, or activities between people or any other method of organizing human activity. The claims do not claim the performance of any step as being performed by a person and the specification does not describe any of the steps of claim 1 as being required to be performed by a person.” (Remarks, page 6). Applicant states, “claims do not recite a mathematical concept.
While the claims may arguably be based on or involve a mathematical concept, the claims do not actually recite a mathematical concept. For instance, the claims do not recite a mathematical relationship, a mathematical formula or equation, or a mathematical calculation, either expressed in words or in mathematical symbols. Thus, the claims do not recite a mathematical concept.” (Remarks, page 7).

Examiner respectfully disagrees. The Application recites the abstract idea of domain-adaptive pre-training of instruction-tuned LLMs for radiology report impression generation. The claimed invention does not provide an improvement in technology. Generating the impressions section of a radiology report from the findings section of the radiology report is not a technical problem rooted in the technology. Here, there is no improvement to the apparatus or any other device; the apparatus is used to receive input medical data and output results of the clinical task, which itself is not an improvement. Applicant's claims are directed to gathering information, organizing the information, comparing the information, and presenting the information.

Step 2A, Prong Two

While practical application is a way to overcome the Prong Two 35 U.S.C. 101 rejection, here, claim 1 fails to integrate the recited judicial exception into a practical application. Applicant states that the “claims are integrated into the practical application of an improvement in the functioning of a computer or other technology. Specifically, the claims are integrated into the practical application of an improved approach for training a language model for performing a clinical task.” (Remarks, page 9). Applicant states, “Advantageously, language models trained in accordance with embodiments of the invention have been experimentally found to significantly improve performance of the impressions of generation task, which will simplify and improve adaptation by clinicians. (Specification; para. [0022]).” (Remarks, pages 9-10).
Examiner respectfully disagrees. The “novel three-stage approach for training a pretrained language model for performing domain-specific tasks: 1) general pretraining, 2) prompt-tuning, and 3) domain-specialized pretraining” does not result in a practical application, as it is recited as part of the abstract idea, as stated above. All components in the claims are being used for their intended purpose and as written do not result in a practical application. Here, the improvement is to the abstract idea. For the reasons stated above, claims 9, 14, and 18 similarly fail to overcome the 35 U.S.C. 101 rejection.

Step 2B

All components in the claims are being used for their intended purpose and as written do not result in a practical application or significantly more than the abstract idea. Applicant states, “One example of an element that the courts have found to qualify as significantly more is an element adding a specific limitation other than what is well-understood, routine, conventional activity in the field, or adding unconventional steps that confine the claim to a particular useful application. See MPEP 2106.05(d). The claims recite elements that amount to significantly more than the alleged abstract idea itself. Specifically, claim 1 requires ‘training a pretrained, instruction-tuned language model for the medical domain using the domain-specific training data.’ The cited reference does not teach or suggest at least these limitations of claim 1. It follows that the claims are not well understood, routine, or conventional.” (Remarks, page 10).

Examiner respectfully disagrees. Receiving or transmitting data over a network, e.g., using the Internet to gather data, has been recognized as well-understood, routine, and conventional activity. Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). See MPEP 2106.05(d).
Here, claim 1 recites “receiving input medical data associated with a medical domain; performing a clinical task based on the input medical data using a trained language model; and outputting results of the clinical task, wherein the trained language model is trained by: receiving domain-specific training data associated with the medical domain, and training a pretrained, instruction-tuned language model for the medical domain using the domain-specific training data.” The computer functions are well-understood, routine, and conventional functions claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.

Rejections under 35 U.S.C. 103:

With respect to claim 1 and the 35 U.S.C. 102 rejection, Examiner withdraws the rejection; Applicant's arguments with respect to claim 1 have been considered, and the Examiner's rejection has been updated to address claim 1. Examiner has similarly amended the rejection for claims 9, 14, and 18.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAWN TRINAH HAYNES, whose telephone number is (571) 270-5994. The examiner can normally be reached M-F 7:30-5:15 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Dunham, can be reached at (571) 272-8109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAWN T. HAYNES/
Art Unit 3686

/RACHELLE L REICHERT/
Primary Examiner, Art Unit 3686

Prosecution Timeline

Mar 12, 2024
Application Filed
Sep 25, 2025
Non-Final Rejection — §101, §103
Nov 06, 2025
Response Filed
Feb 11, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12469037: COMPUTING DEVICE, METHOD AND COMPUTER PROGRAM PRODUCT FOR CONSTRUCTING A CONSOLIDATED MESSAGE
Granted Nov 11, 2025 (2y 5m to grant)

Patent 12437852: System and Method for Audible Prescription Label Information Using RFID Prescription Packaging
Granted Oct 07, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 2%
With Interview (+3.5%): 5%
Median Time to Grant: 4y 7m
PTA Risk: Moderate
Based on 67 resolved cases by this examiner. Grant probability derived from career allow rate.
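The headline figures above are consistent with a straightforward derivation from the examiner's career data (1 granted / 67 resolved, +3.5 percentage-point interview lift). The sketch below shows one way the numbers reconcile; the rounding rules are assumptions for illustration, not the tool's documented method:

```python
import math

# Career data shown above (assumed inputs)
granted, resolved = 1, 67
interview_lift = 3.5  # percentage points

# Raw career allow rate: 100 * 1/67 ~= 1.49%
allow_pct = 100 * granted / resolved

# Assumption: the displayed 2% is the raw rate rounded up,
# while the with-interview figure adds the lift to the raw
# (unrounded) rate first: round(1.49 + 3.5) = 5.
grant_probability = math.ceil(allow_pct)            # 2
with_interview = round(allow_pct + interview_lift)  # 5

print(grant_probability, with_interview)
```

Note that rounding the displayed 2% first and then adding the lift would give 5.5%, not the 5% shown, which is why the sketch applies the lift to the raw rate.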
