Prosecution Insights
Last updated: April 19, 2026
Application No. 15/555,290

Ensemble-Based Research Recommendation Systems And Methods

Non-Final OA — §101, §112
Filed
Sep 01, 2017
Examiner
WOITACH, JOSEPH T
Art Unit
1687
Tech Center
1600 — Biotechnology & Organic Chemistry
Assignee
Nantomics LLC
OA Round
7 (Non-Final)
Grant Probability: 49% (Moderate)
Expected OA Rounds: 7-8
Time to Grant: 4y 8m
Grant Probability With Interview: 78%

Examiner Intelligence

Grants 49% of resolved cases.

Career Allow Rate: 49% (187 granted / 381 resolved; -10.9% vs TC avg)
Interview Lift: strong, +28.5% (share of resolved cases granted with interview vs without)
Typical Timeline: 4y 8m avg prosecution; 71 applications currently pending
Career History: 452 total applications across all art units
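The headline figures above are consistent with simple arithmetic: the 49% career allow rate is 187 grants over 381 resolved cases, and the 78% "with interview" figure is that base rate plus the reported +28.5% lift. A minimal sketch of that assumed additive model (the variable names are illustrative, not from the dashboard):

```python
# Illustrative arithmetic behind the dashboard figures, assuming the
# interview-adjusted probability is simply base allow rate + reported lift.
granted = 187        # career grants among resolved cases
resolved = 381       # total resolved cases

allow_rate = granted / resolved       # career allow rate
interview_lift = 0.285                # reported +28.5% interview lift
with_interview = allow_rate + interview_lift

print(f"{allow_rate:.0%}")       # 49%
print(f"{with_interview:.0%}")   # 78%
```

Whether the underlying model is additive or conditional on case mix is not stated; this just reproduces the displayed numbers.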

Statute-Specific Performance

§101: 35.0% (-5.0% vs TC avg)
§103: 18.7% (-21.3% vs TC avg)
§102: 4.2% (-35.8% vs TC avg)
§112: 25.4% (-14.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 381 resolved cases

Office Action

§101 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/21/2025 has been entered.

Applicant's Amendment

As requested, Applicant's after-final amendment filed 1/22/2024 has been received and entered. Claims 1 and 26 have been amended. Claims 1-31 are pending.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-31 are rejected under 35 U.S.C.
112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Specifically, the claims have been amended to require that accuracy gain be calculated using average, spread, number of models, maximum accuracy, or minimum accuracy, but on review of the specification there does not appear to be any support for a calculation that uses these limitations to assess the data and provide an evaluation of 'accuracy gain'. In review of the specification, general support for evaluating the models is provided for example at [0052]: "Accuracy of a model can be derived through use of evaluation models built from the known genomic data sets and corresponding known clinical outcome data sets. For a specific model template, modeling engine 135 can build a number of evaluation models that are both trained and validated against the input known data sets. For example, a trained evaluation model can be trained based on 80% of the input data. Once the evaluation model has been trained, the remaining 20% of the genomic data can be run through the evaluation model to see if it generates prediction data similar to or closest to the remaining 20% of the known clinical outcome data. The accuracy of the trained evaluation model is then considered to be the ratio of the number of correct predictions to the total number of outcomes.
Evaluation models can be trained using one or more cross-fold validation techniques." However, this appears to provide for evaluating the model's accuracy and does not support using the limitations for any specific calculations as required of the amended claims. It is acknowledged that the specification teaches that accuracy can be arithmetical differences and use a classifier; for example, [0055] "Another metric related to accuracy includes accuracy gain. Accuracy gain can be defined as the arithmetical difference between a model's accuracy and the accuracy of a "majority classifier". The resulting metric can be positive or negative. Accuracy gain can be considered a model's performance relative to chance with respect to the known possible outcomes." appears to generally support assessing the model, but fails to provide any specific link or arithmetic relationship now encompassed by the claims. The claims as amended appear to provide for new calculations and relationships for the calculations that are not taught in the present specification.

Additionally, the claims have been amended to require a new step of 'determining one or more patients that are suitable', but it appears that this is based only on genomic data, and there does not appear to be a clear correlation between the steps of the claim that rank research projects based on the variety of metrics listed and any specific means of determining for any given patient. Further, it is unclear how determining that a model is appropriate and ranking research projects would result in a step of 'causing' the identified drug to be administered. In light of the claim being directed to a system, there does not appear to be a system that is capable of causing or administering a drug; alternatively, the system as a whole might be interpreted as a set of instructions without any physical requirement or administration as set forth in the claims.
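The specification passages quoted above ([0052] and [0055]) describe two concrete computations: accuracy as the ratio of correct predictions to total outcomes on a held-out 20% split, and accuracy gain as a model's accuracy minus that of a "majority classifier". A minimal sketch of just those two definitions (the function names and sample labels below are hypothetical, not from the specification or claims):

```python
from collections import Counter

def accuracy(predictions, outcomes):
    # Ratio of correct predictions to total outcomes (per [0052]).
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

def majority_classifier_accuracy(outcomes):
    # A majority classifier always predicts the most frequent known outcome.
    (_, count), = Counter(outcomes).most_common(1)
    return count / len(outcomes)

def accuracy_gain(predictions, outcomes):
    # Arithmetical difference vs. the majority classifier (per [0055]);
    # the result can be positive or negative.
    return accuracy(predictions, outcomes) - majority_classifier_accuracy(outcomes)

# Hypothetical held-out 20% of known clinical outcomes and a model's predictions:
outcomes = ["responder", "responder", "responder", "non-responder", "non-responder"]
predictions = ["responder", "responder", "non-responder", "non-responder", "non-responder"]

print(accuracy(predictions, outcomes))                 # 0.8
print(round(accuracy_gain(predictions, outcomes), 2))  # 0.2
```

Note this only implements what [0052] and [0055] state; it does not supply the claimed average/spread/max/min calculation whose written-description support the rejection disputes.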
More clearly providing limitations set forth in the specification for calculating accuracy and accuracy gain, for the breadth of the claims' different models, uses, and possible research projects, could address the basis of the rejection.

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-31 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Specifically, as amended the claims appear vague and incomplete in the recitation of "ranked listing of potential research projects selected from the plurality of potential research projects according to an accuracy gain of each research project, wherein the accuracy gain is calculated using an average, spread, number of models, maximum accuracy, and/or minimum accuracy". Further, it is unclear how determining that a model is appropriate and ranking research projects would result in a step of 'causing' the identified drug to be administered. In light of the claim being directed to a system, there does not appear to be a system that is capable of causing or administering a drug; alternatively, the system as a whole might be interpreted as a set of instructions without any physical requirement or administration as set forth in the claims.
More clearly setting forth a specific calculation, or providing evidence that accuracy gain is a known term in the art and uses the limitations set forth in the claim, could address the basis of the rejection.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claim analysis

Claims 1 and 26 have been amended in prosecution and are generally directed to a system for clinical research project machine learning (claims 1-25) and to a method to generate machine learning results that would rank research projects (claims 26-31). The claim set has been amended to require a "ranked listing of potential research projects selected from the plurality of potential research projects according to an accuracy gain of each research project, wherein the accuracy gain is calculated using an average, spread, number of models, maximum accuracy, and/or minimum accuracy" and deletes the more general relationship of any model characteristic metric (previous claim limitation deleted). In view of the specification, there is no specific guidance for what any of the models are or what templates are used, and in view of the art of record there does not appear to be such models in evidence of record. For the use of a computer network, the specification appears to provide literal support, but does not provide for any specific configuration or how the system is to use the 'ensemble of trained digital clinical outcome models in parallel'.
With the specific method steps that are implemented, the system of claim 1 implements the method of claim 26, which comprises providing genomic and clinical data from a cohort that together represent a clinical outcome (for example, data obtained from a clinical study), providing several modeling/prediction models to evaluate the data from the cohort, and, based on the analysis, ranking potential projects from a plurality of potential research projects for further research/analysis. The dependent claims set forth more descriptive steps for the analysis: the types of algorithms that are used to classify, supervised learning (user input needed to check analysis), that at least ten algorithms are analyzed and assessed, and that metrics can be present and used to assess accuracy. With respect to interpreting the 'system' that implements the methodology, in review of the guidance of the specification the physical requirements of the system appear to be a memory to store the data for analysis (see [0018] for descriptions of types of data storage), and the modeling engine appears to be programs (see [0008]-[0010] for types of models that are known and referenced) and appears to use a processor (see [0018]). The claims provide the basis of analyzing research projects which themselves intend to investigate the effect of a drug/compound. As broadly and generically provided, the final step appears to encompass analyzing a variety of features of a possible drug with a variety of features for a possible outcome, and then based on modeling alone providing a drug to a patient in need thereof; but the final step appears so high level and generic as to be simply a suggestion to administer something to a patient that might have a disease or disorder being studied, with no necessary correlation to whether the patient would 'need' the drug, or that there would be any necessary benefit to the patient given the generic nature of the analysis required of the claims.
Response to Applicant's arguments

Applicants note the amendment to the claim for 'administering the drug' integrates the abstract idea into a practical application consistent with Example 48 provided by the USPTO. In response, the support and guidance of the specification is acknowledged and was considered in the claim analysis previously. However, the generic guidance of the specification fails to provide any definition or requirements in the claims establishing that analysis of research programs would provide a drug for a patient in need thereof, and does not appear to be a practical application. There is no guidance nor evidence of record for a modeling engine for generating an ensemble of clinical outcome predictions or the ability to assess any metrics for the ranking of research projects. The figures are prophetic, and the guidance lacks any specificity where the steps of the claims are interpreted to be no more than instructions and a mental process of reviewing and assessing possible correlations of genomic data and clinical data for possible areas of further investigation as broadly claimed.

Rejection of record

For step 1 of the 101 analysis, the claims are found to be directed to a statutory category of a product and process: the system comprises physical parts, and the method is stored on a non-transitory memory/medium and uses a computer for the data analysis steps. For step 2A of the 101 analysis, the judicial exception of the claims is the instructional steps of accessing genomic and clinical data from a cohort to assess and rank further research based on outcomes/correlations present in the data as provided by the 'at least one modeling engine' program. The steps of modeling and generating prediction models and ranking potential research projects are recited at a very high level and found to be instructional steps.
In review of the specification, there are no definitions or specific requirements in the practice of these steps as broadly set forth in the claims. Additionally, in the claims there is no requirement on the amount or complexity of the data that is obtained/stored, nor any specific modeling methodology recited in the claims; at a high level, the independent claims (if not reciting and requiring a computer) would comprise reviewing clinical results for a cohort and ranking the outcomes (and thus future studies) on the basis of any possible favorable clinical indices such as efficacy, lack of side effects,… that are common clinical outcomes that may be recorded in such data sets. The judicial exception is a set of instructions for analysis of genomic and clinical data that are considered mental processes, that is, concepts performed in the human mind (including an observation, evaluation, judgment, opinion). It is noted that the claims recite the use of a modeling computer, but clearly encompass modeling methods/machine learning methods that require supervised learning with human input (see the specification and claim 5, for example). Given the guidance of the specification and breadth of the claims, it appears that the computer implementation provides broadly for analysis tools to analyze and assess possible correlations that might exist in a data set that is first obtained. A review of the specification does not indicate that 'research projects' are generated anew from the claimed method steps; rather, the steps simply rank/classify outcomes/correlations present in the data for possible further research projects that may be proposed or contemplated as potential projects. Recent guidance from the Office requires that the judicial exception be evaluated under a second prong to determine whether the judicial exception is practically applied.
In the instant case, the claims do not have an additional element beyond the requirement that the method is computer implemented or that the system is a computer comprising a memory and processor. The judicial exception, the method steps that analyze cohort data, comprises steps recited at a high level of generality that are only stored on non-transitory media, and given the evidence of record as a whole is not found to be a practical application of the judicial exception as broadly set forth. For step 2B of the 101 analysis, as amended the claims recite the use of a network and distributed computing system; however, in view of the specification, while this appears to have literal support, it appears to be a suggestion to use such a system without any detailed guidance. Similar to the fact pattern in Alice, this appears to invoke the system as a tool, and is the conventional use of known conventional computer systems. In addition, the independent claims can be interpreted to recite an additional element of obtaining cohort genomic and clinical data; however, this appears to be simply data import into the system. There are no physical steps on how the data is obtained, and as such the claims do not provide for any additional element to consider under step 2B that provides for significantly more. It is noted that in explaining the Alice framework, the Court wrote that "[i]n cases involving software innovations, [the step one] inquiry often turns on whether the claims focus on the specific asserted improvement in computer capabilities or, instead, on a process that qualifies as an abstract idea for which computers are invoked merely as a tool." The Court further noted that "[s]ince Alice, we have found software inventions to be patent-eligible where they have made non-abstract improvements to existing technological processes and computer technology."
Moreover, these improvements must be specific: "[a]n improved result, without more stated in the claim, is not enough to confer eligibility to an otherwise abstract idea . . . [t]o be patent-eligible, the claims must recite a specific means or method that solves a problem in an existing technological process." Here, the claims appear to be directed to the use of a computer for 'modeling' broadly and, based generally on the assessment of correlations identified, identifying areas for further research, which could be assessed in one's mind or on paper by analyzing the cohort data. As noted, a review of the relevant art provides US Patent 7899764 (March 1, 2011) by Martin et al., which is provided as evidence for the broad overview of the use of machine learning to analyze medical ontologies and their use in patient care, more specifically for the use of information in medical data/research/ontologies for the assessment and possible treatment of a condition in a patient (see [0068] for example). Leung et al. (Machine Learning in Genomic Medicine: A Review of Computational Problems and Data Sets, Proceedings of the IEEE, Vol. 104, No. 1, January 2016) provides an overview of the art of machine learning and the use of genetics, noting "we do not expect computational methods to be able to entirely replace laboratory and clinical diagnosis, but they should greatly shorten the time required for these methods of analysis by reducing the search space of hypotheses that need to be validated." (at page 188). Similarly, Holzinger et al. (Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges, Springer 2014, ISSN 0302-9743) provide multiple detailed papers on advances in biomedical informatics; in paper #15, "Intelligent integrative knowledge bases: bridging genomics, integrative biology and translational medicine", Nguyen et al.
[62] present a perspective for data management, statistical analysis and knowledge discovery related to human disease, which they call an intelligent integrative knowledge base (I2KB). By building a bridge between patient associations, clinicians, experimentalists and modelers, I2KB will facilitate the emergence and propagation of systems medicine studies, which are a prerequisite for large-scale clinical trial studies, efficient diagnosis, disease screening, drug target evaluation and development of new therapeutic strategies. Each provides evidence that the use of modeling and analysis with AI was known and used for assessment of genetic data, and was further proposed for synthesizing and evaluating the broad sources of growing data for development of new drugs and therapies by focusing and evaluating available data to best manage patient care. As indicated in the summary of the judicial exception above and in view of the teachings of the specification, the steps are drawn to analysis of cohort data. While the instructions are stored on a medium and could be implemented on a computer, together the steps do not appear to result in significantly more than a means to compare sequences. The judicial exception of the method as claimed can be performed by hand and, in light of the previous claims to a computer medium and the teaching of the specification, on a computer. In review of the instant specification, the methods do not appear to require a special type of processor and can be performed on a general purpose computer. The second part, Step 2B of the two-step analysis, is to determine whether any element, or combination of elements, in the claim is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception. No additional steps are recited in the instantly claimed invention that would amount to significantly more than the judicial exception.
Without additional limitations, a process that employs mathematical algorithms (from the specification, using statistics to assess 'accuracy'; or claims 7-9, for example) to assess existing information (identify a correlation in the cohort data if it exists) to generate additional information is not patent eligible. Furthermore, if a claim is directed essentially to a method of calculating, using a mathematical formula, even if the solution is for a specific purpose, the claimed method is non-statutory. In other words, patenting an abstract idea (ranking potential research projects based on past research project cohort data) cannot be circumvented by attempting to limit the use to a particular technological environment or purpose and desired result. Based upon an analysis with respect to the claim as a whole, claims 1-31 do not recite something significantly different than a judicial exception. Claims 1-31 are directed towards a method of receiving genomic and cohort data and comparing the data to identify further research projects. Dependent claims set forth additional steps which more specifically define the considerations and steps of calculating and comparing, and do not add additional elements which result in significantly more to the claimed method for the analysis. In prosecution, Applicants have relied on and provided a summary of the PTAB decision and rejection, noting that the claims require a large number of trained models on a distributed network of computers, and parallel processing, which are limitations that cannot be performed in the human mind. In response, the amendments were acknowledged; however, these embodiments, in view of the lack of necessary guidance in the specification, appear to be similar to the fact pattern in the Alice decision, and these elements simply invoke the use of the computer as a tool.
Again, the specification fails to provide any guidance for how these distributed networked computers are to function, or how to implement parallel processing for any of the steps of the claims, and appears to provide for generic computer systems. While parallel processing may provide faster analysis, without any guidance and any evidence of record there does not appear to be any more than the suggestion to use these systems to implement the method, which is the judicial exception. Additionally, Applicants stated that the claims require that the models are 'digital' and that a million models cannot be performed in one's mind. In response, as reviewed above, the specification fails to provide any specific model, let alone the million required of the claims. As discussed above, in view of the generic requirements of the claims for the use of any sort of classifier, and the lack of evidence of record that the use of one, or even at least ten as set forth in the dependent claims, provides any advantage to how the computer operates or provides a better ranking, Applicants' arguments appear inconsistent with any improvement to computer technology or removal of bias, since the claims generically recite and require the use of algorithms known in the art without any specific guidance on how they are to be implemented. Claim 1 generically sets forth the limitation of generating a ranked listing, and in view of the guidance of the specification, there is no clear or necessary guidance for how these known classifiers are to be implemented such that together there is any improvement, or even specifically how the ranking would be affected.
The general nature of the disclosure, suggesting that multiple classifiers could be used to analyze data sets to assist in analyzing potential research projects, appears to set forth abstract steps and a judicial exception not necessarily tied to a computer environment, nor an improvement, since the computer would simply implement the classifier algorithm to rank projects. Given the generic nature and lack of specific requirement or guidance for integration of multiple classifiers, the claims appear to be directed simply to the judicial exception. In contrast to Applicants' arguments that the claims provide specific guidance to generate a ranked listing, it is noted that the dependent claims provide only general and generic requirements that models measure accuracy gain (claims 6-7 or 13) or use an area-under-curve metric, without even a requirement that any of the modeling systems provide this type of output or analysis to the end of being able to measure accuracy. Further, the elements that are evaluated are provided as generic possible data outcomes that should be evaluated, such as the number of drugs, with no indication of any specificity (see claims 11, 14, 15, 16), and possible sources of genetic data (claims 17-19), without any indication of how the data is to be evaluated, or how it is to be correlated to the number of possible drugs, and this is relative to and dependent on the details of the generic and undefined projects to be ranked. Further, the types of research projects proposed in the claims provide for a generic listing of 'prediction studies' (see claims 22-23) without any guidance in the specification as to how the data is to be analyzed, or whether a correlation exists between the proposed projects and any of the data that, analyzed and potentially correlated, would/could extend any of the particular studies generically set forth, or any specific study, in such a way that a ranking of projects would be performed by a computer or over a network (claim 25).
Given the lack of specific guidance in the specification, the broad generic limitations for elements that are to be considered as data sources, and the general nature of using known models and classifiers without any particular guidance to rank potential research projects, it is found that the claims are broadly directed to a judicial exception: the concept of analyzing relevant clinical and genetic data and, based on the data, ranking proposed future research projects, with the general indication to use machine learning.

Conclusion

No claim is allowed. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Joseph T Woitach, whose telephone number is (571) 272-0739. The examiner can normally be reached Mon-Fri, 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Karlheinz R Skowronek, can be reached at (571) 272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Joseph Woitach/
Primary Examiner, Art Unit 1687

Prosecution Timeline

Sep 01, 2017
Application Filed
May 25, 2020
Non-Final Rejection — §101, §112
Aug 21, 2020
Response Filed
Aug 21, 2020
Response after Non-Final Action
Sep 08, 2020
Response Filed
Dec 02, 2020
Final Rejection — §101, §112
Jan 14, 2021
Applicant Interview (Telephonic)
Jan 14, 2021
Examiner Interview Summary
Mar 05, 2021
Notice of Allowance
Mar 05, 2021
Response after Non-Final Action
Mar 15, 2021
Response after Non-Final Action
Jun 19, 2021
Response after Non-Final Action
Aug 27, 2021
Response after Non-Final Action
Aug 30, 2021
Response after Non-Final Action
Aug 31, 2021
Response after Non-Final Action
Aug 31, 2021
Response after Non-Final Action
Jul 20, 2022
Response after Non-Final Action
Aug 29, 2022
Request for Continued Examination
Aug 31, 2022
Response after Non-Final Action
Mar 25, 2023
Non-Final Rejection — §101, §112
Jun 01, 2023
Interview Requested
Jun 06, 2023
Examiner Interview Summary
Jun 06, 2023
Applicant Interview (Telephonic)
Jun 29, 2023
Response Filed
Oct 07, 2023
Final Rejection — §101, §112
Jan 12, 2024
Request for Continued Examination
Jan 17, 2024
Response after Non-Final Action
May 18, 2024
Non-Final Rejection — §101, §112
Aug 22, 2024
Response Filed
Nov 18, 2024
Final Rejection — §101, §112
Jan 22, 2025
Response after Non-Final Action
Feb 21, 2025
Request for Continued Examination
Feb 25, 2025
Response after Non-Final Action
Sep 10, 2025
Non-Final Rejection — §101, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603152
METHODS AND APPLICATIONS OF GENE FUSION DETECTION IN CELL-FREE DNA ANALYSIS
2y 5m to grant • Granted Apr 14, 2026
Patent 12525361
SYSTEMS AND METHODS FOR MODELLING PHYSIOLOGIC FUNCTION USING A COMBINATION OF MODELS OF VARYING DETAIL
2y 5m to grant • Granted Jan 13, 2026
Patent 12522819
SYSTEMS AND METHODS FOR DETERMINING NUCLEIC ACIDS
2y 5m to grant • Granted Jan 13, 2026
Patent 12522820
SYSTEMS AND METHODS FOR DETERMINING NUCLEIC ACIDS
2y 5m to grant • Granted Jan 13, 2026
Patent 12516385
METHODS FOR USING MOSAICISM IN NUCLEIC ACIDS SAMPLED DISTAL TO THEIR ORIGIN
2y 5m to grant • Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 49%
With Interview (+28.5%): 78%
Median Time to Grant: 4y 8m
PTA Risk: High
Based on 381 resolved cases by this examiner. Grant probability derived from career allow rate.
