Prosecution Insights
Last updated: April 19, 2026
Application No. 18/678,501

SYSTEM AND METHOD FOR UTILIZING LARGE LANGUAGE MODELS AND NATURAL LANGUAGE PROCESSING TECHNOLOGIES TO PRE-PROCESS AND ANALYZE DATA TO IMPROVE DETECTION OF CYBER THREATS

Non-Final OA: §101, §103
Filed: May 30, 2024
Examiner: HARRIS, CHRISTOPHER C
Art Unit: 2432
Tech Center: 2400 — Computer Networks
Assignee: Darktrace Holdings Limited
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (275 granted / 362 resolved; +18.0% vs TC avg, above average)
Interview Lift: +26.2% for resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 21 applications currently pending
Career History: 383 total applications across all art units

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 38.4% (-1.6% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 24.4% (-15.6% vs TC avg)
TC average figures are estimates. Based on career data from 362 resolved cases.
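Assuming the "vs TC avg" figures are simple percentage-point differences (an assumption; the page does not define them), every row in the table above implies the same ~40% Tech Center baseline, which can be checked in a few lines:

```python
# Examiner's per-statute rates and deltas vs the Tech Center average,
# copied from the table above (all values in percent).
rates  = {"101": 14.2, "103": 38.4, "102": 14.5, "112": 24.4}
deltas = {"101": -25.8, "103": -1.6, "102": -25.5, "112": -15.6}

# Implied TC baseline per statute, assuming delta = rate - tc_avg.
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
```

Every statute implies the same 40.0% baseline, suggesting the comparison is against a single TC-wide figure rather than per-statute averages.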

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

DETAILED ACTION

Remarks

This action is in response to communications filed on 02/13/2026. Claims 1-17 are presently pending in the application and have been considered as follows.

Election/Restrictions

Applicant's election with traverse of Group I in the reply filed on 02/13/2026 is acknowledged. The traversal is on the ground(s) that the groups are not patentably distinct from each other because the logic of Group I represents the core functional logic of Group II. Applicant further contends that searching for the second LLM of Group II would necessarily encompass a search for Group I, thereby negating any serious search burden. This is not found persuasive at least because the applicant does not admit that the inventions are obvious variants and there is nothing on the record to indicate that they are. Furthermore, each group is directed to a different statutory category, namely a process and an apparatus.
MPEP 806.05(e) states: “Process and apparatus for its practice can be shown to be distinct inventions, if either or both of the following can be shown: (A) that the process as claimed can be practiced by another materially different apparatus or by hand; or (B) that the apparatus as claimed can be used to practice another materially different process.” In this case, Group I does not require the specific structure of Group II; that is, Group I can be practiced by a materially different apparatus, such as a single large language model (LLM) server or a standard machine learning system that does not use LLMs for correlation, while Group II requires a specific three-LLM architecture that includes a first LLM for complex filtering and a third LLM for anonymizing personally identifiable information (PII), which are not required by the method of Group I. Additionally, the inventions require different fields of search, as previously indicated in the restriction requirement mailed December 3, 2025. Finally, a search for dependent claim 7 (complex filter) and dependent claim 6 (PII anonymization) does not obviate the burden of searching the multi-LLM hardware and/or software architecture specifically recited within Group II. The requirement is still deemed proper and is therefore made FINAL.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”

Claims 1-8 and 13-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Claim Interpretation: Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art.
See MPEP 2111. Claim 1 is directed to a method for enhancing detection of cyber threats by a cybersecurity system through use of one or more Large Language Models (LLMs), the method comprising: generating a first embedding vector based on content within a first data set, wherein the first data set includes at least a first user credential; generating a second embedding vector based on content within a second data set, wherein the second data set includes at least a second user credential; based on a level of correlation between the first embedding vector and the second embedding vector exceeding a prescribed value, identifying a user is associated with both the first user credential and the second user credential; and training an Artificial Intelligence (AI) model using both the content within the first data set and the content within the second data set to improve detection of cyber threats associated with data pertaining to the user.

Broadly, the claim elements are interpreted as follows. The specification states a “cybersecurity system” includes several components configured to communicate and cooperate with each other, including a cybersecurity appliance implemented with a cyber threat detection engine, a cyber threat autonomous response engine, a cyberattack simulation engine, a cyberattack restoration engine, and other components such as one or more large language models (hereinafter, “LLM(s)”). For example, the LLM(s) and natural language processing (NLP) logic may be configured to (i) pre-process data. Thus, the broadest reasonable interpretation of a firewall device is any physical or virtual entity that processes packets based on rules. Furthermore, the specification states an “LLM” can be one or more AI-based algorithms that have been trained on a large amount of text-based data, typically scraped from publicly available resources.
Furthermore, the specification does not appear to provide a special definition of an “Artificial Intelligence (AI) model,” and thus the AI model has broadly and reasonably been interpreted to have the same interpretation as an LLM. The term “embedding vector” has been interpreted in light of the specification to broadly mean a string of text data or a text character pattern based on content extracted from data. The term “user credential” has been interpreted to broadly mean usernames, other presences in different platforms, and/or authentication data. The term “data set” has been interpreted to broadly mean data compiled from any source. The term “a level of correlation” appears to be directed to a fuzzy hash computation and comparison, and thus involves a mathematical computation. The claim limitation “based on a level of correlation between the first embedding vector and the second embedding vector exceeding a prescribed value, identifying a user is associated with both the first user credential and the second user credential” has, as previously noted, been interpreted to mean performing a mathematical computation involving data. Additionally, identifying based on correlation encompasses a mental observation as well. These limitations are each directed to the abstract ideas of a mental process and a mathematical concept.

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claims recite a method and a non-transitory storage medium. These are directed to a series of steps or acts and a manufacture, and fall within the statutory categories of invention. (Step 1: YES).

Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. See MPEP 2106.04. Claim 1 recites a method for enhancing detection of cyber threats by a cybersecurity system through use of one or more Large Language Models (LLMs), the method comprising: generating a first embedding vector based on content within a first data set, wherein the first data set includes at least a first user credential; generating a second embedding vector based on content within a second data set, wherein the second data set includes at least a second user credential; based on a level of correlation between the first embedding vector and the second embedding vector exceeding a prescribed value, identifying a user is associated with both the first user credential and the second user credential; and training an Artificial Intelligence (AI) model using both the content within the first data set and the content within the second data set to improve detection of cyber threats associated with data pertaining to the user.

The claim requires the steps of generating a first embedding vector and generating a second embedding vector based on data sets. These steps broadly describe the organization and mathematical transformation of data into vectors. The claim also recites identifying that a user is associated with credentials based on a level of correlation between the vectors, which requires mathematical calculations to compare the vectors and determine a numerical score. Therefore, if a claim limitation, under its broadest reasonable interpretation, covers mathematical computations but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Furthermore, the step of identifying a user based on a level of correlation would fall under mental observations and evaluations that can practically be performed in the human mind or by a human using pen and paper. For example, the claim encompasses a person observing two pieces of credential data, comparing them, and evaluating with judgment whether they belong to the same person, as at establishments that require two forms of ID to verify identity. Therefore, if a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

As the claim recites both mathematical concepts and mental processes, it is determined that the claim recites multiple abstract ideas; because MPEP 2106.04 requires that a claim not be parsed into multiple exceptions, the limitations together are considered a single abstract idea. (Step 2A, Prong One: YES).

Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). This judicial exception is not integrated into a practical application. In particular, the claim recites only the additional elements of a cybersecurity system using one or more Large Language Models (LLMs) and training an Artificial Intelligence (AI) model.
The limitations “receiving a packet” and “sending the packet” are mere data gathering and outputting and are recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”). In addition, all uses of the recited judicial exceptions require such data gathering and post-solution activity, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05.

The limitations directed to the elements of a cybersecurity system and an LLM are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. They are not specialized components, nor do they reflect any improvement to the functioning of the environment. Furthermore, while the background of the specification discloses that, because the categorization of the unstructured data is based on human activity, the transformation of the unstructured third-party data into structured data cannot be performed quickly or reliably (as human analysts may, at times, fail to collect salient or meaningful data for a specific customer that would improve cyber threat detection and/or assist in the creation of new detection tools and/or AI detection models for that customer), the claim only recites the outcome of training an Artificial Intelligence (AI) model to improve detection of cyber threats associated with data pertaining to the user. The claim lacks any steps of actual use of the AI model to detect and remediate cyber threats; instead it automates something previously done manually by humans, which is abstract and lacks a technical improvement to the network, and only indicates a field of use or technological environment in which the judicial exception is performed. See MPEP 2106.05(h), which states “... that this type of limitation merely confines the use of the abstract idea to a particular technological environment and thus fails to add an inventive concept to the claims.” No technical improvement or transformation of data is disclosed, nor is any specific configuration of the hardware or specialized hardware claimed. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements described above amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. (Step 2A: YES).

Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. One way to determine integration into a practical application is when the claimed invention improves the functioning of a computer or improves another technology or technical field. To evaluate an improvement to a computer or technical field, the specification must set forth an improvement in technology and the claim itself must reflect the disclosed improvement. See MPEP 2106.04(d)(1) and 2106.05(a).
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the monitoring, determining, and performing-an-action steps were considered to be a mental process in Step 2A, and thus are re-evaluated in Step 2B to determine whether they are more than well-understood, routine, conventional activity in the field. As discussed in Step 2A, Prong Two, the only additional elements beyond the abstract idea are the storage medium and system, which are generic and conventional. The additional elements of the cybersecurity system, LLM, and AI model were found to be insignificant extra-solution activity in Step 2A, Prong Two and are recited at a high level of generality. These elements amount to receiving or transmitting data over a network and are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The additional elements described above amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. (Step 2B: NO). Therefore, claim 1 is directed to non-statutory subject matter.

Additionally, claims 8 and 13 are rejected for at least the reasons mentioned above. The dependent claims 2-7 and 14-17 (e.g., receiving, source, analytics, data sanitization, data generation) are likewise rejected, as they do not recite additional elements that amount to significantly more than the judicial exception; they are directed only to further limitations of mathematical formulas, mental evaluations, and longstanding conventional human activities, and merely provide details on how to perform the abstract idea, define the rules, provide further data gathering, or recite mental evaluations.
Considered individually or in combination with claim 1, these claims lack an inventive concept and merely apply an abstract idea using well-understood, routine, conventional activity.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: “A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.”

Claims 1-3, 5, 7, 8, 13, 14, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over the NPL reference “DeepLink: A Deep Learning Approach for User Identity Linkage” to Zhou et al. (hereinafter “Zhou”) in view of US 20240330446 to BULUT et al. (hereinafter “Bulut”).

Claim 1: Zhou teaches a method for enhancing detection of cyber threats by a cybersecurity system [e.g., Zhou; Section I, Introduction – Zhou discloses identity linkage being used in malicious account detection for cyber security]; generating a first embedding vector based on content within a first data set, wherein the first data set includes at least a first user credential [e.g., Zhou; Section I, Introduction; Section III, Preliminary Background; Section IV, DeepLink: The Proposed Model, subsections A (Network Structure Sampling) through E (Discussion) – Zhou discloses that features can be embedded into a vector (e.g., first embedding vector) from social network graphs of different social networks (e.g., content within first, second, ... nth data sets), and that features from a user such as a user name (e.g., first user credential) are embedded into a vector]; generating a second embedding vector based on content within a second data set, wherein the second data set includes at least a second user credential [e.g., Zhou; same sections – features can be embedded into a vector (e.g., second embedding vector) from social network graphs of different social networks, and features from a user such as a user name (e.g., second user credential) are embedded into a vector]; and based on a level of correlation between the first embedding vector and the second embedding vector exceeding a prescribed value, identifying a user is associated with both the first user credential and the second user credential [e.g., Zhou; same sections – Zhou discloses measuring similarity between vectors to predict whether pairs of user identities belong to the same person].

While Zhou teaches linking credentials of a user across various networks for use in cyber security systems, Zhou fails to explicitly teach the use of LLMs and training an AI model based on the linked data. However, Bulut teaches the use of LLMs and training an Artificial Intelligence (AI) model using both the content within the first data set and the content within the second data set to improve detection of cyber threats associated with data pertaining to the user [e.g., Bulut; Abstract, Para. 0003, 0017, 0019, 0024, 0036, 0045, 0051, 0056, 0064, 0081, 0083, 0084 – Bulut discloses a system that utilizes a security-specific large language model (LLM) for improving the performance and energy efficiency of machine learning systems that generate security-specific machine learning models, generate security-related information, and train machine learning models for anomaly detection]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by Zhou, with the advantage of improving the performance and energy efficiency of machine learning systems as disclosed in the abstract of Bulut.

Claim 2: Zhou teaches the method of claim 1, wherein prior to generating the first embedding vector and the second embedding vector, the method further comprises: receiving data from external sources, wherein the received data comprises at least (i) the first data set including the first user credential followed by the second data set including the second user credential [e.g., Zhou; same sections – Zhou discloses receiving user names from a social network] [e.g., Bulut; Abstract, Para. 0003, 0017, 0019, 0024, 0036, 0045, 0051, 0056, 0064, 0081, 0083, 0084 – Bulut discloses usernames being on a threat intelligence document].

Claim 3: Zhou teaches the method of claim 2, wherein the external source comprises open-source cyber threat intelligence, social media information, and news website information [e.g., Zhou; same sections – Zhou discloses that the external sources comprise social media networks such as Facebook, Twitter, etc. (e.g., social media and news website information)] [e.g., Bulut; Abstract, Para. 0003, 0017, 0019, 0024, 0036, 0045, 0051, 0056, 0064, 0081, 0083, 0084 – Bulut discloses cyber threat intelligence].

Claim 6: Seshardi teaches the method of claim 1, wherein the second application is being executed on the second computing device [e.g., Seshardi; Claim 13, Abstract, Para. 0015-0017, 0057 – Seshardi discloses applications are executed on computing resources in cloud domains].

Claim 7: Zhou as modified by Bulut teaches the method of claim 1, further comprising: generating one or more structured elements, each operating as a complex filter for extracting salient data from the first data set and the second data set to facilitate integration of the content of the second data set to improve or expand cyber threat detection functionality [e.g., Bulut; Abstract, Para. 0003, 0017, 0019, 0024, 0034, 0035, 0036, 0045, 0051, 0056, 0064, 0081, 0083, 0084 – Bulut discloses that a generic parser may be used to create unique templates for the one or more log lines, and each unique template may correspond with a unique event ID (e.g., generating structured elements operating as complex filters for extracting salient data)].

Regarding claims 8, 13, 14, and 17: they are manufacture claims essentially corresponding to the above recitations, and they are rejected, at least, for the same reasons.

Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Bulut, and further in view of US 20170230391 to FERGUSON et al. (hereinafter “Ferguson”).

Claim 4: While the combination teaches the method of claim 2, the combination fails to explicitly teach, but Ferguson teaches: wherein the AI model is configured to conduct analytics on the received data and pattern-of-life data associated with the user that increases a likelihood of identifying cyber threats associated with the received data based on detection of anomalous behaviors by the user [e.g., Ferguson; Abstract, Para. 0065, 0179, 0184 – Ferguson discloses an anomalous behavior detection system that takes all the information available relating to an employee and establishes a ‘pattern of life’ for that person, which is dynamically updated as more information is gathered; the ‘normal’ model is used as a moving benchmark, allowing the system to spot behavior that seems to fall outside of this normal pattern of life and flag it as anomalous, requiring further investigation]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by the combination in order to improve the ability to detect attacks, as disclosed in Para. 0067 of Ferguson.

Regarding claim 15: it is a manufacture claim essentially corresponding to the above recitations, and it is rejected, at least, for the same reasons.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Bulut, and further in view of US 20220398055 to GUSTIN et al. (hereinafter “Gustin”).

Claim 6: While the combination teaches the method of claim 2, the combination fails to explicitly teach, but Gustin teaches: removing sensitive information including personally identifiable information (PII) data from the first data set prior to generating the first embedding vector, and removing sensitive information including PII data from the second data set prior to generating the second embedding vector [e.g., Gustin; Abstract, Para. 0017 – Gustin discloses preprocessing data to remove PII of a user for use in an AI model]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the above limitations in the invention as disclosed by the combination in order to provide improvements to computing devices in the field of security, as it allows for predictions or determinations without PII of the individual, as disclosed in Para. 0017 of Gustin.

Regarding claim 16: it is a manufacture claim essentially corresponding to the above recitations, and it is rejected, at least, for the same reasons.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER C HARRIS, whose telephone number is (571) 270-7841. The examiner can normally be reached Monday through Friday between 8:00 AM and 4:00 PM CST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey L Nickerson, can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/CHRISTOPHER C HARRIS/
Primary Examiner, Art Unit 2432
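The correlation step recited in claim 1 (generate an embedding vector for each credential, compare the two vectors against a prescribed value, and link both credentials to one user when that value is exceeded) can be sketched as follows. This is an illustrative reconstruction, not the applicant's disclosed implementation: the cosine-similarity metric, the 0.8 threshold, and the toy vectors are all assumptions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Level of correlation between two embedding vectors (cosine of their angle)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_user(vec1: list[float], vec2: list[float], threshold: float = 0.8) -> bool:
    """Claim-1-style test: link the two credentials to one user when the
    correlation between their embeddings exceeds the prescribed value."""
    return cosine_similarity(vec1, vec2) > threshold

# Toy embeddings for two credentials from different platforms (assumed values).
emb_first = [0.9, 0.1, 0.4]
emb_second = [0.85, 0.15, 0.45]
linked = same_user(emb_first, emb_second)
```

On this toy data the similarity comfortably exceeds the threshold, so the two credentials would be attributed to one user and both data sets would feed the downstream AI-model training step the claim recites.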

Prosecution Timeline

May 30, 2024: Application Filed
Mar 07, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602467: In-memory scan for threat detection with binary instrumentation backed generic unpacking, decryption, and deobfuscation (2y 5m to grant; granted Apr 14, 2026)
Patent 12585746: Authentication system, user device, and key information transmission method (2y 5m to grant; granted Mar 24, 2026)
Patent 12580915: Service access method and apparatus (2y 5m to grant; granted Mar 17, 2026)
Patent 12572668: Data security using request-supplied keys (2y 5m to grant; granted Mar 10, 2026)
Patent 12561460: System and method for performing security analyses of digital assets (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+26.2%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 362 resolved cases by this examiner. Grant probability derived from career allow rate.
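The projection figures can be cross-checked against the raw career data above. A minimal sketch, assuming the grant probability is simply grants over resolved cases and the interview lift is a percentage-point gap between interviewed and non-interviewed cases (both are assumptions; the page does not disclose its model):

```python
granted, resolved = 275, 362              # examiner's career record
allow_rate = granted / resolved           # ~0.76, the headline grant probability

with_interview = 0.99                     # stated allow rate with an interview
interview_lift = 0.262                    # stated lift, +26.2 points
without_interview = with_interview - interview_lift   # implied ~0.728

# Implied share of resolved cases that involved an interview, from the blend:
#   with_interview * f + without_interview * (1 - f) = allow_rate
f = (allow_rate - without_interview) / (with_interview - without_interview)
```

Under these assumptions only about 12% of the examiner's resolved cases involved an interview, which is consistent with a large stated lift moving the blended rate only a few points.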
