Prosecution Insights
Last updated: April 19, 2026
Application No. 18/744,434

SMART DISPATCHER IN A COMPOSITE ARTIFICIAL INTELLIGENCE (AI) SYSTEM

Non-Final OA: §101, §102, §103
Filed
Jun 14, 2024
Examiner
SHAIKH, ZEESHAN MAHMOOD
Art Unit
2658
Tech Center
2600 — Communications
Assignee
Intuit Inc.
OA Round
1 (Non-Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 52% (16 granted / 31 resolved; -10.4% vs TC avg)
Interview Lift: strong, +55.0% across resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 32 currently pending
Career History: 63 total applications across all art units

Statute-Specific Performance

§101: 25.7% (-14.3% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 31 resolved cases.

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claims 1 and 8 recite “generating a user request from a user utterance submitted by a user”, “classifying a user intent from the user request and a context of the user utterance”, “determining to send the user request to one of a first AI model or a second AI model based on a determination that the user intent is fulfillable by one of the first AI model or the second AI model”, “generating a first response by one of the first AI model or the second AI model based on the determination”, and “transmitting the first response to the user”.

The limitation of generating a user request from an utterance, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “a memory” and “a processor”, nothing in the claim precludes the step from practically being performed in the mind. For example, “generating” in the context of this claim encompasses understanding speech, which a human can do in the mind.

Next, the limitation of classifying intent, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting the components listed above, nothing in the claim precludes the step from practically being performed in the mind. For example, “classifying” in the context of this claim encompasses analyzing speech, which a human can do in the mind.

Next, the limitation of determining where to send a user request, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting the components listed above, nothing in the claim precludes the step from practically being performed in the mind. For example, “determining” in the context of this claim encompasses categorizing information, which a human can do in the mind.

Next, the limitation of generating a response, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting generic models, nothing in the claim precludes the step from practically being performed in the mind. For example, “generating” in the context of this claim encompasses producing a response to a question, which a human can do in the mind or with a pen and paper.

Lastly, the limitation of transmitting a response, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting the components listed above, nothing in the claim precludes the step from practically being performed in the mind. For example, “transmitting” in the context of this claim encompasses sending data, which a human can do in the mind or with a pen and paper.

The judicial exception is not integrated into a practical application. In particular, the claim recites only the additional elements of using “a memory” and “a processor” to perform the recited limitations. These elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of using “a memory” and “a processor” to perform the generating steps amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.

Dependent claims 2-7 and 9-14 are also rejected for the same reasons provided for independent claims 1 and 8 above. The dependent claims, including their further recited limitations, do not integrate the abstract idea into a practical application, and the additional elements, taken individually and in combination, do not contribute to an inventive concept. In other words, the dependent claims are directed to an abstract idea without significantly more.
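The routing flow recited in claims 1 and 8 (generate a request from an utterance, classify the intent, dispatch to one of two models, respond) can be sketched as follows. This is a minimal illustration only: the function names, intents, and keyword-based classifier are all invented assumptions, not the application's actual implementation.

```python
# Hypothetical sketch of the dispatch flow recited in claims 1 and 8.
# All names, intents, and the keyword classifier are illustrative
# assumptions, not the application's implementation.

# Stand-in for the "master intent list" of claims 2 and 9: intents the
# first (deterministic) AI model can fulfill with curated responses.
CURATED_RESPONSES = {
    "check_refund_status": "Your refund is being processed.",
    "update_address": "You can update your address under Settings.",
}

def classify_intent(user_request: str, context: dict) -> str:
    """Toy intent classifier (keyword matching in place of a real model)."""
    text = user_request.lower()
    if "refund" in text:
        return "check_refund_status"
    if "address" in text:
        return "update_address"
    return "open_ended_question"

def first_ai_model(intent: str) -> str:
    """Deterministic model: look up a human-curated response."""
    return CURATED_RESPONSES[intent]

def second_ai_model(user_request: str) -> str:
    """Stand-in for a generative model."""
    return f"[generated answer to: {user_request!r}]"

def dispatch(user_utterance: str, context: dict) -> str:
    user_request = user_utterance.strip()            # "generating a user request"
    intent = classify_intent(user_request, context)  # "classifying a user intent"
    if intent in CURATED_RESPONSES:                  # "determining to send ..."
        response = first_ai_model(intent)
    else:
        response = second_ai_model(user_request)
    return response                                  # "transmitting the first response"

print(dispatch("Where is my refund?", {}))
print(dispatch("Explain quarterly estimated taxes", {}))
```

Seen this way, the examiner's point is that each step (classify, look up, route) is the kind of categorization a human agent could perform mentally; the applicant's rebuttal would need to identify something in the claims beyond this generic routing loop.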
Independent claim 15 recites “generate a first response to a user intent using a set of human-curated responses”, “generate the first response to the user intent”, “selectively direct a user utterance to one of the deterministic AI model or the generative AI model based on a determination that the user intent is fulfillable by the deterministic AI model”, “identify the user intent based on the user utterance and context extracted from a user request”, “maintain a conversation list, the conversation tracker adding the user intent and follow-up intents to the conversation list, the follow-up intents representing probable responses to subsequent user utterances provided by a user in reaction to the first response”, and “receive the first response from the selected one of the deterministic AI model or the generative AI model and present the first response to the user”.

First, the limitation of generating a response, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting the components listed above, nothing in the claim precludes the step from practically being performed in the mind. For example, “generate” in the context of this claim encompasses responding to intents based on a set of rules, which a human can do in the mind or with a pen and paper.

Next, the limitation of generating a response, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting the components listed above, nothing in the claim precludes the step from practically being performed in the mind. For example, “generate” in the context of this claim encompasses responding to intents, which a human can do in the mind or with a pen and paper.

Next, the limitation of selectively directing user utterances, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting generic models, nothing in the claim precludes the step from practically being performed in the mind. For example, “direct” in the context of this claim encompasses classifying data, which a human can do in the mind.

Next, the limitation of identifying a user’s intent from an utterance, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting the components listed above, nothing in the claim precludes the step from practically being performed in the mind. For example, “identify” in the context of this claim encompasses analyzing data, which a human can do in the mind.

Next, the limitation of maintaining a list, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting the components listed above, nothing in the claim precludes the step from practically being performed in the mind. For example, “maintain” in the context of this claim encompasses managing data, which a human can do in the mind.

Lastly, the limitation of receiving a response, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting generic models, nothing in the claim precludes the step from practically being performed in the mind. For example, “receive” in the context of this claim encompasses receiving data, which a human can do in the mind or with a pen and paper.

The judicial exception is not integrated into a practical application. In particular, the claim recites only the additional element of using generic models to perform the recited limitations. This element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using generic models to perform the recited limitations amounts to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.

Dependent claims 16-20 are also rejected for the same reasons provided for independent claim 15 above. The dependent claims, including their further recited limitations, do not integrate the abstract idea into a practical application, and the additional elements, taken individually and in combination, do not contribute to an inventive concept. In other words, the dependent claims are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7-10, and 14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bhathena et al., US 20240282296 A1 (hereinafter Bhathena).

Regarding independent claims 1 and 8, Bhathena teaches a method for implementing a composite artificial intelligence (AI) system, comprising / a processing system, comprising: a memory comprising computer-executable instructions (FIG. 1, 106); and a processor configured to execute the computer-executable instructions and cause the processing system to (FIG. 1, 104): generating a user request from a user utterance submitted by a user (FIG. 4, S402; [0074] “the utterance includes a user query that is expressed by text using natural language in a conversational mode”); classifying a user intent from the user request and a context of the user utterance (FIG. 4, S404; FIG. 3, 302); determining to send the user request to one of a first AI model or a second AI model based on a determination that the user intent is fulfillable by one of the first AI model or the second AI model (FIG. 4, S406; [0086] “one or more domain classification models will be triggered to recognize the higher level domain(s) to which that utterance might belong”); generating a first response by one of the first AI model or the second AI model based on the determination (FIG. 6; [0089] “A knowledge base of FAQs and their corresponding answers has been curated”); and transmitting the first response to the user ([0086] “This Question/Answer pair either answers the user's question completely or at least lets the user know some more incrementally relevant information”).

Regarding claims 2 and 9, Bhathena teaches all of the limitations of claims 1 and 8, upon which claims 2 and 9 depend. Additionally, Bhathena teaches wherein determining to send the user request to one of the first AI model or the second AI model further comprises comparing the user intent against a master intent list, the master intent list including a list of intents for which a set of human-curated responses are available through the first AI model ([0078] “the outputting may include displaying, to the user, a predetermined list of items that corresponds to possible intentions associated with the domain(s) to which the utterance has been assigned”; the examiner interprets the predetermined list to be human-curated).

Regarding claims 3 and 10, Bhathena teaches all of the limitations of claims 2 and 9, upon which claims 3 and 10 depend. Additionally, Bhathena teaches wherein the user request is generated based on at least the user utterance, customer information, and experience information, where the customer information and the experience information provide the context of the user utterance ([0080] “the user experience would be improved if the VA could respond with a more targeted response, at least showing that the VA understands the general idea of the query, but needs more information to drill down to the exact user intent”).

Regarding claims 7 and 14, Bhathena teaches all of the limitations of claims 1 and 10, upon which claims 7 and 14 depend. Additionally, Bhathena teaches wherein the first AI model is a natural language understanding (NLU) model using human-curated responses, and the second AI model is a generative AI model referencing a predefined set of information ([0081] “the present inventive concept is designed to provide a conditional bottom-up hierarchical NLU model which can always provide some form of an answer to a user's original question, either in the form of an in-domain frequently asked question (FAQ) or explaining the possible intents to the user within the relevant domain”; [0017] “an artificial intelligence (AI) model that is configured to assign the received utterance to at least one domain from among a predetermined plurality of domains”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4-6, 11-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bhathena in view of Rodriguez Garcia et al., US 20250307564 A1 (hereinafter Rodriguez Garcia).

Regarding claims 4 and 11, Bhathena teaches all of the limitations of claims 3 and 10, upon which claims 4 and 11 depend. Bhathena fails to teach generating the first response using a natural language understanding (NLU) model as the first AI model, generating the first response including: assigning an intent identifier associated with at least one selected response from among the set of human-curated responses, the selected response corresponding to the user intent; adding the user intent to a conversation list; and adding follow-up intents to the conversation list.

However, Rodriguez Garcia teaches generating the first response using a natural language understanding (NLU) model as the first AI model, generating the first response including: assigning an intent identifier associated with at least one selected response from among the set of human-curated responses, the selected response corresponding to the user intent ([0096] “the utterance and the corresponding label is randomly selected from a training dataset, where the training dataset includes a plurality of utterance-intent pairs of a particular domain”; the examiner interprets the training dataset to be the human-curated responses); adding the user intent to a conversation list ([0074] “the processing logic may add any such identified new intents to the list of known intents”); and adding follow-up intents to the conversation list ([0074] “the processing logic may add any such identified new intents to the list of known intents”).

Bhathena and Rodriguez Garcia are considered analogous to the claimed invention because both are in the same field of speech processing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bhathena's techniques of performing hierarchical domain routing and intent classification on user queries (which improve accuracy in responding to such queries and create smoother conversations between virtual voice assistants and users) with the technique of modifying intents in a conversation list taught by Rodriguez Garcia, in order to improve the identification of intents in utterances and so update a known list of intents (see Rodriguez Garcia [0001]).

Regarding claims 5 and 12, Bhathena in view of Rodriguez Garcia teaches all of the limitations of claims 4 and 11, upon which claims 5 and 12 depend. Additionally, Bhathena teaches wherein the follow-up intents represent probable responses to utterances provided by a user in reaction to the first response ([0078] “the outputting may include displaying, to the user, a predetermined list of items that corresponds to possible intentions associated with the domain(s) to which the utterance has been assigned, together with a prompt that acts as an invitation to the user to provide a response by which one or more of the possible intentions is selected by the user”).

Regarding claims 6, 13, and 19, Bhathena in view of Rodriguez Garcia teaches all of the limitations of claims 4, 11, and 18, upon which claims 6, 13, and 19 depend.
Additionally, Rodriguez Garcia teaches applying response rules and response templates, by a dialog manager, to the set of human-curated responses ([0045] “the prompt 122 may be constructed by the prompt generator 106 based on a template that precisely defines its content and format”); and personalizing, by the dialog manager, the first response using the customer information ([0090] “one or more of the data centers 916 can be configured using a multi-instance cloud architecture to provide every customer with its own unique customer instance or instances”).

Regarding independent claim 15, Bhathena teaches a composite artificial intelligence (AI) system, comprising: a deterministic AI model configured to generate a first response to a user intent using a set of human-curated responses ([0078] “the outputting may include displaying, to the user, a predetermined list of items that corresponds to possible intentions associated with the domain(s) to which the utterance has been assigned”; the examiner interprets the predetermined list to be human-curated); a generative AI model configured to generate the first response to the user intent (FIG. 6; [0089] “A knowledge base of FAQs and their corresponding answers has been curated”); and a dispatcher configured to selectively direct a user utterance to one of the deterministic AI model or the generative AI model based on a determination that the user intent is fulfillable by the deterministic AI model (FIG. 4, S406; [0086] “one or more domain classification models will be triggered to recognize the higher level domain(s) to which that utterance might belong”), the dispatcher including: a classifier configured to identify the user intent based on the user utterance and context extracted from a user request (FIG. 4, S404; FIG. 3, 302), and a responder configured to receive the first response from the selected one of the deterministic AI model or the generative AI model and present the first response to the user (FIG. 6; [0089] “A knowledge base of FAQs and their corresponding answers has been curated”; [0086] “This Question/Answer pair either answers the user's question completely or at least lets the user know some more incrementally relevant information”).

Bhathena fails to teach a conversation tracker configured to maintain a conversation list, the conversation tracker adding the user intent and follow-up intents to the conversation list, the follow-up intents representing probable responses to subsequent user utterances provided by a user in reaction to the first response. However, Rodriguez Garcia teaches a conversation tracker configured to maintain a conversation list, the conversation tracker adding the user intent and follow-up intents to the conversation list, the follow-up intents representing probable responses to subsequent user utterances provided by a user in reaction to the first response ([0005] “the operations may further comprise determining that a particular intent in the list of predicted intents is not in the list of known intents; and updating the list of known intents with the particular intent”; [0074] “the processing logic may add any such identified new intents to the list of known intents”).

Bhathena and Rodriguez Garcia are considered analogous to the claimed invention because both are in the same field of speech processing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bhathena's techniques of performing hierarchical domain routing and intent classification on user queries (which improve accuracy in responding to such queries and create smoother conversations between virtual voice assistants and users) with the technique of modifying intents in a conversation list taught by Rodriguez Garcia, in order to improve the identification of intents in utterances and so update a known list of intents (see Rodriguez Garcia [0001]).
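The "conversation tracker" that the examiner maps to Rodriguez Garcia's known-intents list can be sketched as a small class. The follow-up mapping and all names below are invented for illustration; they are not drawn from the application or either reference.

```python
# Hypothetical sketch of the claim 15 "conversation tracker": it maintains
# a conversation list, adding the identified user intent plus follow-up
# intents (probable user reactions to the first response). The follow-up
# mapping is an invented example, not from the application.

FOLLOW_UP_INTENTS = {
    "check_refund_status": ["ask_refund_timeline", "dispute_refund_amount"],
}

class ConversationTracker:
    def __init__(self) -> None:
        self.conversation_list: list[str] = []

    def track(self, user_intent: str) -> None:
        # Add the current intent, then its probable follow-up intents.
        self.conversation_list.append(user_intent)
        self.conversation_list.extend(FOLLOW_UP_INTENTS.get(user_intent, []))

    def update(self, subsequent_intent: str) -> None:
        # Claim 20's refinement: update the list based on subsequent
        # user utterances received in reaction to the first response.
        if subsequent_intent not in self.conversation_list:
            self.conversation_list.append(subsequent_intent)

tracker = ConversationTracker()
tracker.track("check_refund_status")
print(tracker.conversation_list)
# → ['check_refund_status', 'ask_refund_timeline', 'dispute_refund_amount']
```

The distinction the applicant may press on is that the claim pre-populates *probable* follow-up intents, whereas the cited [0074] passage adds intents only after they are observed in an utterance.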
Regarding claim 16, Bhathena in view of Rodriguez Garcia teaches all of the limitations of claim 15, upon which claim 16 depends. Additionally, Bhathena teaches wherein determining to send the user request to one of the first AI model or the second AI model further comprises comparing the user intent against a master intent list, the master intent list including a list of intents for which a set of human-curated responses are available through the first AI model ([0078] “the outputting may include displaying, to the user, a predetermined list of items that corresponds to possible intentions associated with the domain(s) to which the utterance has been assigned”; the examiner interprets the predetermined list to be human-curated).

Regarding claim 17, Bhathena in view of Rodriguez Garcia teaches all of the limitations of claim 16, upon which claim 17 depends. Additionally, Bhathena teaches wherein a failure of the comparator to match the user intent to an intent in the master intent list causes the dispatcher to direct the user utterance to the generative AI model, the generative AI model being a large language model (LLM) ([0089] “Following the classification process by both intent and domain models, utterances that are identified as being out-of-scope are directed to an LLM-powered conversational QA system”).

Regarding claim 18, Bhathena in view of Rodriguez Garcia teaches all of the limitations of claim 15, upon which claim 18 depends. Additionally, Bhathena teaches wherein the user request is generated based on at least the user utterance, customer information, and experience information, where the customer information and the experience information provide the context of the user utterance ([0080] “the user experience would be improved if the VA could respond with a more targeted response, at least showing that the VA understands the general idea of the query, but needs more information to drill down to the exact user intent”).
Regarding claim 20, Bhathena in view of Rodriguez Garcia teaches all of the limitations of claim 15, upon which claim 20 depends. Additionally, Rodriguez Garcia teaches wherein the conversation tracker is further configured to update the conversation list based on subsequent user utterances received in reaction to the first response ([0005] “the operations may further comprise determining that a particular intent in the list of predicted intents is not in the list of known intents; and updating the list of known intents with the particular intent”).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Ding et al. (US 20210200956 A1) teaches a method and an apparatus for processing questions and answers, an electronic device, and a storage medium. The implementation solution includes: in a process of determining an answer to a question to be answered, determining the semantic representation of the question to be answered respectively with a first semantic representation model of the question and a second semantic representation model of the question, splicing the semantic representation vectors obtained through the two models, determining the spliced semantic vector as the semantic representation vector of the question to be answered, acquiring an answer semantic vector matching that vector from a vector index library of answers, and determining the answer corresponding to the answer semantic vector as the target answer to the question to be answered.

Brigham et al. (US 20200143115 A1) teaches systems and methods for parsing a message in a conversation series. This involves receiving a message, isolating the current exchange, dividing it into sentences, and detecting the language being used. The message sentences are normalized, and any ‘speech acts’ are identified. Likewise, any ‘critical intents’ are identified. If there is no critical intent, the classification text is provided to sets of models for parallel prediction of the intent(s) of the message. Models are queried based upon the series of the conversation, the industry involved, the client the model is for, the message campaign, and any speech acts present. Mapping rules and/or prediction machine learning models are used to convert the intents into meanings, which are filtered. It is also possible to apply a decision engine policy for the determination of the meaning. This is followed by entity extraction and response generation by mapping meanings to actions.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZEESHAN SHAIKH, whose telephone number is (703) 756-1730. The examiner can normally be reached Monday-Friday, 7:30 AM-5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZEESHAN MAHMOOD SHAIKH/
Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658

Prosecution Timeline

Jun 14, 2024
Application Filed
Mar 01, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579373: SYSTEM AND METHOD FOR SYNTHETIC TEXT GENERATION TO SOLVE CLASS IMBALANCE IN COMPLAINT IDENTIFICATION
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12555575: Wakeup Indicator Monitoring Method, Apparatus and Electronic Device
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12518090: LOGICAL ROLE DETERMINATION OF CLAUSES IN CONDITIONAL CONSTRUCTIONS OF NATURAL LANGUAGE
Granted Jan 06, 2026 (2y 5m to grant)
Patent 12511318: MULTI-SYSTEM-BASED INTELLIGENT QUESTION ANSWERING METHOD AND APPARATUS, AND DEVICE
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12512088: METHOD AND SYSTEM FOR USER-INTERFACE ADAPTATION OF TEXT-TO-SPEECH SYNTHESIS
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 52% (99% with interview, +55.0%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
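As a quick sanity check, the headline 52% follows directly from the counts reported above (16 granted of 31 resolved). Treating the -10.4% delta as percentage points is an assumption about how the dashboard computes it:

```python
# Check of this page's headline figures from its reported counts.
granted, resolved = 16, 31
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")   # → Career allow rate: 52%

# Assumption: the -10.4% "vs TC avg" delta is in percentage points,
# which would imply a Tech Center average near 62%.
tc_average = allow_rate + 0.104
print(f"Implied TC average: {tc_average:.0%}")  # → Implied TC average: 62%
```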
