Prosecution Insights
Last updated: April 18, 2026
Application No. 18/388,532

Title: GENERATED AUTOMATED AGENT INSTRUCTIONS FROM USER TRAINING MATERIALS

Non-Final OA (§101, §103)
Filed: Nov 10, 2023
Examiner: MEIS, JON CHRISTOPHER
Art Unit: 2654
Tech Center: 2600 — Communications
Assignee: Scaled Cognition Inc.
OA Round: 1 (Non-Final)
Grant Probability: 46% (Moderate)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 46% (10 granted / 22 resolved; -16.5% vs TC avg)
Interview Lift: +59.0% on resolved cases with an interview (strong)
Avg Prosecution: 3y 0m typical timeline; 30 currently pending
Total Applications: 52 across all art units

Statute-Specific Performance

§101: 24.9% (-15.1% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 22 resolved cases

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-24 are pending. Claims 1, 9, and 17 are independent. This Application was published as US 20250117579. Apparent priority is 4 October 2023. The instant Application is directed to a method of prompt engineering to generate instructions from training manuals.

Claim Objections

Applicant is advised that should claim 8 be found allowable, claim 24 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Step 1: The independent Claims are directed to statutory categories: Claim 1 is a method claim and directed to the process category of patentable subject matter. Claim 9 is a non-transitory computer-readable storage medium claim and is directed to the machine or manufacture category of patentable subject matter. Claim 17 is a system claim and directed to the machine or manufacture category of patentable subject matter.
Step 2A, Prong One: Does the Claim recite a Judicially Recognized Exception? Abstract Idea?

Are these Claims nevertheless considered Abstract as a Mathematical Concept (mathematical relationships, mathematical formulas or equations, mathematical calculations), a Mental Process (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion), or Certain Methods of Organizing Human Activity (1 - fundamental economic principles or practices, including hedging, insurance, and mitigating risk; 2 - commercial or legal interactions, including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations; 3 - managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions), and do they fall under the judicial exception to patentable subject matter? The rejected Claims recite Mental Processes.

Step 2A, Prong Two: Additional Elements that Integrate the Judicial Exception into a Practical Application?

This step involves identifying whether there are any additional elements recited in the claim beyond the judicial exception(s), and evaluating those additional elements to determine whether they integrate the exception into a practical application of the exception. "Integration into a practical application" requires an additional element(s) or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. The Examiner uses the considerations laid out by the Supreme Court and the Federal Circuit to evaluate whether the judicial exception is integrated into a practical application.
The rejected Claims do not include additional limitations that point to integration of the abstract idea into a practical application and are therefore directed to a Mental Process. Claim 1 is a generic automation of a mental process because a human agent can type a prompt to an LLM to summarize training materials. Prong Two of step 2A in the 101 analysis asks whether the abstract idea is integrated with a practical application. The answer is no in this instance because there is no technological solution in the Claim that "integrates" the abstract idea. The Claim only suggests that the abstract idea be applied. It does not describe an application.

1. A method for creating automated agent instructions from training materials, comprising:
accessing a plurality of training materials having at least two different formats; [Agent accesses training manuals on paper and in a text document.]
processing a first training material of the plurality of training materials to convert the first training material into a form suitable for processing by a machine learning model; [Agent transcribes the paper manual.]
providing a prompt to the machine learning model, [Agent types a prompt for ChatGPT.]
the prompt specifying a role, [Agent includes a role of Instructor.]
at least one of the plurality of training materials, and [Agent copies the manuals into the prompt.]
prompt instructions to the machine learning model to generate instructions from the at least one of the training materials; [Agent adds instructions to the LLM to summarize the manuals.]
receiving an output from the machine learning model in response to the prompt, [Agent reads the reply.]
the output including a set of generated instructions derived from the training materials contained in the prompt; and [ChatGPT output the correct instructions.]
storing the set of generated instructions output by the machine learning model in a vector database. [Agent copies the reply into a vector database.]
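For readers less familiar with the claimed pipeline, the steps of claim 1 can be sketched in a few lines of Python. This is a minimal, hypothetical illustration only: the stubbed model call, the function names, and the toy "vector database" list are all invented for this sketch — the claim itself recites no particular model, prompt format, or database product.

```python
# Hypothetical sketch of the claim 1 pipeline: access materials in two
# formats, convert one to model-ready text, prompt a model with a role
# plus instructions, and store the generated instructions.

def convert_to_text(material: dict) -> str:
    """Convert a training material into model-ready text (e.g., transcribe a scan)."""
    if material["format"] == "scanned_pdf":
        return f"[transcribed] {material['content']}"
    return material["content"]

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned 'generated instructions'."""
    return "1. Greet the customer. 2. Verify the account."

def generate_agent_instructions(materials: list) -> str:
    texts = [convert_to_text(m) for m in materials]       # processing step
    prompt = (
        "Role: Instructor\n"                              # role
        + "\n".join(texts)                                # training materials
        + "\nGenerate step-by-step agent instructions."   # prompt instructions
    )
    return call_model(prompt)                             # receive output

vector_db = []  # toy stand-in for a vector database
materials = [
    {"format": "scanned_pdf", "content": "Support manual v1"},
    {"format": "text", "content": "Escalation policy FAQ"},
]
instructions = generate_agent_instructions(materials)
vector_db.append(instructions)                            # storing step
```

The Office Action's point is visible in the sketch: the model is passive (a single opaque call), and the storage step is ordinary data outputting.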
Step 2B: Search for Inventive Concept: Additional Elements do not amount to Significantly More

The limitations of "machine learning model" and "vector database" are well-understood, routine, and conventional machine components that are being used for their well-understood, routine, conventional, and rather generic functions. Additionally, these limitations are expressed parenthetically and lack nexus to the Claim language and as such are a separable and divisible mention of a machine. The machine learning model is passive in the claims, and the processing done by the model is not explained or claimed. The storing of the instructions in a vector database amounts to necessary data outputting. Accordingly, they are not sufficient to cause the Claim to amount to significantly more than the underlying abstract idea.

The Dependent Claims do not add limitations that could help the Claim as a whole to amount to significantly more than the Abstract idea identified for the Independent Claim:

2. The method of claim 1, wherein the machine learning model includes a large language model. [ChatGPT is an LLM.]
3. The method of claim 1, wherein processing the at least one of the plurality of training materials includes at least one of converting audio to text, converting graphics to text, and removing sensitive information. [Agent converted the graphics to text. Agent can also convert audio to text and remove sensitive information.]
4. The method of claim 1, wherein providing a prompt includes: providing a first prompt to the machine learning model, the first prompt including a first portion of the first training material, the role, and the prompt instructions to the machine learning model; and providing a second prompt to the machine learning model, the second prompt including a second portion of the first training material, the role, and the prompt instructions to the machine learning model. [Agent can split the material and input separate prompts.]
5. The method of claim 1, wherein providing a prompt includes: providing a first prompt to the machine learning model, the first prompt including at least a first portion of the first training material, the role, and the prompt instructions to the machine learning model; and providing a second prompt to the machine learning model, the second prompt including at least a first portion of a second training material, the role, and the prompt instructions to the machine learning model. [Agent can provide the separate materials in separate prompts.]
6. The method of claim 1, wherein the training materials include at least two of a training manual, knowledge base documents, presentation slides, troubleshooting guides, discussion forums, messaging discussions, and texts from a video file or audio file. [Agent chooses a training manual and a knowledge base document.]
7. The method of claim 1, further comprising summarizing the output of the machine learning model for each type of training material. [Agent further summarizes the output for his manager.]
8. The method of claim 1, wherein the set of generated instructions is stored in a document in the vector database, the document including instructions generated by the machine learning model from at least two different formats of training materials. [Agent uses both documents and saves them.]

Claim 24 is a duplicate of claim 8 and provides no further limitations. The additional limitations introduced by the Dependent Claims are not sufficient as additional elements that integrate the judicial exception into a practical application or as additional elements that cause the Claim as a whole to amount to significantly more than the underlying abstract idea.
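The prompt-splitting recited in dependent claims 4 and 5 — portions of a training material sent in separate prompts, each repeating the role and the prompt instructions — can be sketched as follows. The chunk-size limit, constants, and function name here are hypothetical; the claims fix no particular size or format.

```python
# Illustrative sketch of claims 4-5: split one training material into
# portions and wrap each portion in its own prompt that repeats the
# same role and prompt instructions. All names and limits are invented.

ROLE = "Role: Instructor\n"
INSTRUCTIONS = "\nGenerate step-by-step agent instructions."

def build_prompts(material: str, max_chars: int = 40) -> list:
    """Split a material into portions and build one full prompt per portion."""
    chunks, current = [], ""
    for word in material.split():
        candidate = (current + " " + word).strip()
        if len(candidate) > max_chars and current:
            chunks.append(current)   # portion is full; start a new one
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    # Each portion becomes its own prompt with the same role and instructions.
    return [ROLE + chunk + INSTRUCTIONS for chunk in chunks]

prompts = build_prompts(
    "Step one open the ticket. Step two verify identity. Step three resolve."
)
```

With the toy 40-character limit, the material above yields two prompts, each carrying the shared role and instructions around a different portion of the text.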
With respect to Independent Claim 9 and Independent Claim 17, which have limitations similar to the limitations of Claim 1, the limitations of "computer readable storage medium" and "one or more servers, wherein each server includes a memory and a processor" are well-understood, routine, and conventional machine components and are expressed parenthetically and lack nexus to the Claim language and as such are a separable and divisible mention of a machine. Accordingly, they do not include additional limitations that cause the Claim as a whole to amount to more than the underlying abstract idea. The Dependent Claims 10-16 and 18-23 are similar to claims 2-8 and do not add limitations that could integrate the judicial exception into a practical application or help the Claim as a whole to amount to significantly more than the Abstract idea identified for the Independent Claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-3, 6, 8-11, 14, 16-19, 22 and 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Xia et al., "Towards autonomous system: flexible modular production system enhanced with large language model agents," in view of Gharibi et al. (US 20250103746 A1).

Regarding claim 1, Xia discloses:
1. A method for creating automated agent instructions from training materials, comprising:
accessing a plurality of training materials having at least two different formats; (Table 2 shows Context and Examples sections, which are training materials in different formats.)
processing a first training material of the plurality of training materials to convert the first training material into a form suitable for processing by a machine learning model; ("... However, as the knowledge representation in digital twins' software system and texts in natural language are different, the information from the digital twin model shall be converted in text form in natural language, e.g., with fill-in-the-template mechanism and concatenation of text strings. ... The converted information from digital twins offered in this context section serves two purposes: first, it enables the model to comprehend the production system's operations, incorporating additional information about the particular system. ..." pg. 4, para 6-7)
providing a prompt to the machine learning model, ("We create these agents by contextualizing a GPT-model with prompts." pg. 4, para 2)
the prompt specifying a role, ("Role and goal: You are a manager of a production system." Table 2)
at least one of the plurality of training materials, and (Context section is a training material - see Table 2)
prompt instructions to the machine learning model to generate instructions from the at least one of the training materials; ("Instructions: As a manager of this production system, please arrange a production process based on the input …" Table 2)
receiving an output from the machine learning model in response to the prompt, the output including a set of generated instructions derived from the training materials contained in the prompt; and ("The generated output by the agent: {(S1) – (T1) – (P2) – (T1) – (I3) – (T1) – (S2)} Explanation: (S1) retrieve the wood nameplate from storage module. (T1) transport the workpiece from storage module to painting module. …" Table 2)
storing the set of generated instructions output by the machine learning model in a vector database. ("In order to keep track of the dynamic effects on environment and to perform more informed decision, the agent need to maintain a memory of the data about the production, which could require an extra software component (e.g., a database) or new mechanism to merge the LLM agents into the digital twin system." pg. 7, para 10)

Xia does not explicitly disclose a vector database. Xia discloses that a database could be implemented so the LLM can maintain context on how the output affects the production system. Regarding claim 1, Gharibi discloses a vector database. ("[0074] Specifically, it utilizes machine learning models trained on the organization's repository of documents, images, and code to assign sensitivity scores to these assets. These scores serve as a benchmark for real-time monitoring of users' inputs to the LLM 114. By conducting semantic searches within a secured vector database of these assets, the system dynamically identifies potential data leakage risks and notifies the user, offering different levels of alerts based on the assessed sensitivity. Additional features include human-in-the-loop feedback mechanisms, context-based sensitivity adjustments, and continuous learning capabilities to adapt to the evolving nature of what is considered sensitive within the enterprise." - See also Fig. 1, "VECTOR DATABASE" 108)

Xia and Gharibi are considered analogous art to the claimed invention because they disclose methods of prompt engineering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Xia with a vector database as taught by Gharibi to store the output. This combination falls under simple substitution of one known element for another to obtain predictable results, or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Regarding claim 2, Xia discloses:
2. The method of claim 1, wherein the machine learning model includes a large language model. ("To tackle these challenges and requirements, we propose a novel solution: a large language model (LLM) enhanced automated modular production system for flexible manufacturing." pg. 1, para 4)

Regarding claim 3, Xia does not disclose the additional limitations. Gharibi discloses:
3. The method of claim 1, wherein processing the at least one of the plurality of training materials includes at least one of converting audio to text, converting graphics to text, and removing sensitive information. ("[0053] ... One can introduce a prompt engineering system that improves the user prompt by, among other things, connecting a user's prompt with relevant enterprise documents or a knowledge source 102. But then one needs the privacy monitor to process the user's augmented prompt (including the document excerpts) to remove sensitive information. This combined approach results in a strong prompt engineering system that makes the model's response relevant to enterprise data without the complex process of model finetuning, while minimally disclosing sensitive enterprise information, and while improving explainability and defending against hallucinations in the model response.")

Xia and Gharibi are considered analogous art to the claimed invention because they disclose methods of prompt engineering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified the system of Xia in view of Gharibi to remove sensitive information as taught by Gharibi. This would have been beneficial in order to avoid disclosing sensitive information. (Gharibi [0053])

Regarding claim 6, Xia discloses:
6. The method of claim 1, wherein the training materials include at least two of a training manual, knowledge base documents, presentation slides, troubleshooting guides, discussion forums, messaging discussions, and texts from a video file or audio file. ("As shown in table 2 and 3 in the Implementation Section, the knowledge should at least contain the objects description, the skills and functions of the objects and the mappings to the service interfaces of the executable operations." pg. 4, para 6 - this Context would be information found in a training manual. The Examples in Table 2 would be considered knowledge base documents.)

Regarding claim 8, Xia discloses:
8. The method of claim 1, wherein the set of generated instructions is stored in a document in the vector database, (see claim 1 for mapping) the document including instructions generated by the machine learning model from at least two different formats of training materials. (Table 2 shows that both Context and Examples (two formats) are used by the machine learning model to generate instructions.)
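The "vector database" concept at the center of the claim 1 combination — store text with an embedding, then retrieve by semantic similarity, as in Gharibi's semantic searches — can be illustrated with a toy sketch. The bag-of-words embedding, fixed vocabulary, and class names below are deliberate stand-ins invented for this illustration; a real system would use a learned embedding model and a purpose-built store.

```python
# Toy illustration of a vector database: store documents alongside an
# embedding vector and search by cosine similarity. The bag-of-words
# "embedding" over a tiny fixed vocabulary is a stand-in for a real model.
import math
from collections import Counter

VOCAB = ["refund", "password", "reset", "shipping", "invoice"]

def embed(text: str) -> list:
    """Map text to a vector of vocabulary word counts (toy embedding)."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.rows = []  # list of (embedding, document) pairs

    def add(self, doc: str):
        self.rows.append((embed(doc), doc))

    def search(self, query: str) -> str:
        """Return the stored document most similar to the query."""
        qe = embed(query)
        return max(self.rows, key=lambda row: cosine(row[0], qe))[1]

store = VectorStore()
store.add("To reset a password verify the user first.")
store.add("Refund requests require an invoice number.")
best = store.search("password reset steps")
```

The query retrieves the password document by similarity rather than exact match, which is what distinguishes a vector database from the generic database Xia mentions.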
Claim 9 is a non-transitory computer readable storage medium claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. Additionally, the non-transitory computer readable storage medium of the Claim is taught by Xia. ("Several software components are implemented to realize our system design." pg. 6, para 2)
Claim 10 is a non-transitory computer readable storage medium claim with limitations corresponding to the limitations of Claim 2 and is rejected under similar rationale.
Claim 11 is a non-transitory computer readable storage medium claim with limitations corresponding to the limitations of Claim 3 and is rejected under similar rationale.
Claim 14 is a non-transitory computer readable storage medium claim with limitations corresponding to the limitations of Claim 6 and is rejected under similar rationale.
Claim 16 is a non-transitory computer readable storage medium claim with limitations corresponding to the limitations of Claim 8 and is rejected under similar rationale.
Claim 17 is a system claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. Additionally, the "one or more servers, wherein each server includes a memory and a processor; and one or more modules" of the Claim are taught by Xia. ("Several software components are implemented to realize our system design." pg. 6, para 2 - one of ordinary skill in the art would understand that software could be run on a server, which would include memory and a processor. Fig. 9 also shows an MES system with an image of servers. Fig. 8 shows software modules.)
Claim 18 is a system claim with limitations corresponding to the limitations of Claim 2 and is rejected under similar rationale.
Claim 19 is a system claim with limitations corresponding to the limitations of Claim 3 and is rejected under similar rationale.
Claim 22 is a system claim with limitations corresponding to the limitations of Claim 6 and is rejected under similar rationale.
Claim 24 is a duplicate of claim 8 and is rejected under similar rationale.

Claim(s) 4-5, 7, 12-13, 15, 20-21, and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Xia in view of Gharibi, in further view of O'Kelly et al. (US 11861321 B1).

Regarding claim 4, Xia discloses:
4. The method of claim 1, wherein providing a prompt includes: providing a first prompt to the machine learning model, the first prompt including a first portion of the first training material, the role, and the prompt instructions to the machine learning model; (Table 2 - see claim 1 for exact mapping) and providing a second prompt to the machine learning model, the second prompt including a second portion of the first training material, the role, and the prompt instructions to the machine learning model. (Table 2 shows multiple context numbers, which can be considered separate portions. Xia does not explicitly disclose a second prompt.)

Xia does not explicitly disclose a second prompt for the same task. Gharibi discloses chunking ([0071]-[0072]) but not explicitly sending chunks in separate prompts. O'Kelly discloses:
providing a second prompt to the machine learning model, the second prompt including a second portion of the first training material, the role, and the prompt instructions to the machine learning model. (Figure 6 shows the process of chunking a text that is too large into multiple prompts. See also "In many workflows, an input document is divided into chunks via a chunking technique. Then, chunks are inserted into prompt templates for processing by a large language model such as the GPT-3 or GPT-4 available from OpenAI." Col 3; 52-56. See also Col 43-44, which show an example regular expression prompt template with instructions that are included in each prompt with different portions of the text.)

Xia, Gharibi, and O'Kelly are considered analogous art to the claimed invention because they disclose methods of prompt engineering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Xia in view of Gharibi to split the prompt into multiple prompts if needed. Doing so would have been beneficial to comply with one or more size limitations associated with the text. (O'Kelly Col 8; 35-40)

Regarding claim 5, Xia discloses:
5. The method of claim 1, wherein providing a prompt includes: providing a first prompt to the machine learning model, the first prompt including at least a first portion of the first training material, the role, and the prompt instructions to the machine learning model; (Table 2 - see claim 1 for exact mapping) and providing a second prompt to the machine learning model, the second prompt including at least a first portion of a second training material, the role, and the prompt instructions to the machine learning model. (Table 2 shows the Examples are also included in the prompt. Xia does not explicitly disclose a second prompt.)

Xia does not explicitly disclose a second prompt for the same task. Gharibi discloses chunking ([0071]-[0072]) but not explicitly sending chunks in separate prompts. O'Kelly discloses:
providing a second prompt to the machine learning model, the second prompt including at least a first portion of a second training material, the role, and the prompt instructions to the machine learning model. (Figure 6 shows the process of chunking a text that is too large into multiple prompts. 606 and 608 show that the text portions are separately processed to see if each will fit into the last chunk or if a new chunk is needed. The Context and Examples sections in Xia Table 2 are separate portions which would end up in separate chunks if the size is too large. See also "In many workflows, an input document is divided into chunks via a chunking technique. Then, chunks are inserted into prompt templates for processing by a large language model such as the GPT-3 or GPT-4 available from OpenAI." Col 3; 52-56. See also Col 43-44, which show an example regular expression prompt template with instructions that are included in each prompt with different portions of the text.)

Xia, Gharibi, and O'Kelly are considered analogous art to the claimed invention because they disclose methods of prompt engineering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Xia in view of Gharibi to split the prompt into multiple prompts based on the existing sections/portions if needed. Doing so would have been beneficial to comply with one or more size limitations associated with the text. (O'Kelly Col 8; 35-40)

Regarding claim 7, Xia and Gharibi do not disclose the additional limitations. O'Kelly discloses:
7. The method of claim 1, further comprising summarizing the output of the machine learning model for each type of training material. ("In some embodiments, the one or more parsed summary responses 920 may be used as input to generate a consolidated summary. For example, a consolidated summary may be generated if the aggregate size of the parsed summary responses 920 exceeds or falls below a designated threshold." Col 20; 62-66)

Xia, Gharibi, and O'Kelly are considered analogous art to the claimed invention because they disclose methods of prompt engineering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Xia in view of Gharibi to generate a consolidated summary. Doing so would have been beneficial to save the user time if the combined responses exceed a threshold. (O'Kelly Col 20; 62-66)

Claim 12 is a non-transitory computer readable storage medium claim with limitations corresponding to the limitations of Claim 4 and is rejected under similar rationale.
Claim 13 is a non-transitory computer readable storage medium claim with limitations corresponding to the limitations of Claim 5 and is rejected under similar rationale.
Claim 15 is a non-transitory computer readable storage medium claim with limitations corresponding to the limitations of Claim 7 and is rejected under similar rationale.
Claim 20 is a system claim with limitations corresponding to the limitations of Claim 4 and is rejected under similar rationale.
Claim 21 is a system claim with limitations corresponding to the limitations of Claim 5 and is rejected under similar rationale.
Claim 23 is a system claim with limitations corresponding to the limitations of Claim 7 and is rejected under similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JON C MEIS, whose telephone number is (703) 756-1566. The examiner can normally be reached Monday - Thursday, 8:30 am - 5:30 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hai Phan, can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JON CHRISTOPHER MEIS/
Examiner, Art Unit 2654

/HAI PHAN/
Supervisory Patent Examiner, Art Unit 2654

Prosecution Timeline

Nov 10, 2023
Application Filed
Aug 18, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603087
VOICE RECOGNITION USING ACCELEROMETERS FOR SENSING BONE CONDUCTION
2y 5m to grant • Granted Apr 14, 2026
Patent 12579975
Detecting Unintended Memorization in Language-Model-Fused ASR Systems
2y 5m to grant • Granted Mar 17, 2026
Patent 12482487
MULTI-SCALE SPEAKER DIARIZATION FOR CONVERSATIONAL AI SYSTEMS AND APPLICATIONS
2y 5m to grant • Granted Nov 25, 2025
Patent 12475312
FOREIGN LANGUAGE PHRASES LEARNING SYSTEM BASED ON BASIC SENTENCE PATTERN UNIT DECOMPOSITION
2y 5m to grant • Granted Nov 18, 2025
Patent 12430329
TRANSFORMING NATURAL LANGUAGE TO STRUCTURED QUERY LANGUAGE BASED ON MULTI-TASK LEARNING AND JOINT TRAINING
2y 5m to grant • Granted Sep 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 46%
With Interview: 99% (+59.0%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
