Prosecution Insights
Last updated: April 19, 2026
Application No. 18/241,155

Script-Based Animations for Live Video

Status: Non-Final OA (§103)
Filed: Aug 31, 2023
Examiner: MCCOY, AIDAN WILLIAM
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Adobe Inc.
OA Round: 3 (Non-Final)

Grant Probability: 50% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (1 granted / 2 resolved; -12.0% vs TC avg)
Interview Lift: +100.0% (allow rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 2y 9m (typical timeline; 25 currently pending)
Total Applications: 27 (career history, across all art units)

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 52.9% (+12.9% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 22.4% (-17.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 2 resolved cases.
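These headline figures are simple arithmetic over the examiner's two resolved cases. A minimal sketch of the presumed calculations follows; the 62% Tech Center average and the 99% display cap are back-solved from the displayed values, and the function names are illustrative, not the tool's published methodology.

```python
# Back-of-envelope reproduction of the displayed stats. The TC average and
# the 99% cap are inferred from the page, not from any published methodology.

def allow_rate(granted: int, resolved: int) -> float:
    return granted / resolved

career = allow_rate(granted=1, resolved=2)        # 0.50 -> "50% Career Allow Rate"
tc_avg = 0.62                                     # implied by "-12.0% vs TC avg"
print(f"{career - tc_avg:+.1%} vs TC avg")        # -12.0%

interview_lift = 1.00                             # "+100.0% Interview Lift"
with_interview = min(career * (1 + interview_lift), 0.99)  # assumed 99% cap
print(f"{with_interview:.0%} grant probability with interview")  # 99%
```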

Office Action — §103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed February 2, 2026 has been entered. Applicant's amendments to the claims have overcome the claim objections previously set forth in the final office action mailed November 5, 2025, and said objections are accordingly withdrawn. Applicant's amendments to claim 1 have overcome the previously set forth 35 USC § 103 rejection; however, a new rejection has been entered under 35 USC § 103 as necessitated by amendment.

Claim Objections

Claim 16 is objected to because of the following informalities: in line 9, "a second gestured" appears to be a typographical error of "a second gesture". Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, and 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over Amer (US 2019/0303404 A1) in view of Deng (US 2015/0365627 A1); Edward Yu-Te Shen & Bing-Yu Chen (2005), "Toward gesture-based behavior authoring," Computer Graphics International 2005, 59–65, https://doi.org/10.1109/cgi.2005.1500374 (hereinafter "Shen"); and Feng Wang, Chong-Wah Ngo, & Ting-Chuen Pong (2007), "Lecture video enhancement and editing by integrating posture, gesture, and text," IEEE Transactions on Multimedia, 9(2), 397–409, https://doi.org/10.1109/tmm.2006.886292 (hereinafter "Wang").

Regarding claim 1, Amer teaches a method comprising: obtaining a script and an animation associated with a text segment of the script and a stored gesture parameter (Figure 1C #112, #113, & #124; paragraphs [0014], [0072], & [0074]) generated during script authoring (paragraphs [0014]-[0016] – search animation) based at least in part on a selection of the text segment (paragraphs [0012], [0151], [0152]) and the animation in a user interface corresponding to the script authoring (Fig. 1A #108, Figs. 4, 7B, 7C – selection of events and videos corresponds to selection of animation) and a captured performance of a gesture ([0031]); determining a portion of an audio stream corresponds to the text segment (paragraph [0138]); determining the gesture performed by a user matches a stored gesture parameter (paragraph [0137]) based at least in part on a similarity score determined by at least comparing a vector generated based on the performance of the gesture (paragraphs [0051], [0070], [0152]) to the stored gesture parameter (paragraph [0014]); and responsive to determining the vector matches the stored gesture parameter (paragraphs [0051], [0152] – described in detail) and the portion of the audio stream of the live video stream corresponds to the text segment, causing the animation to be displayed (paragraphs [0105]-[0107]). Amer describes a system of animation generation.
While Amer does not directly state that a vector generated from a gesture is matched with a stored gesture parameter, it does describe generating the vector from various captured information, which includes gesture information. This vector information is then used in determining, through search, an animation to display. This process can be considered analogous to determining a vector matches a stored gesture parameter.

Amer fails to teach determining a portion of an audio stream of a live video stream corresponds to the text segment; determining a live performance of the gesture performed by a user in the live video stream matches the stored gesture parameter; responsive to determining the vector matches the gesture parameter, causing the animation to be displayed in the live video stream; and a captured performance of a gesture during script authoring.

However, Deng teaches a live video stream; determining a live performance of a gesture performed by a user in the live video stream matches the stored gesture parameter (Figure 2, paragraph [0013]); and responsive to determining the gesture matches the gesture parameter, causing the animation to be displayed in the live video stream (paragraphs [0019] & [0036]). Deng is considered analogous to the claimed invention as it is in the same field of video content generation, specifically animated video content. Therefore, it would have been obvious to one of ordinary skill in the art to combine the audio, text, and gesture-based animation generation of Amer with the gesture-based live video stream animation generation of Deng to implement the animation system in a live video context.

Amer in view of Deng fails to teach a captured performance of a gesture during script authoring. However, Shen teaches a captured performance of a gesture during script authoring (Shen – intro paragraph 2, section 3.3). Shen describes a script authoring system which allows users to capture performances of mouse gestures during script authoring which align with a gesture of an animated character. Shen is considered analogous to the claimed invention as it is in the same field of animation and script authoring. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Shen with Amer in view of Deng in order to allow non-programmers to engage in creating animations.
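To make the disputed claim-1 limitation concrete, here is a minimal Python sketch of the recited trigger logic: a live gesture vector is compared against a stored gesture parameter via a similarity score, and the animation fires only when the audio stream has also reached the associated text segment. The vector encoding, the cosine metric, the threshold, and all names are illustrative assumptions, not Amer's or the applicant's actual implementation.

```python
# Illustrative sketch only: neither Amer's nor the application's actual code.
# The vector encoding, similarity metric, and threshold are all assumptions.
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # assumed cutoff for "matches"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between a live gesture vector and a stored parameter."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_display_animation(live_gesture_vec: np.ndarray,
                             stored_gesture_param: np.ndarray,
                             transcript_window: str,
                             text_segment: str) -> bool:
    """Claim-1-style gate: gesture match AND audio corresponds to the segment."""
    gesture_matches = (cosine_similarity(live_gesture_vec, stored_gesture_param)
                       >= SIMILARITY_THRESHOLD)
    audio_matches = text_segment.lower() in transcript_window.lower()
    return gesture_matches and audio_matches
```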
Regarding claim 2, the combination of Amer, Deng, and Shen teaches the method of claim 1. Amer further teaches that the method further comprises: obtaining a selection of words in the script from the user (paragraph [0012]); and generating the text segment based on the selection of words and at least one word in the script preceding the selection of words (paragraphs [0010], [0062], [0063], [0101], [0102]). Amer describes generating animations for a text input phrase, which is analogous to a selection of words. Amer suggests the use of a human collaborator to modify the scene description, which would include modifying or selecting the text input phrase, analogous to obtaining a selection of words in the script from the user. Amer further describes the use of a language parser to process text input, perform event extraction, and glean context from descriptions, analogous to generation based on at least one word in the script preceding the selection of words.

Regarding claim 4, the combination of Amer, Deng, and Shen teaches the method of claim 1. Amer further teaches wherein causing the animation to be displayed in the live video stream further comprises initiating an adaptation interval during which the animation is displayed (paragraphs [0011], [0187]) based on at least one of: a second portion of the audio stream corresponding to a word in the text segment, or a cadence of the user determined based on the second portion of the audio stream (paragraphs [0019], [0062], [0138], [0187]). Amer describes the use of "event frames" which specify a certain input event to be animated. Amer also describes the use of loudness, pitch, speaking rate, and prosody of speech in an audio stream to generate event frames related to different text portions parsed from the audio stream. This is analogous to basing the adaptation interval on both words and speech cadence of a second portion of the audio stream.

Regarding claim 5, the combination of Amer, Deng, and Shen teaches the method of claim 1. Amer further teaches that causing the animation to be displayed in the live video stream further comprises initiating an adaptation interval during which the animation is displayed based on a similarity score indicating an amount that a second gesture performed by the user in the live video stream matches a second gesture parameter (paragraphs [0014], [0037]). Amer describes the use of event frames for the animations, which are analogous to the claimed invention's adaptation interval. Amer also describes the assignment of similarity scores for different animations to be displayed. The animations being compared are generated from gesture recognition in various embodiments; therefore, the similarity scores are based on gesture similarity.

Regarding claim 6, the combination of Amer, Deng, and Shen teaches the method of claim 5. Amer further teaches wherein the animation includes a plurality of graphical states, where a graphical state of the plurality of graphical states defines a set of graphic parameters for an object in the animation (paragraphs [0055] & [0056]). Amer describes various methods of evaluating every frame or time resolution (graphical states) of an animation; these evaluations result in scores or parameters for validating the graphical states. Amer also describes object detection, meaning that the scores or parameters described previously may be representative of an object in the animation.

Regarding claim 7, the combination of Amer, Deng, and Shen teaches the method of claim 6. Amer further teaches that the method further comprises determining to advance the animation to a second graphical state of the plurality of graphical states based on a weight value applied to the similarity score (paragraphs [0014]-[0016], [0056]). Amer describes producing scores for every frame or time resolution using discriminators, which produce said scores using weights. The scores generated are also described as informing which animation to display.
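Claims 6 and 7 describe an animation as a sequence of graphical states advanced by a weighted similarity score. A hedged sketch of that structure follows; the state layout, weight, and threshold values are assumed purely for illustration.

```python
# Illustrative sketch of the claim-6/7 structure: an animation as a sequence of
# graphical states, advanced when a weighted similarity score clears a threshold.
from dataclasses import dataclass

@dataclass
class GraphicalState:
    name: str
    graphic_params: dict  # e.g. {"x": 0, "y": 0, "scale": 1.0} for an object

@dataclass
class Animation:
    states: list[GraphicalState]
    current: int = 0

    def maybe_advance(self, similarity_score: float,
                      weight: float = 1.0, threshold: float = 0.8) -> GraphicalState:
        """Advance to the next graphical state if weight * score clears the threshold."""
        if weight * similarity_score >= threshold and self.current < len(self.states) - 1:
            self.current += 1
        return self.states[self.current]
```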
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Amer in view of Deng and Shen, and in further view of Lee (US 2021/0295578 A1).

Regarding claim 3, the combination of Amer, Deng, and Shen teaches the method of claim 2. Deng further teaches wherein determining the gesture performed by the user in the live video stream matches the gesture parameter further comprises causing generation of gesture parameters corresponding to the gesture performed by the user in the live video stream (paragraphs [0036], [0037]). Deng describes association of some example gestures performed by a user with their corresponding gesture parameters.

Deng fails to teach doing so in response to detecting the at least one word in the portion of the audio stream causing a gesture model to generate. However, Lee teaches, in response to detecting the at least one word in the portion of the audio stream, causing a gesture model to generate (paragraph [0013]). Lee describes the generation of avatar movement (gestures) based upon text detected from inputted audio. Lee uses a mapping table and a decision-making process to generate a relationship between text and avatar gestures; this is analogous to an audio stream causing a gesture model to generate. Lee is considered analogous to the claimed invention as it is in the same field of audio-based animation generation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the audio-based animation generation teachings of Lee with the user gesture-based animation teachings of Amer, Deng, and Shen to create a system that can utilize both the audio and gestures of a user to generate animation content.
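Lee's mapping-table mechanism, as characterized above, amounts to a word-to-gesture lookup consulted as transcript text arrives. A sketch under that reading follows; the table contents, the StubGestureModel, and the generate_gesture_parameters hook are hypothetical.

```python
# Illustrative sketch of the Lee-style mechanism described above: a mapping
# table from detected words to gestures, consulted as transcript text arrives.

GESTURE_MAPPING_TABLE = {          # hypothetical word -> gesture-label table
    "hello": "wave",
    "look": "point",
    "big": "arms_spread",
}

def on_transcribed_word(word: str, gesture_model) -> None:
    """When a mapped word is detected in the audio transcript, ask the
    gesture model to generate parameters for the associated gesture."""
    gesture_label = GESTURE_MAPPING_TABLE.get(word.lower())
    if gesture_label is not None:
        gesture_model.generate_gesture_parameters(gesture_label)

class StubGestureModel:
    def generate_gesture_parameters(self, label: str) -> None:
        print(f"generating parameters for gesture: {label}")

on_transcribed_word("Hello", StubGestureModel())  # -> generating parameters for gesture: wave
```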
Claims 8, 9, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Amer in view of Liao, J., Karim, A., Jadon, S. S., Kazi, R. H., & Suzuki, R. (2022), "RealityTalk: Real-time speech-driven augmented presentation for AR live storytelling," Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 1–12, https://doi.org/10.1145/3526113.3545702 (hereinafter "Liao"), and Shen.

Regarding claim 8, Amer teaches a non-transitory computer-readable medium storing executable instructions (paragraph [0016]) embodied thereon, which, when executed by a processing device, cause the processing device to perform operations comprising: obtaining a script and an animation to be applied to a video stream in response to a text segment included in the script and a first gesture to be performed in the video stream (Figure 1C #124, #112, #113; paragraphs [0063], [0072], [0074]), the animation including a plurality of states of an object; and causing the animation to be displayed in the video stream (paragraphs [0048], [0056]) in response to: detecting a first portion of the text segment based on text converted from an audio stream corresponding to the video stream (paragraphs [0012], [0138]); and detecting, by a gesture model, the first gesture in the video stream (paragraph [0137]) by at least determining a similarity score (paragraphs [0160], [0192]) between a first vector generated based on the first gesture in the video stream and a second vector (paragraphs [0051], [0152]). Amer describes the generation of video content, such as an animation, with text, audio, or non-text related input such as gestures. More specifically, Amer describes a non-transitory computer-readable medium storing executable instructions; a method for obtaining audio, text, and gesture information; both input and output of video (analogous to a video stream); animation which includes a plurality of frames (each frame can be considered a graphical state); speech recognition of audio input; and gesture recognition. Amer is analogous to the claimed invention as it is in the same field of multi-input animation generation. Therefore, it would have been obvious to one of ordinary skill in the art to combine the various teachings of input methods, animation generation, and video editing ([0132]) together to apply the methods to an existing video stream rather than the generated stream of Amer.

Amer fails to teach a first gesture to be performed live during the video stream, wherein the text segment is selected by a user during script authoring prior to the video stream and is associated with a second gesture performed during script authoring, and a similarity score between a first vector generated based on the first gesture performed live during the video stream and a second vector generated based on the second gesture performed during script authoring. However, Liao teaches a first gesture to be performed live during the video stream (section 5.3, Figs. 6, 8, & 9), wherein the text segment is selected by a user during script authoring prior to the video stream (sections 5.1, 5.2) and is associated with a second gesture (section 5.3), and a similarity score between a first vector generated based on the first gesture performed live during the video stream and a second vector generated based on the second gesture performed during script authoring (section 6.4). Liao describes a live storytelling system that inserts visuals associated with textual elements into a video, which can be modified and manipulated using gestures. Liao includes a script authoring phase that allows users to associate visual elements with text and speech. Liao describes recognizing gestures based on calculations using position and finger and pinching states; because these states are indicative of direction (i.e., upward or inward for finger state), the information can be considered analogous to a vector. Liao is considered analogous to the claimed invention as it is in the same field of image processing and live graphics processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Liao with Amer to improve the system of augmenting live video or presentations.

Liao fails to teach a second gesture performed during script authoring and a second vector generated based on the second gesture performed during script authoring. However, Shen teaches a second gesture performed during script authoring (section 3). Shen describes capturing gestures during script authoring. While Shen does not generate a vector based on this gesture and calculate a similarity score, it would have been obvious to one of ordinary skill in the art to integrate the capturing of a gesture during script authoring from Shen with the animation generation system of Amer in view of Liao, which generates representative data vectors, in order to allow non-programmers to engage in creating animations.

Regarding claim 9, Amer in view of Liao and Shen teaches the medium of claim 8. Amer further teaches wherein the first portion of the text segment includes a plurality of words preceding a selection of words in the script provided by a user through a script authoring interface (paragraphs [0010], [0012], [0062], [0063], [0101], [0102], [0105]). Amer describes generating animations for a text input phrase, which is analogous to a selection of words. Amer suggests the use of a human collaborator to modify the scene description, which would include modifying or selecting the text input phrase, analogous to obtaining a selection of words in the script from the user. Amer further describes the use of a language parser to process text input, perform event extraction, and glean context from descriptions, analogous to generation based on a plurality of words in the script preceding the selection of words.
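Claims 2 and 9 both turn on building the text segment from a user's selected words plus words preceding the selection in the script. A minimal sketch of that idea follows; the three-word context window is an arbitrary assumption.

```python
# Illustrative sketch of the claim-2/claim-9 idea: forming the text segment
# from the user's selected words plus preceding context words in the script.

def build_text_segment(script_words: list[str],
                       selection_start: int, selection_end: int,
                       preceding_context: int = 3) -> list[str]:
    """Return the selected words prefixed by up to `preceding_context`
    words that precede the selection in the script."""
    start = max(0, selection_start - preceding_context)
    return script_words[start:selection_end]

script = "welcome everyone today we launch the new editor".split()
segment = build_text_segment(script, selection_start=5, selection_end=8)
# -> ['today', 'we', 'launch', 'the', 'new', 'editor'] (selection plus 3 preceding words)
```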
Regarding claim 15, Amer in view of Liao and Shen teaches the medium of claim 8. Amer further teaches wherein the second vector is generated based on a first image of a user performing the gesture during script authoring (paragraph [0070]) and the first vector is generated based on a second image of the user during the video stream (paragraphs [0051], [0056]). Amer's description of its input acquisition, which includes both textual and gesture information, can be considered analogous to the script authoring phase of the claimed invention. Amer describes encoding animation frames (first and second images) as vectors. Amer describes evaluating similarity of animations on a frame-by-frame basis. It would have been obvious to one of ordinary skill in the art to substitute the evaluation of the animations with the evaluation of the vectors into which they are encoded.

Claims 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Amer in view of Liao and Shen, and in further view of Lee.

Regarding claim 10, the combination of Amer in view of Liao and Shen teaches the medium of claim 9. Amer in view of Liao and Shen fails to teach wherein the script authoring interface allows the user to associate the plurality of states of the object included in the animation with a plurality of gestures to be performed in the video stream. However, Lee teaches wherein the script authoring interface allows the user to associate the plurality of states of the object included in the animation with a plurality of gestures to be performed in the video stream (paragraph [0059]). Lee describes a user terminal recommending a plurality of avatar motions (gestures) to be displayed which are associated with the plurality of states necessary to display the animated avatar (object). The motivation to combine Lee with Amer in view of Liao and Shen would have been the same as for claim 3.

Regarding claim 11, Amer in view of Liao, Shen, and Lee teaches the medium of claim 10. Amer further teaches wherein causing the animation to be displayed in the video stream further comprises advancing the animation to a first state of the object of the plurality of states of the object based on detecting the first gesture in the video stream (paragraphs [0070], [0117], [0119]). Amer describes the generation of a composition graph based on a variety of inputs, including gestures detected via the behavior analytics system. Amer further describes animating based on the composition graph and the detected gestures. These animations necessitate a plurality of states, which is reinforced by the mention of drawn paths or trajectories for objects or actors in order to generate the animations.

Regarding claim 12, Amer in view of Liao, Shen, and Lee teaches the medium of claim 11. Lee further teaches wherein causing the animation to be displayed in the video stream further comprises advancing the animation to a second state of the object of the plurality of states of the object based on a first amount of time elapsed from displaying the animation and a second amount of time corresponding to the user speaking the text segment (paragraphs [0013], [0101], [0105]). Lee describes determining the time of a text string (text segment) in order to determine when to display an animation. Lee goes on to describe the changing of animation states based on whether or not the two animations are overlapping; in other words, based on the amount of time elapsed from the time point of the text string.
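On the reading of claim 12 above, the animation advances once its display time reaches the time the speaker took to utter the associated text segment. An illustrative sketch of that timing rule; the state names and durations are assumed values, and the loop stands in for whatever rendering the real system performs.

```python
# Illustrative sketch of the claim-12 timing rule: hold each graphical state
# for the speaking duration of its text segment, then advance to the next.
import time

def run_animation_states(states: list[str], speaking_duration_s: float) -> None:
    """Hold each state for the speaking duration of its text segment,
    then advance to the next state."""
    for state in states:
        display_started = time.monotonic()
        print(f"displaying state: {state}")
        while time.monotonic() - display_started < speaking_duration_s:
            time.sleep(0.05)  # stand-in for per-frame rendering work

run_animation_states(["enter", "emphasize", "exit"], speaking_duration_s=1.5)
```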
Regarding claim 13, Amer in view of Liao, Shen, and Lee teaches the medium of claim 11. Lee further teaches wherein causing the animation to be displayed in the video stream further comprises advancing the animation to a second state of the object of the plurality of states of the object based on detecting a third gesture of the plurality of gestures in the video stream (Amer, paragraphs [0070], [0117], [0119]; Lee, paragraphs [0013], [0101], [0105]). Lee describes advancing the animation to a second animation (state) based on the determined gesture of the detected text segment at that time point. Amer describes the detection of one or more gestures. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to substitute Amer's gesture recognition with the text-to-gesture system of Lee in order to allow for a greater number of input types.

Regarding claim 14, Amer in view of Liao and Shen teaches the medium of claim 8. Amer in view of Liao and Shen fails to teach wherein the plurality of states of the object are associated with a plurality of gestures. However, Lee teaches wherein the plurality of states of the object are associated with a plurality of gestures (paragraphs [0013], [0101], [0105]). Lee describes a plurality of states associated with detected text strings which are associated with a plurality of avatar gestures.

Claims 16-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Amer in view of Liao and Shen, and in further view of Lee.

Regarding claim 16, Amer teaches a system comprising: a memory component; and a processing device coupled to the memory component, the processing device to perform operations comprising: obtaining a text segment included in a script and a set of gesture parameters associated with a gesture to be performed by a user during a video stream, the text segment and the gesture associated with an animation (Figure 1C #124, #112, #113; paragraphs [0063], [0072], [0074]); obtaining the video stream and an audio stream corresponding to the video stream (paragraphs [0012], [0138]); detecting the gesture performed by the user in the video stream (paragraph ) and determining the user is speaking a portion of the text segment included in the script (paragraphs [0012], [0035], [0138]) and determining a similarity between a second set of gesture parameters generated based on the gesture performed by the user in the video stream and the set of gesture parameters (paragraphs [0051], [0152], [0157]); and as a result of detecting the gesture, applying the animation to the video stream (paragraph [0106]).

Amer fails to teach a gesture to be performed live by a user, and detecting the gesture performed by the user in the video stream based on determining the user is speaking a portion of the text segment included in the script. However, Lee teaches detecting the gesture in the video stream based on determining the user is speaking a portion of the text segment included in the script (paragraph [0013]). Lee is considered analogous to the claimed invention as it is in the same field of applying video effects. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to implement the association of speech and gesture movement of Lee with the system of Amer to allow for association of multiple different input types.

Amer in view of Lee fails to teach a gesture to be performed live by a user, where the set of gesture parameters is generated based on a second gesture included in a video captured during script authoring. However, Liao teaches a gesture to be performed live by a user (section 5.3, Figs. 6, 8, & 9), where the set of gesture parameters is generated based on a second gesture included in a video captured during script authoring (sections 5 & 6.4). Liao describes obtaining visual elements associated with text segments during script authoring. These visual elements can be considered gesture parameters as they are the elements to be modified by the performed gestures. The motivation to combine Liao with Amer in view of Lee would have been the same as that of claim 1.

Regarding claim 17, Amer in view of Lee and Liao teaches the system of claim 16. Lee further teaches wherein the operations further comprise determining a location in the script based on a portion of the audio stream and a script index indicating locations within the script and corresponding words in the script (paragraphs [0101] & [0105]). Lee describes time points of specific text strings in the input audio; this is analogous to a script index indicating locations within the script corresponding to words in the script.

Regarding claim 18, Amer in view of Lee and Liao teaches the system of claim 17. Lee further teaches wherein determining the user is speaking the portion of the text segment included in the script further comprises determining the location in the script corresponds to the portion of the text segment (paragraph [0105]). Lee describes utilizing time points associated with specific phrases to indicate avatar gestures.
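Claim 17's "script index indicating locations within the script and corresponding words" suggests a word-to-position lookup used to align the live audio transcript with the script. A sketch under that reading; the data layout and names are assumptions, not Lee's or the applicant's structures.

```python
# Illustrative sketch of a "script index" in the claim-17 sense: a lookup from
# words to their locations in the script, used to resolve where in the script
# the current audio transcript falls.
from collections import defaultdict

def build_script_index(script_words: list[str]) -> dict[str, list[int]]:
    """Map each word to every location (word offset) where it appears."""
    index: dict[str, list[int]] = defaultdict(list)
    for position, word in enumerate(script_words):
        index[word.lower()].append(position)
    return index

def locate_in_script(index: dict[str, list[int]], transcribed_word: str) -> list[int]:
    """Candidate script locations for a word heard in the audio stream."""
    return index.get(transcribed_word.lower(), [])

script = "wave hello then point at the chart then wave goodbye".split()
idx = build_script_index(script)
print(locate_in_script(idx, "wave"))   # -> [0, 8]
```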
Regarding claim 20, Amer in view of Lee and Liao teaches the system of claim 16. Amer further teaches where the gesture parameters include a first vector generated based on a portion of the user captured during the script authoring, and the second set of gesture parameters includes a second vector generated based on the portion of the user captured during presentation of the script (paragraphs [0051], [0056], [0070]).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Amer in view of Lee and Liao, and in further view of Novikoff (US 2015/0058733 A1).

Regarding claim 19, Amer in view of Lee and Liao teaches the system of claim 16, but fails to teach where detecting the gesture performed by the user in the video stream further comprises determining an intentionality of an action performed by the user based on a hand of the user being static. However, Novikoff teaches where detecting the gesture performed by the user in the video stream further comprises determining an intentionality of an action performed by the user based on a hand of the user being static (paragraphs [0036], [0039]). Novikoff describes gestures including those that are performed by one or both of the user's hands. Novikoff further describes the determination of intent, including basing the determination on the speed and time of the gesture, which is analogous to determining intent based on a user's hand being static (a speed of 0). Novikoff is considered analogous to the claimed invention as it is in the same field of image processing and manipulation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to implement the intent detection of Novikoff with Amer in view of Lee and Liao.
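The claim-19 "static hand" test reads as a speed-and-duration check on hand-tracking samples. A minimal sketch under that reading; the motion threshold and hold time are assumed values, not Novikoff's.

```python
# Illustrative sketch of the claim-19 idea: treat an action as intentional
# only if the hand has stayed (nearly) static for some minimum duration.
import math

STATIC_SPEED_THRESHOLD = 5.0   # assumed max hand speed, pixels/second
REQUIRED_HOLD_SECONDS = 0.5    # assumed minimum time the hand must stay static

def is_intentional(hand_positions: list[tuple[float, float, float]]) -> bool:
    """hand_positions: (timestamp_s, x, y) samples from hand tracking.
    Returns True if hand speed stayed below the threshold for the required
    hold duration at the end of the window."""
    held = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(hand_positions, hand_positions[1:]):
        dt = t1 - t0
        speed = math.hypot(x1 - x0, y1 - y0) / dt if dt > 0 else 0.0
        held = held + dt if speed < STATIC_SPEED_THRESHOLD else 0.0
    return held >= REQUIRED_HOLD_SECONDS

samples = [(0.00, 100.0, 100.0), (0.25, 100.5, 100.0),
           (0.50, 100.5, 100.5), (0.75, 100.0, 100.0)]
print(is_intentional(samples))  # True: speed stayed under 5 px/s for 0.75 s
```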
Response to Arguments

Applicant's arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Aneja (US 2024/0244287 A1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aidan W. McCoy, whose telephone number is (571) 272-5935. The examiner can normally be reached 8:00 AM-5:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AIDAN W MCCOY/
Examiner, Art Unit 2611

/TAMMY PAIGE GODDARD/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Aug 31, 2023
Application Filed
Jun 05, 2025
Non-Final Rejection — §103
Jun 10, 2025
Interview Requested
Jul 02, 2025
Examiner Interview Summary
Jul 02, 2025
Applicant Interview (Telephonic)
Sep 10, 2025
Response Filed
Nov 03, 2025
Final Rejection — §103
Nov 18, 2025
Interview Requested
Dec 01, 2025
Examiner Interview Summary
Dec 01, 2025
Applicant Interview (Telephonic)
Feb 06, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Feb 28, 2026
Non-Final Rejection — §103 (current)

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50% (99% with interview, +100.0% lift)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
