Prosecution Insights
Last updated: April 19, 2026
Application No. 18/793,089

SMART DIGITAL INTERACTIONS WITH AUGMENTED REALITY AND GENERATIVE ARTIFICIAL INTELLIGENCE

Non-Final OA — §103, §112
Filed: Aug 02, 2024
Examiner: SUO, JOSHUA JUNGWOOK
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Wells Fargo Bank N.A.
OA Round: 1 (Non-Final)

Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 0m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 100% — above average (2 granted / 2 resolved; +38.0% vs TC avg)
Interview Lift: -100.0% (based on resolved cases with interview; minimal data)
Avg Prosecution: 2y 0m — fast prosecutor (10 currently pending)
Total Applications: 12 across all art units (career history)

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 57.6% (+17.6% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 2 resolved cases
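The "vs TC avg" figures above are simple differences against an implied Tech Center baseline (40.0% on every statute, back-computed from the displayed numbers). A minimal sketch of that arithmetic; the uniform 40.0% baseline is an inference from this page, not published USPTO data:

```python
# Reproduces the "vs TC avg" deltas shown above: delta = examiner rate - TC average.
# The 40.0% Tech Center baseline is back-computed from the displayed figures (assumption).
examiner_rate = {"101": 3.0, "103": 57.6, "102": 21.2, "112": 18.2}  # percent
tc_average = 40.0  # implied baseline for TC 2600

delta = {statute: round(rate - tc_average, 1) for statute, rate in examiner_rate.items()}
print(delta)  # {'101': -37.0, '103': 17.6, '102': -18.8, '112': -21.8}
```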

Office Action

Grounds: §103, §112
DETAILED ACTION

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 17 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 17 recites the limitation "the prompt" on page 4. There is insufficient antecedent basis for this limitation in the claim. For the purposes of this examination, the examiner will interpret this as the “user input”, as the generative AI will transmit a request to generate a response based on the “user input” that comprises at least one question.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-8, 10, 13-17, and 19 are rejected under 35 U.S.C.
103 as being unpatentable over Lanier (US 10325407 B2) in view of Qin (US 20240346256 A1). As per claim 1, Lanier teaches the claimed: 1. A method, comprising: authenticating, by a premises computing system at a premises location, a user according to one or more authentication settings provided to the premises computing system, (Lanier (col 7, line 34-40): “In various examples, the service provider 102 can access, receive, and/or determine authentication data from a device (e.g., device 108A), access content data associated with virtual content items, send rendering data associated with individual virtual content items to the device (e.g., device 108A), and cause the individual virtual content items to be presented on a display associated with the device (e.g., device 108A).”) the user associated with user data records stored by a primary computing system in communication with the premises computing system, (Lanier (col 10, line 23-28): “The authentication data can correspond to a user identification and password associated with a user (e.g., user 106A) associated with the device (e.g., device 108A), biometric identification associated with a user (e.g., user 106A) associated with the device (e.g., device 108A), etc.”) the authentication settings authorizing presentation and processing of a subset of the user data records; (Lanier (col 10, line 36-40): “In additional and/or alternative examples, the authentication data can be utilized to determine virtual content items that are available to the user (e.g., user 106A) and the user's (e.g., user 106A) permissions corresponding to viewing and/or interacting with each of the virtual content items.”) monitoring, by the premises computing system, user input from the user via an input device of the premises computing system; (Lanier (col 5, line 14-17): “Gestures and user input may be performed by any of a number of techniques. 
Selection of different menu options as well as objects may be performed by airtapping the options or objects once, for example.” Lanier (col 8, line 55-65): “Device(s) that can be included in the one or more server(s) 110 can further include one or more input/output (I/O) interface(s) coupled to the bus to allow device(s) to communicate with other devices such as input peripheral devices (e.g., a keyboard, a mouse, a pen, a game controller, a voice input device, a touch input device, gestural input device, a tracking device, a mapping device, an image camera, a time-of-flight (TOF) camera, a depth sensor, a physiological sensor, and the like) and/or output peripheral devices (e.g., a display, a printer, audio speakers, a haptic output, and the like).” Lanier teaches the input devices such as touch input device, gestural input device, tracking device, etc. All of these input devices are monitored to know the actions the user will perform.) Lanier alone does not explicitly teach the remaining claim limitations. 
However, Lanier in combination with Qin teaches the claimed: transmitting, by the premises computing system to a generative artificial intelligence system, a request to generate a response based on the user input, (Qin [0101]: “In an embodiment, a system for augmenting a large language model includes: a processor; and a memory device that stores program code structured to cause the processor to: receive a query; generate a first feature vector based on the query; compare the first feature vector to a plurality of second feature vectors to determine a subset of the second feature vectors that satisfy a predetermined condition; retrieve the pieces of augmentation information corresponding to the determined subset of second feature vectors; provide, to the large language model, an augmented prompt generated based at least on the query and the retrieved pieces of augmentation information; and receive a response generated by the large language model.” Qin teaches an LLM, a type of generative AI, that receives a prompt and generates a response by the LLM, which indicates this is using a generative AI to produce an output.) the request regarding the subset of the user data records; and (Qin [0019]: “In embodiments, the augmented prompt may include the original query, contextual information for answering the query, the retrieved augmentation information, and/or a request to answer the original query based on the contextual information and/or the retrieved augmentation information.” Qin [0039]: “In embodiments, contextual information 215 may describe the context of the user (e.g., user identifier, user role, user profile, user location, browsing history, etc.)”. Qin teaches the contextual information that is provided along with the query in order to generate a response, and this contextual information contains information of the user, as stated in paragraph 39. Therefore, the request is regarding the user’s data.) 
presenting, by the premises computing system via an augmented reality device, at least one visualization generated based on the response received from the generative artificial intelligence system. (Qin [0071] describes a list of computing devices that are compatible with augmented reality, which include a “wearable computing device (e.g., a head-mounted augmented reality and/or virtual reality device including smart glasses such as Google® Glass™, Oculus Quest 2® by Reality Labs, a division of Meta Platforms, Inc, etc.)” and would be able to present the visualization generated based on the response of the generative AI.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the generative AI system as taught by Qin with the system of Lanier in order to enhance mixed reality experiences by creating dynamic and personalized environments and also to automate detailed and realistic 3D models in this mixed reality.

As per claims 10 and 19, these claims are similar in scope to limitations recited in claim 1, and thus are rejected under the same rationale.

As per claim 2, Lanier teaches the claimed: 2. The method of claim 1, further comprising: authenticating, by the premises computing system, the user further based on a wireless communication from a mobile device of the user. (Lanier (col 14, line 65 - col 15, line 5): “Examples of device(s) 108 can include but are not limited to mobile computers, embedded computers, or combinations thereof. Example mobile computers can include laptop computers, tablet computers, wearable computers, implanted computing devices, telecommunication devices, automotive computers, portable gaming devices, media players, cameras, or the like.” Lanier (col 8, line 7-14): “In some examples, the networks 104 can be any type of network known in the art, such as the Internet.
Moreover, the devices 108 can communicatively couple to the networks 104 in any manner, such as by a global or local wired or wireless connection (e.g., local area network (LAN), intranet, Bluetooth, etc.). The networks 104 can facilitate communication between the server(s) 110 and the devices 108 associated with the one or more users 106.”) As per claim 4, Lanier teaches the claimed: 4. The method of claim 1, further comprising: automatically transmitting, by the premises computing system, the request to generate the response responsive to detecting the user input. (Lanier (col 22, line 60 – col 23, line 7): “In some implementations, HoloPaint may incorporate machine learning to predict what a user is drawing. … In the case illustrated in FIG. 4, as soon as the user draws enough of the sketch of the palm tree, HoloPaint may learn and predict that the user is sketching a palm tree. Consequently, HoloPaint may automatically complete the sketch without the user performing any other actions (e.g., the user need not finish the sketch). … In some cases, HoloPaint may temporarily display several alternative sketches of a palm tree so that the user may select a preferred one.” Lanier teaches machine learning that can generate a response, responsive to detecting a user input. In the passage above, the user draws a palm tree that is unfinished, and HoloPaint, which incorporates machine learning, can predict and finish the drawing or even create a new one which the user can select and use for their own. This process of predicting and generating must include some kind of automatic transmitting of a request to generate the finished image in response to the user input.) As per claim 13, this claim is similar in scope to limitations recited in claim 4, and thus is rejected under the same rationale. As per claim 5, Lanier teaches the claimed: 5. 
The method of claim 1, further comprising: receiving, by the premises computing system, the response from the generative artificial intelligence system, the response comprising at least one template; and generating, by the premises computing system, the at least one visualization based on the at least one template of the response. (Lanier (col 22, line 60 – col 23, line 7): “In some implementations, HoloPaint may incorporate machine learning to predict what a user is drawing. … In some cases, HoloPaint may temporarily display several alternative sketches of a palm tree so that the user may select a preferred one.” Lanier teaches the machine learning that can predict and generate fully completed drawings, which can be considered templates for the user to select, and as stated in the passage above, HoloPaint can display the completed alternate drawing or sketches in response to the user’s drawings.)

As per claim 14, this claim is similar in scope to limitations recited in claim 5, and thus is rejected under the same rationale.

As per claim 6, Lanier teaches the claimed: 6. The method of claim 1, further comprising: authenticating, by the premises computing system, a remote user via a remote computing system; and providing, by the premises computing system, the at least one visualization to the remote computing system responsive to authenticating the remote user. (Lanier (col 18, line 34-42): “In some examples, another user may be remotely located and can be virtually present in the mixed reality environment … That is, the device (e.g., device 108A) corresponding to the user (e.g., user 106A) may receive streaming data to render the remotely located user (e.g., user 106B) in the mixed reality environment presented by the device (e.g., device 108A).” Lanier (col 18, line 55-60): “As described below, the user may have authenticated his device (e.g., device 108A).
Based at least in part on authenticating the user's device, the rendering module 136 associated with the device may render the various virtual content items on the display 204 corresponding to the device.”) As per claim 15, this claim is similar in scope to limitations recited in claim 6, and thus is rejected under the same rationale. As per claim 7, Lanier teaches the claimed: 7. The method of claim 6, wherein the one or more authentication settings for the user authorize a second subset of the user data to be shared with the remote user, (Lanier (col 18, line 34-42): “In some examples, another user may be remotely located and can be virtually present in the mixed reality environment … That is, the device (e.g., device 108A) corresponding to the user (e.g., user 106A) may receive streaming data to render the remotely located user (e.g., user 106B) in the mixed reality environment presented by the device (e.g., device 108A). Lanier (col 11, line 11-15): “In at least one example, the user (e.g., user 106A) associated with a device (e.g., device 108A) that initially requests the virtual content item can be the owner of the virtual content item such that he or she can modify the permissions associated with the virtual content item.”) Lanier teaches that other users may be located remotely, and the owner of the virtual content item, which corresponds to the subset of the user data, can decide who to share it with, changing the permissions associated with that virtual content item.) 
and the method further comprises: determining, by the premises computing system, that the at least one visualization is to be shared with the remote computing system based on the one or more authentication settings; and (Lanier (col 10, line 36-40): “In additional and/or alternative examples, the authentication data can be utilized to determine virtual content items that are available to the user (e.g., user 106A) and the user's (e.g., user 106A) permissions corresponding to viewing and/or interacting with each of the virtual content items.” Lanier (col 11, line 43-51): “That is, the content management module 120 may access the content data to determine devices 108 with which a content item has been shared and/or interactions available for each of the devices 108. As described above, the content data may include permissions data which indicates whether a content item is public, private, or has been shared with one or more devices (e.g., device 108B, device 108C, etc.) and/or interactions available for each of the devices 108.” Similar to the claim limitation above, the owner of the virtual content item can decide who to give permissions to, which incorporate the authentication data to determine who it can be shared with.) providing, by the premises computing system, the at least one visualization to the remote computing system responsive to determining that the at least one visualization is to be shared with the remote computing system. 
(Lanier (col 10, line 64 – col 11, line 1): “Permissions data can include information indicating which users 106 and/or corresponding devices 108 have permission to view and/or interact with the virtual content in the mixed reality environment (i.e., which users 106 the owner has shared the virtual content with).” Similar to the claim limitations above, since the owner of the virtual content item can give permission for a remote user to access and view the content item, the providing of the content item to the remote user after determining it is to be shared must occur.) As per claim 16, this claim is similar in scope to limitations recited in claim 7, and thus is rejected under the same rationale. As per claim 8, Lanier teaches the claimed: 8. The method of claim 6, further comprising: providing, by the premises computing system, a video stream to the remote computing system for display with the at least one visualization. (Lanier (col 18, line 34-42): “In some examples, another user may be remotely located and can be virtually present in the mixed reality environment (e.g., as an avatar, a reconstructed 3D model that has been captured using various sensors and/or cameras (e.g., KINECT® or TOF camera)). That is, the device (e.g., device 108A) corresponding to the user (e.g., user 106A) may receive streaming data to render the remotely located user (e.g., user 106B) in the mixed reality environment presented by the device (e.g., device 108A).” Lanier (col 18, line 53-55): “In the example of FIG. 3, the user may be viewing various virtual content items in the user's mixed reality environment via a display 204 of his device (e.g., device 108A).”) As per claim 17, Lanier and Qin teach the claimed: 17. The system of claim 10, wherein the prompt comprises at least one question, and wherein the premises computing system is further configured to: generate the at least one visualization to include one or more answers to the at least one question. 
(Qin [0039]: “In embodiments, query 216 may include a question in the form of a text string or voice data.” Qin [0046 - 0047]: “Prompt generator 212 may generate a prompt for LLM 214 based on one or more of query 216, first feature vector 226, one or more of second feature vectors 228, indications 230, and/or augmentation information 232. For example, prompt generator 212 may generate an augmented prompt 236 that includes the original query, contextual information, … content information, … and a request to answer the original query based on the provided contextual information using the included content information.… In embodiments, LLM 214 receives augmented prompt 236 from prompt generator 212 and generates response 238.” Qin [0070 - 0071]: “Embodiments disclosed herein may be implemented in one or more computing devices … computing device 802 may be a mobile computing device such as … a wearable computing device (e.g., a head-mounted augmented reality and/or virtual reality device including smart glasses such as Google® Glass™, Oculus Quest 2® by Reality Labs, a division of Meta Platforms, Inc, etc.).” Qin teaches the query that can include a question, which is then answered by the prompt generator and generates a response. Additionally, since the embodiments described above can be implemented in wearable computing devices, such as a HMD or virtual reality device, it will therefore display a visualization of the answered query on these computing devices.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the question query as taught by Qin with the system of Lanier in order to guide the generative AI system into a clearer path of what the user actually wants to know or see, instead of commands that only trigger task execution. Claim 3 is rejected under 35 U.S.C. 
103 as being unpatentable over Lanier (US 10325407 B2) in view of Qin (US 20240346256 A1) and in further view of Creamer (US 20040267527 A1).

As per claim 3, Lanier teaches the claimed: 3. The method of claim 2, wherein the user input comprises audio data, and the method further comprises: monitoring, by the premises computing system, speech input from the user based on the audio data captured via the input device of the premises computing system, (Lanier (col 4, lines 49-51): “In particular examples, the main user interface in HoloPaint may be an arm-lock menu or toolkit, which a user can make appear or disappear by using voice commands.” Lanier (col 5, line 29-30): “In some examples, voice commands may be used as input.” Lanier teaches the voice commands that the system responds to, which indicates there is some kind of monitoring of speech input from the user.)

Lanier and Qin alone do not explicitly teach the remaining claim limitations. However, Lanier and Qin in combination with Creamer teach the claimed: the speech input processed using a text-to-speech model. (Creamer [0012]: “In a first aspect of the invention, a method of voice-to-text reduction for real-time messaging can include the steps of receiving a speech input at a calling party, transcribing the speech input to a text message, … The rendering step can include either displaying the text message or providing an audible output using a speaker and text-to-speech conversion or synthesis. The method can further include, as mentioned above, a translation step, where the text message is translated to another language”. Please note: In this instance, the Examiner is interpreting the claimed “text-to-speech” process similar to how it is used in Applicant’s disclosure towards the end of paragraph [0034], e.g. to translate speech into a different language.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the text-to-speech model as taught by Creamer with the system of Lanier as modified by Qin in order to be able to provide an audible output that is fast and scalable.

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lanier (US 10325407 B2) in view of Qin (US 20240346256 A1) and in further view of Siebel (US 20240202221 A1).

As per claim 9, Lanier and Qin alone do not explicitly teach the claimed limitations. However, Lanier and Qin in combination with Siebel teach the claimed: 9. The method of claim 1, further comprising: receiving, by the premises computing system, the response from the generative artificial intelligence system, the response including instructions to retrieve the subset of the user data records from the primary computing system; and (Siebel in FIG. 6 and [0113-0120] shows and describes the flowchart of an example generative AI method. The query is analyzed to determine which data set / models to use from a plurality of data domains, which can correspond to selecting a subset of a user from the different number of users. The generative AI processes user input and identifies relevant data sources, and the retrieval module fetches these records. The system then determines relevance scores for the data records to determine which to return as part of the output. These acts show that the generative AI’s response is based on instructions to fetch specific records from external data sources.
Siebel also teaches the presentation module in paragraph [0100], where it “can generate graphical user interface enterprise search query input and response interfaces.” Therefore, given that the generative AI can determine which specific data set or model (subset of user data from a plurality of different users) to retrieve, using the presentation module, it can generate a visualization based on the retrieved subset of user data.) generating, by the premises computing system, the at least one visualization by retrieving the subset of the user data records from the primary computing system according to the instructions. (Siebel [0100]: “The presentation module 430 can function to generate graphical user interface components (e.g., server-side graphical user interface components) that can be rendered as complete graphical user interfaces on other systems. For example, the presentation module 460 can function to present an interactive graphical user interface for display and receiving information. For example, the presentation module 430 can generate graphical user interface enterprise search query input and response interfaces (e.g., as shown in FIG. 2A, FIG. 2B, FIG. 7A, and FIG. 7B).”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the retrieval of user data as instructed by generative AI as taught by Siebel with the system of Lanier as modified by Qin in order to create more personalized and context aware responses instead of generic answers, so that responses can be tailored more toward what the user may actually need. As per claim 18, this claim is similar in scope to limitations recited in claim 9, and thus is rejected under the same rationale. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Lanier (US 10325407 B2) in view of Qin (US 20240346256 A1) and in further view of Kim (US 9774575 B2). 
As per claim 11, Lanier and Qin alone do not explicitly teach the claimed limitations. However, Lanier and Qin in combination with Kim teach the claimed: 11. The system of claim 10, wherein the premises computing system is further configured to: authenticate the user further based on a near-field communication (NFC) signal from a mobile device of the user. (Kim (col 1, line 36-44): “Embodiments of the present invention comprise a system for authenticating a user by near field communication. The system includes a security device performing user authentication by using a Universal Subscriber Identity Module (USIM) ID and a password both being transmitted through near field communication in response to an authentication request, and a mobile device transmitting the USIM ID and password through near field communication.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the near field communication as taught by Kim with the system of Lanier as modified by Qin in order to provide secure, fast, and proximity-based authentication, directly from mobile devices, that is resistant to remote attacks.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Lanier (US 10325407 B2) in view of Qin (US 20240346256 A1) and in further view of Kim (US 9774575 B2) and in even further view of Creamer (US 20040267527 A1).

As per claim 12, the reasons and rationale for the rejection of claim 3 are incorporated herein. 12. The system of claim 11, wherein the user input comprises audio data, and the premises computing system is further configured to: monitor speech input from the user based on the audio data captured via the input device of the premises computing system, the speech input processed using a text-to-speech model. (Claim 12 is rejected with the same reasons, rationale, and motivation given in claim 3.)

Claim 20 is rejected under 35 U.S.C.
103 as being unpatentable over Lanier (US 10325407 B2) in view of Qin (US 20240346256 A1) and in further view of Ladner (US 20240127251 A1). As per claim 20, Lanier and Qin alone do not explicitly teach the claimed limitations. However, Lanier and Qin in combination with Ladner teaches the claimed: 20. The non-transitory computer-readable medium of claim 19, wherein the operations further comprise: generating the at least one visualization to represent a predicted cash flow using the response. (Ladner [0060]: “FIG. 3 is a block diagram of an example system that may be used to view and interact with cash flow analysis system 308, according to an example implementation of the disclosed technology. … As shown, cash flow analysis system 308 may interact with a user device 302 via a network 306. In certain example implementations, the cash flow analysis system 308 may include a local network 312, a prediction system 220, a web server 310, and a database 316.” Ladner [0063]: “The prediction system 220 may include programs (scripts, functions, algorithms) to configure data for visualizations and provide visualizations of datasets and data models on the user device 302.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the visualization that represents a cash flow as taught by Ladner with the system of Lanier as modified by Qin in order to allow users to see complex financial data in an intuitive visualization to be able to make faster and better financial decisions. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA SUO whose telephone number is (571) 272-8387. The examiner can normally be reached Mon-Fri 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached on (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JOSHUA SUO/Examiner, Art Unit 2616 /DANIEL F HAJNIK/Supervisory Patent Examiner, Art Unit 2616
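The retrieval-augmented pipeline the rejection attributes to Qin [0101] — embed the query into a first feature vector, select stored second feature vectors that satisfy a predetermined condition, retrieve the corresponding augmentation text, and build an augmented prompt for the LLM — can be sketched roughly as follows. The cosine-similarity condition, the toy embedder, and every name here are illustrative assumptions, not Qin's or the application's actual implementation:

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def build_augmented_prompt(query, embed, store, threshold=0.5):
    # First feature vector, generated from the query.
    qvec = embed(query)
    # Compare against stored (second) feature vectors; keep those that satisfy
    # the predetermined condition (here assumed: cosine similarity >= threshold).
    context = [text for vec, text in store if cosine(qvec, vec) >= threshold]
    # Augmented prompt: retrieved augmentation information plus the original query.
    return "Context:\n" + "\n".join(context) + f"\n\nAnswer based on the context: {query}"

# Toy store of (feature vector, augmentation text) pairs and a stub embedder.
store = [([1.0, 0.0], "Account balance is checked nightly."),
         ([0.0, 1.0], "Branch hours are 9-5.")]
prompt = build_augmented_prompt("balance question", lambda q: [0.9, 0.1], store)
```

A real system would use learned embeddings and an approximate-nearest-neighbor index; the fixed threshold here only stands in for Qin's unspecified "predetermined condition."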

Prosecution Timeline

Aug 02, 2024 — Application Filed
Feb 26, 2026 — Non-Final Rejection: §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597191
FACE IMAGE GENERATION METHOD AND DEVICE FOR GENERATING FULLY-CONTROLLABLE TALKING FACE
2y 5m to grant — Granted Apr 07, 2026
Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 0% (-100.0%)
Median Time to Grant: 2y 0m
PTA Risk: Low
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
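These projections reduce to arithmetic over the examiner's two resolved cases; a minimal sketch, where the with/without-interview split is an assumption consistent with the -100.0% lift displayed:

```python
# Grant probability is just the career allow rate over resolved cases.
granted, resolved = 2, 2
allow_rate = 100.0 * granted / resolved  # percent

# Interview lift: resolution rate with an interview minus the rate without one.
# The 0% / 100% split below is assumed from the displayed figures, not raw data.
with_interview_rate, without_interview_rate = 0.0, 100.0
interview_lift = with_interview_rate - without_interview_rate
```

With only two resolved cases, both figures are extremely noisy; a single additional outcome would swing them dramatically.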
