Prosecution Insights
Last updated: April 19, 2026
Application No. 18/777,847

SYSTEMS AND METHODS FOR GENERIC CONTROL USING A NEURAL SIGNAL

Status: Non-Final OA (§103)
Filed: Jul 19, 2024
Examiner: CERULLO, LILIANA P
Art Unit: 2621
Tech Center: 2600 (Communications)
Assignee: Synchron Australia Pty Limited
OA Round: 3 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 74% (702 granted / 944 resolved), +12.4% vs TC avg (above average)
Interview Lift: +21.5% among resolved cases with an interview
Typical Timeline: 2y 6m average prosecution
Currently Pending: 27
Career History: 971 total applications across all art units

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 22.2% (-17.8% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 944 resolved cases.

Office Action

§103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/5/2026 has been entered. Currently, claims 1-2, 5, 7-8, 10-15 and 17-24 are pending.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5, 7-8, 10-15, 17-19 and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Segal in US 2015/0091791 (hereinafter Segal) in view of Sereshkeh et al. in US 2019/0107888 (hereinafter Sereshkeh).

Regarding claim 1, Segal discloses a brain-computer interface system (Segal's Fig. 1 and par. 3, 77: see 10) for use by an individual (Segal's Fig. 1 and par. 77: see 11) having a neural interface (Segal's Fig. 1 and par. 77, 79: see 12) configured to measure a neural-related signal (Segal's Fig. 1 and par. 77: electric fields or signals representing brain waves) associated with a thought of the individual (Segal's par. 77: thoughts that result in identifiable electric field or signals), the brain-computer interface system comprising: one or more end applications (Segal's Fig. 1 and par. 77: controller 18 which performs action or function of device 17, such as a computer application or control of appliances and devices of par. 104) associated with an external device (Segal's Fig. 1 and par. 77: device 17 and display 19) and in communication with a processing unit comprising a processor (Segal's Fig. 1: see 21, 14 or 18), the one or more end applications (Segal's Fig. 1 and par. 77: controller 18 which performs action or function of device 17, such as a computer application or control of appliances and devices of par. 104) having one or more input commands (Segal's Fig. 1 and par. 77: command for performing actions or functions) that control the external device (Segal's Fig. 1 and par. 77: see 17 and 19); the processing unit (Segal's Fig. 1: see 21, 14 or 18) in communication with the neural interface (Segal's Fig. 1 and par. 77: see 21/14/18 in communication with 12) and configured to: receive the neural-related signal while the individual thinks about the thought (Segal's par. 13, 64: output of EEG [neural-related signal] while thinking the directional intention [thought]); train by detecting the neural-related signal (Segal's par. 64); store the thought and the neural-related signal in a reference library (Segal's par. 56: database of runes [reference library] storing directional intention or pattern [thought] and output of EEG [neural-related signal]); associate the thought with an input command (Segal's par. 56: library where thought-to-command/action mappings are stored); receive another neural-related signal (Segal's par. 65: brain waves related to directional intention read by brain-scanning device) and determine the thought from the reference library based on the neural-related signal (Segal's par. 64-65: determine line pattern [thought] based on brain wave); and transmit the input command (Segal's par. 17: control signal to perform an action output to a device) associated with the thought (Segal's par. 17: directional intention) to the external device (Segal's Fig. 1 and par. 77: device 17 and display 19) to allow the individual to independently control the one or more end applications (Segal's Fig. 1 and par. 77: controller 18 which performs action or function of device 17, such as a computer application or control of appliances and devices of par. 104), wherein the thought of the individual (Segal's par. 8-9, 45, 48: movement of a body part, par. 40: motor imagery) comprises a task-irrelevant thought (Segal's Fig. 1 and par. 77: command for performing actions or functions, e.g. execute current selection of par. 61, or command to open or browse per par. 58) of a real or imagined movement of one or more body parts (Segal's par. 8-9, 45, 48: movement of a body part, par. 40: motor imagery).

Segal fails to disclose specifics of the training, and therefore Segal fails to disclose: repeatedly receive the neural-related signal while the individual thinks about the thought; train a mathematical model or algorithm by repeatedly detecting the neural-related signal, wherein the mathematical model or algorithm comprises parameters or hyperparameters optimized to distinguish the thought from other neural-related signals of the individual; or provide visual or auditory feedback to the individual during the training, the feedback comprising indicating whether the neural-related signal detected matches the thought.
However, in the same field of BCIs, Sereshkeh discloses repeatedly receiving a neural-related signal while the individual thinks about a thought (Sereshkeh's par. 97-100: during a training session, participants repeatedly think an answer [thought] and the EEG signal is received); training a mathematical model or algorithm by repeatedly detecting the neural-related signal (Sereshkeh's par. 99, 109: training sessions for classification algorithms), wherein the mathematical model or algorithm comprises parameters or hyperparameters optimized to distinguish the thought from other neural-related signals of the individual (Sereshkeh's par. 109, 129, 200: hyperparameters or parameters that yield the lowest error [optimized] to classify answers [distinguish a thought from other neural-related signals]); and providing visual or auditory feedback to the individual during the training (Sereshkeh's par. 110), the feedback comprising indicating whether the neural-related signal detected matches the thought (Sereshkeh's Fig. 6 and par. 99-100: see feedback which indicates whether the answer was detected according to the thought).

Therefore, it would have been obvious to one of ordinary skill in the art that Segal's training (Segal's par. 64) is performed by Sereshkeh's method (Sereshkeh's Fig. 6 and par. 97-100, 109-110 and 200), in order to obtain the predictable result of training (Segal's par. 64) and the benefit of a model that is highly accurate (Sereshkeh's par. 65).

By doing such combination, Segal in view of Sereshkeh disclose: a brain-computer interface system (Segal's Fig. 1 and par. 3, 77: see 10) for use by an individual (Segal's Fig. 1 and par. 77: see 11) having a neural interface (Segal's Fig. 1 and par. 77, 79: see 12) configured to measure a neural-related signal (Segal's Fig. 1 and par. 77: electric fields or signals representing brain waves) associated with a thought of the individual (Segal's par. 77: thoughts that result in identifiable electric field or signals), the brain-computer interface system comprising: one or more end applications (Segal's Fig. 1 and par. 77: controller 18 which performs action or function of device 17, such as a computer application or control of appliances and devices of par. 104) associated with an external device (Segal's Fig. 1 and par. 77: device 17 and display 19) and in communication with a processing unit comprising a processor (Segal's Fig. 1: see 21, 14 or 18), the one or more end applications (Segal's Fig. 1 and par. 77: controller 18 which performs action or function of device 17, such as a computer application or control of appliances and devices of par. 104) having one or more input commands (Segal's Fig. 1 and par. 77: command for performing actions or functions) that control the external device (Segal's Fig. 1 and par. 77: see 17 and 19); the processing unit (Segal's Fig. 1: see 21, 14 or 18) in communication with the neural interface (Segal's Fig. 1 and par. 77: see 21/14/18 in communication with 12) and configured to: repeatedly receive the neural-related signal while the individual thinks about the thought (Sereshkeh's par. 97-100: during a training session, participants repeatedly think an answer [thought] and the EEG signal is received, which is equivalent to training while thinking a directional thought in Segal's par. 64); train a mathematical model or algorithm by repeatedly detecting the neural-related signal (Sereshkeh's par. 99, 109: training sessions for classification algorithms, which occurs in the training of Segal's par. 64), wherein the mathematical model or algorithm comprises parameters or hyperparameters optimized to distinguish the thought from other neural-related signals of the individual (Sereshkeh's par. 109, 129, 200: hyperparameters or parameters that yield the lowest error [optimized] to classify answers [distinguish a thought from other neural-related signals]); provide visual or auditory feedback to the individual during the training (Sereshkeh's par. 110), the feedback comprising indicating whether the neural-related signal detected matches the thought (Sereshkeh's Fig. 6 and par. 99-100: see feedback which indicates whether the answer was detected according to the thought); store the thought and the neural-related signal in a reference library (Segal's par. 56: database of runes [reference library] storing directional intention or pattern [thought] and output of EEG [neural-related signal]); associate the thought with an input command (Segal's par. 56: library where thought-to-command/action mappings are stored); receive another neural-related signal (Segal's par. 65: brain waves related to directional intention read by brain-scanning device) and determine the thought from the reference library based on the neural-related signal (Segal's par. 64-65: determine line pattern [thought] based on brain wave); and transmit the input command (Segal's par. 17: control signal to perform an action output to a device) associated with the thought (Segal's par. 17: directional intention) to the external device (Segal's Fig. 1 and par. 77: device 17 and display 19) to allow the individual to independently control the one or more end applications (Segal's Fig. 1 and par. 77: controller 18 which performs action or function of device 17, such as a computer application or control of appliances and devices of par. 104), wherein the thought of the individual (Segal's par. 8-9, 45, 48: movement of a body part, par. 40: motor imagery) comprises a task-irrelevant thought (Segal's Fig. 1 and par. 77: command for performing actions or functions, e.g. execute current selection of par. 61, or command to open or browse per par. 58) of a real or imagined movement of one or more body parts (Segal's par. 8-9, 45, 48: movement of a body part, par. 40: motor imagery).

Regarding claim 2, Segal in view of Sereshkeh disclose a telemetry unit (Segal's par. 43, 45, 62: brain-scanning device [EEG] that produces digital output) adapted for facilitating communication between the neural interface (Segal's par. 42-43: electrodes on scalp) and the processing unit (Segal's Fig. 1).

Regarding claim 5, Segal in view of Sereshkeh further disclose wherein the processing unit (Segal's Fig. 1: see 21, 14 or 18) is located within a host device (Sereshkeh's par. 291, 294-296: see servers or remote network resources). It would also have been obvious to one of ordinary skill in the art that Segal's processing unit is located within a host device (such as a server), in order to obtain the benefit of operating the system through remote network resources (Sereshkeh's Figs. 1-3 and par. 296).

Regarding claim 7, Segal in view of Sereshkeh disclose wherein the processing unit (Segal's Fig. 1: see 21, 14 or 18) comprises a wired connection or a wireless connection (Segal's par. 77: device 12 outputs wirelessly) to the neural interface (Segal's Fig. 1 and par. 77, 79: see 12).

Regarding claim 8, Segal in view of Sereshkeh disclose wherein the processing unit (Segal's Fig. 1: see 21, 14 or 18) is configured to transmit the input command (Segal's Fig. 1 and par. 77: command for performing actions or functions) to the one or more end applications (Segal's Fig. 1 and par. 77: controller 18 which performs action or function of device 17, such as a computer application or control of appliances and devices of par. 104) through a wired connection or a wireless connection (Segal's Fig. 1 and par. 34-35: wired/wireless transmitters and receivers for communication).

Regarding claim 10, Segal in view of Sereshkeh disclose wherein the processing unit (Segal's Fig. 1: see 21, 14 or 18) is further configured to identify a time-domain signal (Segal's par. 6: see frequency in Hz [time-domain signal]) from the neural-related signal (Segal's par. 6: brain's six wave types or bands).

Regarding claim 11, Segal in view of Sereshkeh disclose wherein the processing unit (Segal's Fig. 1: see 21, 14 or 18) is further configured to extract one or more features (Segal's par. 6: see frequency) from the neural-related signal (Segal's par. 6: brain's six wave types or bands).

Regarding claim 12, Segal in view of Sereshkeh further disclose wherein the one or more features (Segal's par. 6: see frequency) comprises a pattern of voltage fluctuations (limitation in the alternative), a fluctuation in a power in a specific band of frequencies embedded within the neural-related signals (Sereshkeh's par. 204-205: spectral power estimates of the ranges provided), or both (limitation in the alternative). It would also have been obvious to one of ordinary skill in the art that Segal extracts spectral power estimates of frequency bands (Sereshkeh's par. 204-205), in order to obtain the benefit of yielding a low classification error (Sereshkeh's par. 205).

Regarding claim 13, Segal in view of Sereshkeh disclose an internal telemetry unit (Segal's par. 43, 45, 62: brain-scanning device [EEG] that produces digital output, and par. 50, 63, 76: implanted processor) adapted for facilitating communication between the neural interface (Segal's par. 42-43: electrodes on scalp) and the processing unit (Segal's Fig. 1).

Regarding claim 14, Segal in view of Sereshkeh disclose wherein the processing unit (Segal's Fig. 1: see 21, 14 or 18) comprises a portable computing device (Segal's Fig. 1 and par. 79: device 12 is incorporated on a hat, wearable headset, or inserted/worn over the ear) comprising a memory (Segal's Fig. 1: see 20) and configured to communicate with the neural interface (Segal's par. 42-43: electrodes on scalp).
Regarding claim 15, Segal in view of Sereshkeh disclose wherein the processing unit (Segal's Fig. 1: see 21, 14 or 18) is further configured to re-associate the thought with a second input command (Segal's Fig. 8 and par. 80: a vertical directional thought is associated with "main menu" but also with "thought-to-speech commands" and with "appliances and devices"; par. 61: a down intention is associated with moving a menu selection lower and with executing the current selection).

Regarding claim 17, Segal in view of Sereshkeh disclose wherein the one or more end applications (Segal's Fig. 1 and par. 77: controller 18 which performs action or function of device 17, such as a computer application or control of appliances and devices of par. 104) comprises at least one of a mouse cursor (Segal's par. 41), a mobility device (Segal's par. 8, 14: wheelchair, car or adjustable bed), a prosthetic limb (Segal's par. 9), a smart phone (Segal's par. 14: mobile phones and tablet computers), a smart household appliance (Segal's par. 14, 73: appliance, television), and a smart household system (Segal's par. 14, 73: door, television).

Regarding claim 18, Segal in view of Sereshkeh disclose wherein the one or more end applications (Segal's Fig. 1 and par. 77: controller 18 which performs action or function of device 17, such as a computer application or control of appliances and devices of par. 104) comprise a user interface (Segal's Fig. 8 and par. 104: cognitive user interface CUI) that shows the one or more input commands (Segal's Fig. 8 and par. 104: Main Menu, Thought-to-Speech commands, computer applications, and appliances and devices) for a plurality of additional end user devices (Segal's Fig. 8: see thought-to-speech that includes devices producing audible output per par. 62, and appliances and devices, which includes televisions, beds, etcetera per par. 14).

Regarding claim 19, Segal in view of Sereshkeh disclose wherein the one or more end applications (Segal's Fig. 1 and par. 77: controller 18 which performs action or function of device 17, such as a computer application or control of appliances and devices of par. 104) are configured to provide visual feedback (Segal's par. 61: visual and audible output [feedback] for thought-to-speech [command]), auditory feedback (Segal's par. 61: visual and audible output [feedback] for thought-to-speech [command]), haptic feedback (limitation in the alternative) or a combination thereof when the input command is transmitted to the one or more end applications (Segal's par. 61: visual and audible output [feedback] for thought-to-speech [command]).

Regarding claim 23, Segal in view of Sereshkeh disclose wherein a function of the one or more input commands (Segal's Fig. 1 and par. 77: command for performing actions or functions) is definable by the one or more end applications (Segal's par. 60: thought-to-control component including manufactured devices with pre-defined line patterns for thought-control operation of those devices).

Regarding claim 24, Segal in view of Sereshkeh further disclose an application programming interface accessible by third parties (Segal's par. 60: COS accessible by third party) and which allows the thought to be assigned and reassigned to various input commands (Segal's par. 60: predefined line patterns, or importing line pattern definitions; par. 56: line pattern assigned to any actions for a near limitless library of thought-to-something; par. 99). It would also have been obvious to one of ordinary skill in the art that the thoughts are reassigned to various input commands, in order to obtain the predictable result of enabling the line patterns to be assigned to any action for a near limitless library of thought-to-something (Segal's par. 56, 60).

Claims 20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Segal in view of Sereshkeh as applied above, and further in view of Steiner et al. in US 2015/0338917 (hereinafter Steiner).
Regarding claim 20, Segal in view of Sereshkeh fail to disclose wherein the one or more end applications are configurable between an active state and an inactive state, wherein the processing unit is further configured to transmit the input command associated with the thought to one or more end applications in the active state.

However, in the same field of endeavor of BCIs, Steiner discloses one or more end applications (Steiner's par. 278-279: devices to control, such as a household appliance) that are configurable between an active state and an inactive state (Steiner's par. 279: between a device set to "currently active device" and not [inactive]) and transmitting the input command associated with the thought (Steiner's par. 278-279: command On or Off) to one or more end applications in the active state (Steiner's par. 279: command to operate the currently active device).

Therefore, it would have been obvious to one of ordinary skill in the art that Segal in view of Sereshkeh's end applications are configured as inactive or active to send the input command associated with the thought (as taught by Steiner), in order to obtain the benefit of uniquely identifying a device for control among multiple devices (Steiner's par. 278-279).

By doing such combination, Segal in view of Sereshkeh and Steiner disclose: wherein the one or more end applications (Segal's par. 104: a computer application, control of appliances and devices, equivalent to the devices of Steiner's par. 278-279) are configurable between an active state and an inactive state (Steiner's par. 279: between a device set to "currently active device" and not [inactive]), wherein the processing unit (Segal's Fig. 1: see 21, 14 or 18) is further configured to transmit the input command (Segal's Fig. 1 and par. 77: command for performing actions or functions, which is equivalent to command On or Off of Steiner's par. 278-279) associated with the thought (Segal's par. 17: directional intention) to one or more end applications in the active state (Steiner's par. 279: command to operate the currently active device).

Regarding claim 22, Segal in view of Sereshkeh fail to disclose wherein the processing unit is further configured to transmit the input command associated with the thought to one or more end applications chosen by the individual.

However, in the same field of endeavor of BCIs, Steiner discloses transmitting the input command associated with the thought (Steiner's par. 278-280: command On or Off, or Enable/Disable) to one or more end applications chosen by an individual (Steiner's par. 279-280: user chooses on and off for different devices, or enable/disable, for e.g. a gun or rifle).

Therefore, it would have been obvious to one of ordinary skill in the art that Segal in view of Sereshkeh's end applications are chosen by the user (as taught by Steiner), in order to obtain the benefit of uniquely identifying a device for control among multiple devices (Steiner's par. 278-279).

By doing such combination, Segal in view of Sereshkeh and Steiner disclose: wherein the processing unit (Segal's Fig. 1: see 21, 14 or 18) is further configured to transmit the input command (Segal's Fig. 1 and par. 77: command for performing actions or functions, equivalent to command On or Off, or Enable/Disable, of Steiner's par. 278-280) associated with the thought (Segal's par. 17: directional intention) to one or more end applications chosen by the individual (Steiner's par. 279-280: user chooses on and off for different devices, or enable/disable, for e.g. a gun or rifle).

Allowable Subject Matter

Claim 21 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 21, the prior art fails to disclose ALL limitations of claims 1 and 20 in addition to "wherein the processing unit is further configured to transmit the input command associated with the thought to one or more end applications in the active state and to one or more end applications in the inactive state". The closest prior art, Steiner (par. 278-279), fails to disclose these features.

Response to Arguments

Applicant's arguments, see Remarks, filed 2/5/2026, with respect to the rejection(s) of claim(s) 1 under 102(a)(1) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Segal and Sereshkeh; please see the above rejection addressing the amended limitations.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Liliana Cerullo, whose telephone number is (571) 270-5882. The examiner can normally be reached 8AM to 3PM MT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amr Awad, can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LILIANA CERULLO/
Primary Examiner, Art Unit 2621

Prosecution Timeline

Jul 19, 2024
Application Filed
Apr 14, 2025
Non-Final Rejection — §103
Oct 17, 2025
Response Filed
Nov 03, 2025
Final Rejection — §103
Feb 04, 2026
Examiner Interview Summary
Feb 04, 2026
Applicant Interview (Telephonic)
Feb 05, 2026
Request for Continued Examination
Feb 18, 2026
Response after Non-Final Action
Feb 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602105: SYSTEMS AND METHODS FOR RENDERING AUGMENTED REALITY CONTENT
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602120: ELECTRONIC PEN HAVING KNOCK MECHANISM TO PUSH AND RETRACT ELECTRONIC PEN MAIN BODY OUT OF AND INTO PEN HOUSING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602129: TOUCH CONTROL STRUCTURE AND DISPLAY APPARATUS WITH TOUCH SIGNAL LINES WITH DOUBLE-LAYER REGION IN A CORNER AREA
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12596472: METHODS FOR DISPLAYING A VISUAL INDICATION IN A USER INTERFACE BASED ON USER INTERACTION
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596471: DEVICE AND METHOD WITH TRAINED NEURAL NETWORK TO IDENTIFY TOUCH INPUT
Granted Apr 07, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 96% (+21.5% lift)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 944 resolved cases by this examiner. Grant probability derived from career allow rate.
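The headline figures above can be reproduced from the examiner's career counts. A minimal sketch, assuming the interview lift is simply added to the base allow rate and capped at 100% (the tool's actual model is not disclosed):

```python
# Reproduce the dashboard's headline numbers from the career data shown above.
# Assumption: the "+21.5% interview lift" is additive on the base allow rate;
# the actual model behind the dashboard is not disclosed.

granted = 702            # from "702 granted / 944 resolved"
resolved = 944

allow_rate = granted / resolved          # base grant probability (~0.744)
interview_lift = 0.215                   # "+21.5% Interview Lift"
with_interview = min(allow_rate + interview_lift, 1.0)

print(f"Career allow rate: {allow_rate:.0%}")      # 74%
print(f"With interview:    {with_interview:.0%}")  # 96%
```

Under this additive assumption, 702/944 is about 74.4%, and 74.4% + 21.5% is about 95.9%, which the dashboard rounds to 74% and 96%.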
