Prosecution Insights
Last updated: April 19, 2026
Application No. 18/684,589

VISION-BASED SYSTEM TRAINING WITH SIMULATED CONTENT

Non-Final OA — §103, §112
Filed: Feb 16, 2024
Examiner: PATEL, JAYESH A
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Tesla Inc.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 83% — above average (739 granted / 887 resolved; +21.3% vs TC avg)
Interview Lift: +5.2% (moderate), based on resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 33 applications currently pending
Career History: 920 total applications across all art units
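As a quick sanity check, the headline figures above are internally consistent. A minimal sketch in Python (illustrative only; the dashboard's exact methodology is not published, and the simple additive interview lift is an assumption):

```python
# Reproduce the examiner card's headline figures from the raw counts shown.
granted, resolved = 739, 887

allow_rate = granted / resolved                # 0.833... -> displayed as 83%
interview_lift = 0.052                         # +5.2% lift with an interview
with_interview = allow_rate + interview_lift   # assumed additive -> ~88%

print(f"Career allow rate: {allow_rate:.1%}")      # 83.3%
print(f"With interview:    {with_interview:.1%}")  # 88.5%, displayed as 88%
```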

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 25.0% (-15.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 887 resolved cases
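The "vs TC avg" deltas can be reproduced by assuming each is a simple difference between the examiner's rate and a single Tech Center average estimate; notably, all four displayed deltas back out to the same 40.0% figure. A hedged sketch:

```python
# Back out the "vs TC avg" deltas shown above (assumes each delta is
# examiner_rate - tc_avg_estimate; the displayed numbers imply a flat
# 40.0% Tech Center average estimate for every statute).
examiner_rate = {"101": 11.1, "103": 40.9, "102": 14.5, "112": 25.0}  # percent
tc_avg_estimate = 40.0  # percent; consistent with all four displayed deltas

for statute, rate in examiner_rate.items():
    delta = rate - tc_avg_estimate
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```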

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-4, 8-10 and 16 recite the limitation "the first and second ground truth data labels and values" in lines 1-2. There is insufficient antecedent basis for this limitation in the claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-8, 10-13 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Elluswamy et al. (US20200249685), hereafter Elluswamy, in view of Segal et al. (US20210150244), hereafter Segal.
1. Regarding claim 1, Elluswamy discloses a system (figs 1-3, 5-6; paras 0009, 0011-0012, 0015-0019, 0026-0030 and 0036-0042 show and disclose a system 100) for managing vision systems in vehicles, the system comprising:

a plurality of vehicles including systems for generating and processing vision data captured from one or more vision systems according to at least one machine learned algorithm, wherein the vision data captured from one or more vision systems is associated with ground truth labels (figs 1-3; paras 0009, 0011-0012, 0015-0019, 0027 and 0036-0039 disclose sensors 101 capturing images from vehicles, i.e. "other similar vehicles" per para 0017, to train a machine learning model using a time series of the captured data and the associated ground truth labels from other similar vehicles, meeting the above claim limitations);

one or more computing systems including processing devices and memory, that execute computer-executable instructions, for implementing a vision system information processing component that is operative to generate the at least one machine learned algorithm for execution by the plurality of vehicles, the at least one machine learned algorithm generated from a set of training data (figs 1-3; paras 0009, 0011-0012, 0015-0019 and 0027 show and disclose one or more computing systems including processing devices and memory, that execute computer-executable instructions, for implementing a vision system information processing component operative to generate the at least one machine learned algorithm for execution by the plurality of vehicles, i.e. "other similar vehicles" per para 0017, to train a machine learning model, the at least one machine learned algorithm generated from a set of training data); and

one or more computing systems including processing devices and memory, that execute computer-executable instructions, for implementing a vision system processing service operative to (paras 0009, 0016, 0018, 0020 and 0022 disclose one or more computing systems including processing devices and memory, that execute computer-executable instructions, for implementing a vision system processing service operative to):

obtain first vision system capture information associated with images captured in the operation of a vehicle, the first vision system capture information associated with a first instance of time (figs 1-3 show sensors and receiving sensor data; paras 0015, 0027, 0036, 0039, 0042 and 0045 disclose obtaining first vision system capture information associated with images captured in the operation of a vehicle, the first vision system capture information associated with a first instance of time);

obtain second vision system capture information associated with images captured in the operation of the vehicle, the second vision system capture information associated with a second instance of time, the second instance of time subsequent to the first instance of time (figs 1-3 show sensors and receiving sensor data in a time series, i.e. the second instance of time subsequent to the first; paras 0015, 0027, 0036, 0039, 0042 and 0045 disclose obtaining second vision system capture information associated with images captured in the operation of the vehicle, associated with a second instance of time subsequent to the first instance of time);

obtain ground truth data labels and values associated with the second vision system capture information (figs 1-3 show sensors and receiving sensor data in a time series, i.e. first, second, third, etc.; para 0027: "In some embodiments, sensor data including image data and odometry data is received to create a training data set. The sensor data may include still images and/or video from one or more cameras" (i.e. first, second, etc. vision capture systems). "Additional sensors such as radar, lidar, ultrasonic, etc. may be used to provide relevant sensor data. In various embodiments, the sensor data is paired with corresponding odometry data to help identify features of the sensor data. For example, location and change in location data can be used to identify the location of relevant features in the sensor data such as lane lines, traffic control signals, objects, etc. In some embodiments, the sensor data is a time series of elements and is used to determine a ground truth. The ground truth of the group is then associated with a subset of the time series." Paras 0036, 0039, 0042 and 0045 also disclose obtaining ground truth data labels and values (i.e. features data) associated with the second vision system capture information. Para 0036 discloses: "At 303, data related to the elements of the time series are received. In various embodiments, the related data is received at a training server along with the elements received at 301. In some embodiments, the related data is odometry data of the vehicle. Using location, orientation, change in location, change in orientation, and/or other related vehicle data, positional data of features identified in the elements of the time series can be labeled. For example, a lane line can be labeled with very accurate position by examining the time series of elements of the lane line. Typically, the lane line nearest the vehicle cameras is accurate and closely related to the position of the vehicle. In contrast, the XYZ position of the line furthest away from the vehicle is difficult to determine. The far sections of the lane line may be occluded (e.g., behind a bend or hill) and/or difficult to accurately capture (e.g., due to distance or lighting, etc.). The data related to the elements is used to label portions of features identified in the time series that are identified with a high degree of accuracy. In various embodiments, a threshold value is used to determine whether to associate an identified portion of a feature (such as a portion of a lane line) with the related data. For example, portions of a lane line identified with a high degree of certainty (such as portions near the vehicle) are associated with the related data while portions of a lane line identified with a degree of certainty below a threshold value (such as portions far away from the vehicle) are not associated with the related data of that element. Instead, another element of the time series, such as a subsequent element, with a higher degree of certainty and its related data are used." Examiner notes that the specifics of "values" are not required by the current claim); and

at least one of determine or update ground truth data labels and values associated with the first vision system capture information based on the obtained ground truth data labels and values associated with the second vision system capture information (figs 1-3 show sensors and receiving sensor data in a time series; paras 0015, 0027, 0036, 0039, 0042 and 0045 disclose at least one of determining or updating ground truth data labels and values associated with the first vision system capture information based on the obtained ground truth data labels and values associated with the second vision system capture information; para 0038 discloses, for example, that as each element in the time series is captured, a corresponding set of related data is captured and saved with the time series element).
Elluswamy, however, is silent on and fails to disclose: and store a set of ground truth labels and values for the first and second instance of time. Segal discloses storing a set of ground truth labels and values for the first and second instance of time (paras 0058, 0081-0082 and 0085-0087 disclose storing a set of ground truth labels and values for the first and second instance of time, i.e. ground truth data 165 at every time stamp, at different frequencies, in the database). Before the effective filing date of the invention, Elluswamy and Segal were combinable because they are from the same field of endeavor and are analogous art of image processing. The suggestion/motivation would be an efficient, high-speed and improved system (Segal, para 0058). Therefore, it would have been obvious to one of ordinary skill in the art to have recognized the advantages of Segal in the system of Elluswamy to obtain the invention as specified in claim 1.

2. Regarding claim 2, Elluswamy and Segal disclose the system as recited in claim 1. Elluswamy discloses odometry data including acceleration, location and orientation of the vehicle. Segal also discloses state-of-the-objects data (i.e. ground truth data) for prediction, such as speed, velocity and acceleration, in paras 0075-0076 and 0081-0087, meeting the limitations of wherein the first and second ground truth data labels and values correspond to velocity.

3. Regarding claim 4, Elluswamy and Segal disclose the system as recited in claim 1. Elluswamy further discloses wherein the first and second ground truth data labels and values correspond to position for detected objects (figs 5-6; para 0039 discloses wherein the first and second ground truth data labels and values correspond to position for detected objects, i.e. the path of the detected moving objects, meeting the claim limitations).

4. Regarding claim 5, Elluswamy and Segal disclose the system as recited in claim 1. Elluswamy further discloses wherein the vision system processing service is operative to determine an initial set of ground truth data labels and values associated with the first vision system capture information prior to obtaining the ground truth data labels and values associated with the second vision system capture information (figs 1-3 and 5-6; paras 0015, 0027, 0036, 0039, 0042, 0044 and 0045 show sensors and receiving sensor data (images) in a time series and the associated ground truth labels and values, i.e. first (prior) obtaining the vision system information and corresponding ground truth labels and values (data) before the second vision system (another camera) capture information, meeting the above claim limitations).

5. Regarding claim 6, Elluswamy and Segal disclose the system as recited in claim 1. Elluswamy further discloses wherein the vision system processing service is operative to determine the ground truth data labels and values associated with the second vision system capture information (figs 1-3 show sensors and receiving sensor data in a time series; paras 0015, 0027, 0036, 0039, 0042 and 0045).

6. Claim 7 is a corresponding method claim of claim 1. See the corresponding explanation of claim 1.

7. Claim 8 is a corresponding method claim of claim 2. See the corresponding explanation of claim 2.

8. Claim 10 is a corresponding method claim of claim 4. See the corresponding explanation of claim 4.
9. Claim 11 is a corresponding method claim of claim 5. See the corresponding explanation of claim 5.

10. Claim 12 is a corresponding method claim of claim 6. See the corresponding explanation of claim 6.

11. Regarding claim 13, Elluswamy and Segal disclose the method as recited in claim 7. Elluswamy further discloses wherein obtaining first vision system capture information associated with images captured in the operation of a vehicle, the first vision system capture information associated with a first instance of time, and obtaining second vision system capture information associated with images captured in the operation of the vehicle, the second vision system capture information associated with a second instance of time, the second instance of time subsequent to the first instance of time, is based on a capture rate (figs 1-3 and 5-6 show sensors and receiving sensor data in a time series, i.e. the second instance of time subsequent to the first; paras 0015, 0027, 0036, 0039, 0042 and 0045 disclose obtaining second vision system capture information associated with images captured in the operation of the vehicle, associated with a second instance of time subsequent to the first instance of time, based on the capture rate in para 0038).

12. Regarding claim 15, Elluswamy discloses a method (figs 1-3, 5-6; paras 0009, 0011-0012, 0015-0019, 0026-0030 and 0038-0042 show and disclose a system 100 and a method) for managing vision systems in vehicles, the method comprising:

obtaining ground truth data labels and values associated with first and second vision system capture information, wherein the first vision system capture information is associated with a first instance of time (figs 1-3 show sensors (first and second vision capture systems) and receiving sensor data (images) in a time series, i.e. first, second, third, etc.; paras 0015, 0027, 0036, 0039, 0042, 0044 and 0045 disclose obtaining first vision system capture information associated with images captured in the operation of a vehicle, the first vision system capture information associated with a first instance of time; see the quotations of paras 0027 and 0036 reproduced in the discussion of claim 1 above; examiner notes that the specifics of "values" are not required by the current claim) and wherein the second vision system capture information is associated with a second instance of time, the second instance of time subsequent to the first instance of time (figs 1-3 show sensors and receiving sensor data in a time series, i.e. the second instance of time subsequent to the first; paras 0015, 0027, 0039, 0042 and 0045 disclose obtaining second vision system capture information associated with images captured in the operation of the vehicle, associated with a second instance of time subsequent to the first instance of time);

updating ground truth data labels and values associated with the first vision system capture information based on the obtained ground truth data labels and values associated with the second vision system capture information (paras 0011, 0015, 0027, 0033 and 0039 disclose that a ground truth is determined for the time series of images from the cameras, and also disclose that, in some embodiments, in the event a moving vehicle enters into the lane of the autonomous vehicle over the time series, the moving vehicle is annotated as a cut-in vehicle (updating the ground truth data label), meeting the above claim limitations; examiner notes that the specifics of updating are not required by the current claim; para 0038 discloses, for example, that as each element in the time series is captured, a corresponding set of related data is captured and saved with the time series element).

Elluswamy, however, is silent on and fails to disclose: and storing a set of ground truth labels and values for the first and second instance of time.
Segal discloses storing a set of ground truth labels and values for the first and second instance of time (paras 0058, 0081-0082 and 0085-0087 disclose storing a set of ground truth labels and values for the first and second instance of time, i.e. ground truth data 165 at every time stamp, at different frequencies, in the database). Before the effective filing date of the invention, Elluswamy and Segal were combinable because they are from the same field of endeavor and are analogous art of image processing. The suggestion/motivation would be an efficient, high-speed and improved system (Segal, para 0058). Therefore, it would have been obvious to one of ordinary skill in the art to have recognized the advantages of Segal in the method/system of Elluswamy to obtain the invention as specified in claim 15.

13. Regarding claim 16, Elluswamy and Segal disclose the method as recited in claim 15. Elluswamy further discloses wherein the first and second ground truth data labels and values correspond to at least one of or position for detected objects (figs 5-6; para 0039 discloses that the ground truth is determined for the time series, identifying the path of the moving vehicle and annotating it as a cut-in vehicle, and also that the ground truth is represented as the three-dimensional trajectory (path or position) of the detected objects).

14. Regarding claim 17, Elluswamy and Segal disclose the method as recited in claim 15. Elluswamy further discloses determining an initial set of ground truth data labels and values associated with the first vision system capture information prior to obtaining the ground truth data labels and values associated with the second vision system capture information (figs 1-3 and 5-6; paras 0015, 0027, 0036, 0039, 0042, 0044 and 0045 show sensors and receiving sensor data (images) in a time series and the associated ground truth labels and values, i.e. first (prior) obtaining the vision system information and corresponding ground truth labels and values (data) before the second vision system (another camera) capture information, meeting the above claim limitations).

15. Regarding claim 18, Elluswamy and Segal disclose the method as recited in claim 15. Elluswamy further discloses determining the ground truth data labels and values associated with the second vision system capture information (figs 1-3 show sensors and receiving sensor data in a time series; paras 0015, 0027, 0036, 0039, 0042 and 0045 disclose the vision system processing service operative to determine the ground truth data labels and values associated with the second vision system capture information).

16. Regarding claim 19, Elluswamy and Segal disclose the method as recited in claim 15. Elluswamy further discloses obtaining first vision system capture information associated with images captured in the operation of a vehicle (figs 1-3 and 5-6; paras 0011, 0015 and 0027 disclose capturing a time series of images from the cameras (sensors, i.e. first vision system capture, second vision system capture, etc.) as the vehicle travels (i.e. in operation), meeting the above claim limitations).

17. Regarding claim 20, Elluswamy and Segal disclose the method as recited in claim 19.
Elluswamy further discloses obtaining second vision system capture information associated with images captured in the operation of the vehicle, the second vision system capture information based on a capture rate (figs 1-3 and 5-6 show sensors and receiving sensor data in a time series, i.e. the second instance of time subsequent to the first; paras 0015, 0027, 0036, 0039, 0042 and 0045 disclose obtaining second vision system capture information associated with images captured in the operation of the vehicle, associated with a second instance of time subsequent to the first instance of time, based on the capture rate in para 0038).

Claims 3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Elluswamy in view of Segal, and further in view of NPL4 (End to End Learning for Self-Driving Cars, Mariusz Bojarski et al., arXiv, 2016, pages 1-9), hereafter NPL4.

18. Regarding claim 3, as best understood by the examiner, Elluswamy and Segal disclose the system as recited in claim 1. Elluswamy discloses the first and second ground truth labels and values, as seen in claim 1. Segal also discloses state-of-the-objects data (i.e. ground truth data) for prediction, such as speed, velocity and acceleration, in paras 0075-0076 and 0081-0087. Elluswamy and Segal, however, are silent on and fail to disclose wherein the first and second ground truth data labels and values correspond to yaw. NPL4 discloses wherein the first and second ground truth data labels and values correspond to yaw (page 6 discloses, for each frame in the video, "we called this position the 'ground truth'", i.e. the first and second ground truth labels and values for each frame (the first and second frames) and the corresponding position/distance as yaw, meeting the above claim limitations). Before the effective filing date of the invention, Elluswamy, Segal and NPL4 were combinable because they are from the same field of endeavor and are analogous art of image processing. The suggestion/motivation would be a robust and faster (reduced data processing) system (page 9, section 8). Therefore, it would have been obvious to one of ordinary skill in the art to have recognized the advantages of NPL4 in the system/method of Elluswamy and Segal to obtain the invention as specified in claim 3.

19. Claim 9 is a corresponding method claim of claim 3. See the corresponding explanation of claim 3.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Elluswamy in view of Segal, and further in view of NPL 5 (Real-Time Webcam Heart-Rate and Variability Estimation with Clean Ground Truth for Evaluation, Amogh Gudi et al., MDPI, 2020, pages 1-24), hereafter NPL 5.

20. Regarding claim 14, Elluswamy and Segal disclose the method as recited in claim 13. Elluswamy discloses the capture rate in para 0038. Segal also discloses processing the scenes (images or frames) sampled at 10 Hz in para 0085. Elluswamy and Segal, however, do not recite wherein the capture rate is 24 hertz. NPL 5 shows in fig 3 capture rates of 20, 25, 30, 50 and 61 fps (frames per second, i.e. Hz). Examiner notes that, from the above teachings of NPL 5, a capture rate of 24 Hz (fps) would be obvious and within the level of one of ordinary skill in the art. The rationale supporting the rejection is KSR rationale B, i.e. simple substitution of one known element for another to obtain predictable results. See MPEP 2143 I.
Examiner notes that a simple substitution of the capture rate, from the various capture rates seen in fig 3 of NPL 5, to 24 Hz would yield predictable results as claimed, wherein the capture rate is 24 hertz.

Examiner's Note: Examiner has cited figures and paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. The applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. Examiner has also cited references in the PTO-892 that are not relied upon but are relevant and pertinent to the applicant's disclosure, and may also read (anticipatory/obvious) on the claims and claimed limitations. Applicant is advised to consider these references in preparing the response/amendments in order to expedite prosecution.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAYESH PATEL, whose telephone number is (571) 270-1227. The examiner can normally be reached Mon-Fri. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at 571-270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JAYESH PATEL
Primary Examiner
Art Unit 2677

/JAYESH A PATEL/
Primary Examiner, Art Unit 2677
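For readers mapping the §103 rejection onto the claims, the dispute centers on deriving ground truth for an earlier capture from a later capture in the time series, then storing labels for both instances of time (the storing step the examiner maps to Segal rather than Elluswamy). The sketch below is a hypothetical illustration of that limitation only; the Capture, propagate_ground_truth and store names are invented for exposition and come from neither reference nor the application.

```python
# Hypothetical sketch of the claim-1/claim-15 limitation at issue: capture
# frames at two instances of time, back-fill the earlier frame's ground truth
# from the later frame's labels, and store labels for BOTH instances of time.
from dataclasses import dataclass, field

CAPTURE_RATE_HZ = 24  # claim 14 recites a 24 Hz capture rate

@dataclass
class Capture:
    t: float                                     # capture timestamp (s)
    image: bytes                                 # raw vision-system frame
    labels: dict = field(default_factory=dict)   # ground truth labels/values

store: dict[float, dict] = {}  # persisted labels keyed by instance of time

def propagate_ground_truth(first: Capture, second: Capture) -> None:
    """Update the first capture's labels from the second (later) capture,
    then store a set of labels for the first and second instance of time."""
    for name, value in second.labels.items():
        # e.g. a lane line seen clearly at t2 back-fills its position at t1
        first.labels.setdefault(name, value)
    store[first.t] = dict(first.labels)
    store[second.t] = dict(second.labels)

t1 = Capture(t=0.0, image=b"...")
t2 = Capture(t=1.0 / CAPTURE_RATE_HZ, image=b"...",
             labels={"lane_line_position": (1.2, 0.0, 30.5)})
propagate_ground_truth(t1, t2)
print(store)  # labels now stored for both the first and second instance
```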

Prosecution Timeline

Feb 16, 2024
Application Filed
Feb 24, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597170
METHOD AND APPARATUS FOR IMMERSIVE VIDEO ENCODING AND DECODING, AND METHOD FOR TRANSMITTING A BITSTREAM GENERATED BY THE IMMERSIVE VIDEO ENCODING METHOD
Granted Apr 07, 2026 · 2y 5m to grant
Patent 12579770
DETECTION SYSTEM, DETECTION METHOD, AND NON-TRANSITORY STORAGE MEDIUM
Granted Mar 17, 2026 · 2y 5m to grant
Patent 12561949
CONDITIONAL PROCEDURAL MODEL GENERATION
Granted Feb 24, 2026 · 2y 5m to grant
Patent 12555346
Automatic Working System, Automatic Walking Device and Control Method Therefor, and Computer-Readable Storage Medium
Granted Feb 17, 2026 · 2y 5m to grant
Patent 12536636
METHOD AND SYSTEM FOR EVALUATING QUALITY OF A DOCUMENT
Granted Jan 27, 2026 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 88% (+5.2%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 887 resolved cases by this examiner. Grant probability derived from career allow rate.
