Prosecution Insights
Last updated: April 19, 2026
Application No. 17/936,933

System and Method for Trailer and Trailer Coupler Recognition via Classification

Non-Final OA — §103, §112
Filed: Sep 30, 2022
Examiner: NASHER, AHMED ABDULLALIM-M
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Continental Automotive Systems Inc.
OA Round: 3 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% — above average (80 granted / 99 resolved; +18.8% vs TC avg)
Interview Lift: +34.4% among resolved cases with interview
Typical Timeline: 2y 9m avg prosecution; 17 applications currently pending
Career History: 116 total applications across all art units

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 63.1% (+23.1% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 99 resolved cases

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/08/2025 has been entered.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 (and dependent claim 4), 13 (and dependent claim 14), and 19 (and dependent claim 20) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The term "at least partly" in claims 3, 13, and 19 is a relative term which renders the claim indefinite. The term "at least partly" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The examiner recommends removing "at least partly" or further defining in the claims what is considered "partly."

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 5-7, 11, 12, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Assa (US 20210034902 A1), in view of Daga (US 20210034903 A1), and further in view of Costa (US 10744943 B1).
Regarding claims 1 and 11, Assa discloses:

obtaining a database of descriptor clusters, each descriptor cluster having ([0016]: Prior to detection, the decision trees may be trained using annotated data. For example, the annotation data includes annotating landmark points in each image of a data set (e.g., a set of image data captured by the rear viewing camera that includes the trailer and trailer coupler point). For example, each landmark 30 point may be labeled with an x-coordinate and a y-coordinate pixel position. The landmark points may be chosen or selected or detected from the coupler region which may determine the shape of the front face or front profile of the trailer. In some implementations, one of the landmark points is the coupler point itself. The trained model may then be used on images within the trailer's bounding box (acquired from the previous trailer detection) to locate the landmarks of the trailer. The landmark corresponding to the coupler point of the trailer is then saved.);

receiving, at data processing hardware, image data pertaining to one or more images of a trailer, trailer coupler, or a background ([0004]: The control comprises circuitry that includes an image processor operable to process image data captured by the camera that is representative of at least the front face or front profile of the trailer.);

for each determined descriptor, matching, by the data processing hardware, the determined descriptor with a descriptor cluster in the database ([0014]: To determine or detect the location of the trailer, the system may use one or more classifiers. Image data captured by the camera(s) may be split into one or more sections or patches and the control may process or evaluate the patches one at a time and determine if a trailer is present in each patch. The control may sweep or process each patch at multiple different scales (i.e., upscaled and downscaled image data). In some implementations, the system uses a two-step classifier. For example, the first stage of the classifier may include a linear Support Vector Machine (SVM) that filters out the majority (e.g., 99 percent) of negative patches (i.e., patches that do not include a trailer). [0016]: However, regardless of the shape, one of the landmarks is typically the coupler point. Prior to detection, the decision trees may be trained using annotated data.);

assigning the label corresponding to the matched descriptor cluster to the determined descriptor corresponding to the trailer, trailer coupler, or the background ([0016]: For example, each landmark 30 point may be labeled with an x-coordinate and a y-coordinate pixel position. The landmark points may be chosen or selected or detected from the coupler region which may determine the shape of the front face or front profile of the trailer. In some implementations, one of the landmark points is the coupler point itself.); and

based upon the determined descriptors having the assigned label corresponding to at least one of a trailer or a trailer coupler, determining, by the data processing hardware, a convex hull for the determined descriptors of the received image of the at least one of the trailer or the trailer coupler in the one or more images (fig. 3; [0016]: Referring now to FIG. 3, an ensemble of regression trees may be used to detect the landmarks 30 of the trailer. Locations of landmarks for a respective trailer are dependent upon a shape of the trailer. However, regardless of the shape, one of the landmarks is typically the coupler point. Prior to detection, the decision trees may be trained using annotated data, as quoted above. [0018]: In order to increase efficiency and reduce processing time, the bounding box may, for example, highlight an area for further processing, which may reduce overall processing by allowing the control to focus on a particular area (i.e., regions of interest) of the image instead of the entire image. The control then detects or determines or estimates the landmarks of the trailer's front face (e.g., points along an outline or perimeter of the trailer's front face or profile, which is at or within the bounding box) and determines and saves the landmark that corresponds to the trailer's coupler point at 46.).

Assa does not explicitly disclose determining, by data processing hardware, features and descriptors in the received image data. In a similar field of endeavor of trailer hitch detection, Daga teaches determining, by data processing hardware, features and descriptors in the received image data ([0020]: The generated model detects landmark points on the testing set of data at 28 to validate the accuracy of the generated model. In some examples, intensity values are used as features of the generated model.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Assa's disclosure of trailer coupler detection with Daga's teaching of feature extraction, in order to assist a driver in maneuvering the vehicle and trailer in a rearward direction, or the vehicle toward the trailer ([0014]).

Assa and Daga do not disclose or teach a plurality of descriptors, each of the descriptors being a texture.
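The final limitation above computes a convex hull over the descriptors labeled as trailer or trailer coupler. As an illustration only (not the claimed implementation), the sketch below runs Andrew's monotone-chain convex hull over a handful of hypothetical labeled keypoint coordinates; the pixel positions and labels are invented for the example.

```python
def convex_hull(points):
    """Andrew's monotone chain: return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the last point of each half: it repeats the other half's first point
    return lower[:-1] + upper[:-1]

# hypothetical keypoints whose descriptors were labeled "trailer" or "coupler"
labeled = {(2, 1): "trailer", (5, 1): "trailer", (5, 4): "coupler",
           (2, 4): "trailer", (3, 2): "trailer"}  # (3, 2) is interior
hull = convex_hull([p for p, lbl in labeled.items() if lbl in ("trailer", "coupler")])
```

Interior keypoints such as `(3, 2)` fall inside the hull and are excluded from its vertex list, leaving the outline of the detected trailer region.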
In a similar field of endeavor of trailer hitch recognition, Costa teaches obtaining a database of descriptor clusters, each descriptor cluster having a plurality of descriptors, each of the descriptors being a texture (col. 12, lines 1-10: Additionally, each of the trailer types 110 and the underlying portions of the depicted trailers may comprise a variety of colors and/or surface finishes, which may create a wide range of variations in color and reflection of light depicted in the image data captured by the imaging system 60. By categorizing the image data into the various trailer portions 100a and non-trailer portions 100b, the controller 14 may provide for improved accuracy in the detection of the coupler position 24 by consistently tracking the various categories over the sequence of image frames captured by the imaging system 60.); wherein each descriptor cluster has at least one label assigned thereto, each at least one label being a label for a trailer, a trailer coupler, or a background (col. 11, lines 56-65: As shown, each of FIGS. 6A, 6B, 6C, and 6D demonstrate a first trailer type 110a, a second trailer type 110b, a third trailer type 110c, and a fourth trailer type 110d, respectively. Each of the trailer types 110 may comprise one or more variations in body style (e.g., a recreational vehicle, utility trailer, boat trailer, horse trailer, etc.), tongue style (e.g., iChannel, A-frame, custom, etc.), and/or various trailer coupler styles (e.g., straight channel flat mount, collar lock, brake actuator, A-frame, adjustable height, coupler locks, etc.).); and for each determined descriptor, matching, by the data processing hardware, the determined descriptor with a descriptor associated with a descriptor cluster in the database (col. 12, lines 9-16: The categorization of the image data may allow the controller 14 to monitor the constituent portions of the image data in each of the image frames and compare the categorized portions to limit false detections of the coupler 16. For example, the false identifications may be limited by filtering transient variations that do not vary consistently with the identified image data categories 100 in the image data and the motion of the vehicle 12.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Assa and Daga's disclosure of feature extraction with Costa's teaching of trailer textures/finishes, in order to make an inference of the positioning of the hitch ball and hitch based on experience with a particular vehicle and trailer, without having to stop and step out of the vehicle to confirm alignment (col. 1, lines 22-25).

Regarding claims 2 and 12, Assa discloses generating labels and descriptors for the training image data, and clustering the descriptors ([0016]: Prior to detection, the decision trees may be trained using annotated data. For example, the annotation data includes annotating landmark points in each image of a data set (e.g., a set of image data captured by the rear viewing camera that includes the trailer and trailer coupler point). For example, each landmark 30 point may be labeled with an x-coordinate and a y-coordinate pixel position. The landmark points may be chosen or selected or detected from the coupler region which may determine the shape of the front face or front profile of the trailer. In some implementations, one of the landmark points is the coupler point itself. The trained model may then be used on images within the trailer's bounding box (acquired from the previous trailer detection) to locate the landmarks of the trailer. The landmark corresponding to the coupler point of the trailer is then saved.).

Assa does not explicitly teach receiving training image data, or generating the database of descriptor clusters from the clustered descriptors. In a similar field of endeavor of trailer hitch detection, Daga teaches receiving training image data ([0020]: In the second step, the system divides the annotated data into a training set at 22 and a testing set at 24. That is, a portion of the annotated data becomes training data and a separate portion of the annotated data becomes testing data.), and generating the database of descriptor clusters from the clustered descriptors ([0020]: In the third step, the system tunes the parameters of the cascading regression trees based on the training and testing data error determined during training to generate a detection model for detecting the coupler point of the trailer. The generated model detects landmark points on the testing set of data at 28 to validate the accuracy of the generated model. In some examples, intensity values are used as features of the generated model.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Assa's disclosure of trailer coupler detection with Daga's teaching of feature extraction, in order to assist a driver in maneuvering the vehicle and trailer in a rearward direction, or the vehicle toward the trailer ([0014]).

Regarding claim 5, Assa discloses wherein clustering the descriptors comprises unsupervised learning ([0017]: In some implementations, an unsupervised learning method (e.g., mixture models, K-Means, etc.) is used to generate a single point as the coupler's location in the camera image.).

Regarding claim 6, Assa discloses wherein clustering the descriptors comprises using a k-means clustering algorithm ([0017], quoted above).

Regarding claim 7, Assa discloses wherein clustering the descriptors comprises using a support vector machine (SVM) learning algorithm ([0014]: For example, the first stage of the classifier may include a linear Support Vector Machine (SVM) that filters out the majority (e.g., 99 percent) of negative patches (i.e., patches that do not include a trailer).).

Claim 15 is a combination of claims 6 and 7 and is rejected with the same reasoning as above. Claim 18 is a combination of claims 1 and 2 and is rejected with the same reasoning as above.

Claims 3, 4, 8, 9, 13, 14, 16, 19, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Assa (US 20210034902 A1), in view of Daga (US 20210034903 A1), in view of Costa (US 10744943 B1), and further in view of Dube (US 9582735 B2).
Regarding claims 3, 13, and 19, Assa, Daga, and Costa do not disclose, but in a similar field of endeavor of scalable image matching, Dube teaches adding weights to each descriptor cluster, wherein generating the database of descriptor clusters is based at least partly upon the weights added to the descriptor clusters (claim 4: wherein training the one or more classifiers includes computing one or more parameters based on the first set of characteristics and the second set of characteristics, the one or more parameters including weight values applied to the one or more training features extracted using the ASG algorithm.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Assa, Daga, and Costa's disclosure of texture extraction with Dube's teaching of weighting clusters, in order to minimize storage of compressed descriptor representations while utilizing machine learning to compensate for information lost as a result of the compression (abstract).

Regarding claims 4, 14, and 20, Assa, Daga, and Costa do not disclose, but in a similar field of endeavor of scalable image matching, Dube teaches wherein adding weights to each descriptor cluster uses a term frequency-inverse document frequency algorithm (col. 7, lines 59-64: These assigned words are then compared against index 316 by index searcher 906 to identify or extract the best tf-idf image matches 908. Tf-idf (term frequency-inverse document frequency) is a statistic reflecting how important an assigned word is to a respective image in index 316 and is used as a weighting factor.).
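Dube's tf-idf weighting, relied on for claims 4, 14, and 20, scores a visual word by how frequent it is within one image and how rare it is across the image index. The sketch below is illustrative only; the visual-word names and image histograms are hypothetical, not drawn from any cited reference.

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """docs: one list of visual words per image. Returns per-image {word: tf-idf}."""
    n = len(docs)
    # document frequency: how many images contain each visual word at least once
    df = Counter(w for doc in docs for w in set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        # tf-idf = (term frequency in this image) * log(N / document frequency)
        out.append({w: (c / total) * math.log(n / df[w]) for w, c in tf.items()})
    return out

# hypothetical visual-word lists for three images
images = [["edge", "edge", "corner"],
          ["edge", "blob"],
          ["edge", "blob", "corner"]]
weights = tfidf_weights(images)
```

A word like `"edge"` that appears in every image gets an idf of log(1) = 0, so its weight vanishes; rarer words such as `"corner"` receive positive weight and dominate the match score.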
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Assa, Daga, and Costa's disclosure of texture extraction with Dube's teaching of tf-idf weighting, in order to minimize storage of compressed descriptor representations while utilizing machine learning to compensate for information lost as a result of the compression (abstract).

Regarding claims 8, 16, and 21, Assa, Daga, and Costa do not disclose, but in a similar field of endeavor of scalable image matching, Dube teaches performing a pyramid of scales algorithm on the received image data to produce scale invariant image data (col. 4, lines 23-25: The image pyramid, in this example, can be the scale-space representation of a respective image (i.e., it contains various pyramid images), each of which is a representation of the respective image at a particular scale.), wherein determining features and descriptors comprises determining features and descriptors of the scale invariant image data (col. 5, lines 20-24: The feature descriptors may be extracted using a feature extraction algorithm, such as Accumulated Signed Gradient (ASG), a Scale-Invariant Feature Transform (SIFT) algorithm or the like.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Assa, Daga, and Costa's disclosure of texture extraction with Dube's teaching of an image pyramid, in order to minimize storage of compressed descriptor representations while utilizing machine learning to compensate for information lost as a result of the compression (abstract).
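The pyramid-of-scales step recited in claims 8, 16, and 21 can be illustrated by repeatedly downsampling an image and extracting features at every level, so a trailer appears at a detectable size at some level regardless of distance. This is a minimal sketch using 2x2 box-average downsampling on a hypothetical 4x4 grayscale patch; real pyramids typically blur before subsampling.

```python
def downsample(img):
    """Halve each dimension by averaging non-overlapping 2x2 blocks."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(w)] for r in range(h)]

def image_pyramid(img, levels):
    """Return [full-res, half-res, quarter-res, ...]; one entry per scale."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

# hypothetical 4x4 grayscale patch
patch = [[10, 10, 20, 20],
         [10, 10, 20, 20],
         [30, 30, 40, 40],
         [30, 30, 40, 40]]
pyr = image_pyramid(patch, levels=3)
```

Descriptors computed independently at each level are then matched against the database, which is what makes the overall matching tolerant to scale.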
Regarding claim 9, Assa, Daga, and Costa do not disclose, but in a similar field of endeavor of scalable image matching, Dube teaches wherein determining descriptors of the received image data comprises performing one of a SIFT, SURF, BRIEF, rBRIEF, HOG, or a neural network visual descriptor algorithm (col. 5, lines 20-24: The feature descriptors may be extracted using a feature extraction algorithm, such as Accumulated Signed Gradient (ASG), a Scale-Invariant Feature Transform (SIFT) algorithm or the like.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Assa, Daga, and Costa's disclosure of texture extraction with Dube's teaching of visual descriptor algorithms, in order to minimize storage of compressed descriptor representations while utilizing machine learning to compensate for information lost as a result of the compression (abstract).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Assa (US 20210034902 A1), in view of Daga (US 20210034903 A1), in view of Costa (US 10744943 B1), and further in view of Bell (US 10635933 B2).

Regarding claim 10, Assa, Daga, and Costa do not disclose, but in a similar field of endeavor of determining trailer presence, Bell teaches wherein determining features of the received image data comprises performing one of a FAST, Harris Corners, or a boundary-based corner detection algorithm (col. 7, lines 39-41: To identify features, the example features handler 304 of FIG. 3 computes a Harris corner response across the first frame of the burst of frames.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Assa, Daga, and Costa's disclosure of texture extraction with Bell's teaching of Harris corner algorithms, in order to determine if a trailer is present by comparing a feature score of a feature to a threshold (abstract).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Assa (US 20210034902 A1), in view of Daga (US 20210034903 A1), in view of Costa (US 10744943 B1), in view of Dube (US 9582735 B2), and further in view of Bell (US 10635933 B2).

Regarding claim 17, Assa, Daga, and Costa do not disclose, but in a similar field of endeavor of scalable image matching, Dube teaches wherein determining descriptors of the received image data comprises performing one of a SIFT, SURF, BRIEF, rBRIEF, HOG, or a neural network visual descriptor algorithm (col. 5, lines 20-24, quoted above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Assa, Daga, and Costa's disclosure of texture extraction with Dube's teaching of visual descriptor algorithms, in order to minimize storage of compressed descriptor representations while utilizing machine learning to compensate for information lost as a result of the compression (abstract).

Assa, Daga, and Dube do not disclose or teach that determining features of the received image data comprises performing one of a FAST, Harris Corners, or a boundary-based corner detection algorithm. In a similar field of endeavor of determining trailer presence, Bell teaches wherein determining features of the received image data comprises performing one of a FAST, Harris Corners, or a boundary-based corner detection algorithm (col. 7, lines 39-41, quoted above).
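Bell's Harris corner response, cited for claims 10 and 17, scores a pixel by the gradient structure in a local window: strongly positive at corners, negative along edges, near zero in flat regions. The following self-contained sketch is illustrative only; the synthetic frame (a bright square whose top-left corner sits at row 4, column 4) and the 3x3 window size are assumptions, not details from Bell.

```python
def harris_response(img, y, x, k=0.04):
    """Harris corner score at (y, x) from a 3x3 window of central-difference gradients."""
    sxx = sxy = syy = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            r, c = y + dy, x + dx
            ix = (img[r][c + 1] - img[r][c - 1]) / 2.0  # horizontal gradient
            iy = (img[r + 1][c] - img[r - 1][c]) / 2.0  # vertical gradient
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# hypothetical 10x10 frame: bright square occupying rows/cols 4..9
img = [[0] * 10 for _ in range(10)]
for r in range(4, 10):
    for c in range(4, 10):
        img[r][c] = 255

score_corner = harris_response(img, 4, 4)  # square's corner
score_edge = harris_response(img, 4, 7)    # along the square's top edge
score_flat = harris_response(img, 7, 7)    # uniform interior
```

Thresholding this score, as Bell's abstract describes, keeps only corner-like features: the corner scores positive, the edge negative, and the flat interior zero.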
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Assa, Daga, and Dube's disclosure of weighting clusters with Bell's teaching of Harris corner algorithms, in order to determine if a trailer is present by comparing a feature score of a feature to a threshold (abstract).

Response to Arguments

Applicant's arguments filed 12/08/2025 have been fully considered, but they are not persuasive. Regarding the argument that Assa does not disclose clusters, the examiner most respectfully disagrees. Assa uses the word "classifiers" instead of "clusters." Assa discloses ([0014]): "To determine or detect the location of the trailer, the system may use one or more classifiers. Image data captured by the camera(s) may be split into one or more sections or patches and the control may process or evaluate the patches one at a time and determine if a trailer is present in each patch. The control may sweep or process each patch at multiple different scales (i.e., upscaled and downscaled image data). In some implementations, the system uses a two-step classifier. For example, the first stage of the classifier may include a linear Support Vector Machine (SVM) that filters out the majority (e.g., 99 percent) of negative patches (i.e., patches that do not include a trailer)."

Applicant's own specification states ([0029]): "A cluster module or algorithm 174 receives the numerous features with descriptors and labels, and clusters the descriptors. In one example, the cluster module 174 uses unsupervised learning and in particular a k-means algorithm. In another example, the cluster module 174 uses a SVM algorithm." It would have been obvious to one of ordinary skill in the art to consider a classifier which classifies a trailer coupler, as stated by prior art Assa, to be the same as a cluster of a coupler, as stated by the instant application.
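Applicant's [0029], quoted above, describes clustering descriptors with unsupervised k-means. A minimal pure-Python sketch of that idea follows; the 2-D descriptor vectors are hypothetical stand-ins (real image descriptors are much higher-dimensional), and the helper name `kmeans` is invented for the example.

```python
import random

def kmeans(descriptors, k, iters=50, seed=0):
    """Minimal k-means over descriptor vectors; returns (centroids, assignments)."""
    rng = random.Random(seed)
    centroids = rng.sample(descriptors, k)
    assign = [0] * len(descriptors)
    for _ in range(iters):
        # assign each descriptor to its nearest centroid (squared Euclidean distance)
        assign = [min(range(k),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(d, centroids[j])))
                  for d in descriptors]
        # recompute each centroid as the mean of its assigned descriptors
        for j in range(k):
            members = [d for d, a in zip(descriptors, assign) if a == j]
            if members:
                centroids[j] = tuple(sum(vals) / len(members)
                                     for vals in zip(*members))
    return centroids, assign

# hypothetical 2-D descriptors forming two well-separated groups
descs = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
         (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids, assign = kmeans(descs, k=2)
```

Each resulting centroid plays the role of one descriptor cluster in the database; at run time a new descriptor is matched to whichever centroid is nearest and inherits that cluster's label.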
Applicant's arguments, see page 2 of applicant's remarks filed 12/08/2025 (argument that the prior art does not disclose textures), with respect to the rejection of claims 1 and 11 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Costa (US 10744943 B1).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED A NASHER, whose telephone number is (571) 272-1885. The examiner can normally be reached Mon-Fri, 0800-1700.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/AHMED A NASHER/
Examiner, Art Unit 2675

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675

Prosecution Timeline

Sep 30, 2022
Application Filed
Mar 08, 2025
Non-Final Rejection — §103, §112
Jun 17, 2025
Response Filed
Aug 01, 2025
Final Rejection — §103, §112
Dec 08, 2025
Request for Continued Examination
Jan 05, 2026
Response after Non-Final Action
Jan 09, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601840
TUNING PARAMETER DETERMINATION METHOD FOR TRACKING AN OBJECT, A GROUP DENSITY-BASED CLUSTERING METHOD, AN OBJECT TRACKING METHOD, AND AN OBJECT TRACKING APPARATUS USING A LIDAR SENSOR
2y 5m to grant — Granted Apr 14, 2026
Patent 12586329
MODELING METHOD, DEVICE, AND SYSTEM FOR THREE-DIMENSIONAL HEAD MODEL, AND STORAGE MEDIUM
2y 5m to grant — Granted Mar 24, 2026
Patent 12582373
GENERATING SYNTHETIC ELECTRON DENSITY IMAGES FROM MAGNETIC RESONANCE IMAGES
2y 5m to grant — Granted Mar 24, 2026
Patent 12567255
FEW-SHOT VIDEO CLASSIFICATION
2y 5m to grant — Granted Mar 03, 2026
Patent 12561965
NEURAL NETWORK CACHING FOR VIDEO
2y 5m to grant — Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 99% (+34.4%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 99 resolved cases by this examiner. Grant probability derived from career allow rate.
