Prosecution Insights
Last updated: April 19, 2026
Application No. 17/906,813

METHOD AND SYSTEM OF AUGMENTING A VIDEO FOOTAGE OF A SURVEILLANCE SPACE WITH A TARGET THREE-DIMENSIONAL (3D) OBJECT FOR TRAINING AN ARTIFICIAL INTELLIGENCE (AI) MODEL

Non-Final OA §103
Filed
Sep 20, 2022
Examiner
OMETZ, RACHEL ANNE
Art Unit
2668
Tech Center
2600 — Communications
Assignee
Darvis Inc.
OA Round
5 (Non-Final)
Grant Probability
69% (Favorable)
OA Rounds
5-6
To Grant
2y 11m
With Interview
99%

Examiner Intelligence

Career Allow Rate
69% (18 granted / 26 resolved), above average: +7.2% vs TC avg
Interview Lift
+30.1% (resolved cases with interview vs. without)
Typical Timeline
2y 11m avg prosecution; 24 currently pending
Career History
50 total applications across all art units
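
The headline figures above are simple ratios over this examiner's resolved cases. Below is a hypothetical Python sketch of that arithmetic; the record format, field names, and sample data are assumptions, not taken from this report.

# Illustrative only: deriving allow rate and interview lift from resolved-case records.
resolved = [
    {"granted": True, "interview": True},    # ... 26 records in total
    {"granted": False, "interview": False},
    # ...
]

def allow_rate(cases):
    return sum(c["granted"] for c in cases) / len(cases)

career_rate = allow_rate(resolved)                        # reported as 18 / 26, about 69%
with_interview = [c for c in resolved if c["interview"]]
without_interview = [c for c in resolved if not c["interview"]]
interview_lift = allow_rate(with_interview) - allow_rate(without_interview)  # reported as +30.1%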

Statute-Specific Performance

§101: 3.1% (-36.9% vs TC avg)
§103: 62.1% (+22.1% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 26 resolved cases
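
One plausible reading of these per-statute figures, since the four values sum to roughly 100%, is the share of this examiner's rejections that rely on each statute. The short Python sketch below illustrates that kind of tally; the input list is invented and the interpretation is an assumption.

from collections import Counter

# Made-up sample of statutes cited across an examiner's office-action rejections.
rejections = ["103", "103", "102", "112", "103", "101", "103", "102"]

counts = Counter(rejections)
shares = {statute: 100.0 * n / len(rejections) for statute, n in counts.items()}
for statute, share in sorted(shares.items()):
    print(f"§{statute}: {share:.1f}%")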

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 3rd, 2026 has been entered.

Claim Status
Claims 15-21 and 23-28 were pending for examination in the amendment of Application No. 17/906,813 filed November 17th, 2025. In the remarks and amendments received on March 3rd, 2026, claims 15-16 and 24-25 are amended, no claims are cancelled, and no claims are added. Claims 1-14 and 22 were previously cancelled without any prejudice or disclaimer. Accordingly, claims 15-21 and 23-28 are currently pending for examination in the application.

Priority
Priority to provisional application 62/993,129, filed March 23rd, 2020, and PCT application PCT/IB2021/052393, filed March 23rd, 2021, is acknowledged.

Response to Arguments
Applicant's arguments filed November 17th, 2025 have been fully considered but they are not persuasive.

The examiner respectfully disagrees with applicant's assertion that White does not teach or disclose "wherein when the first relative position of the target 3D object on the ground plane is placed behind a first relative position of the distractor object, a mask of the distractor object is applied to obscure the target 3D object" (Applicant's Remarks, pp. 8-9). The positions of the cat, cylinder, and cube in Figures 13A and 13B are just one example of their positions and are not indicative of their only possible positions. As stated in White, "the environmental conditions of the simulated world can be varied. In various embodiments, variable light sources, color and amounts of illumination, shadows, presence of and occlusion by other objects can be applied" (para [0027]). Therefore, regardless of what is shown in Figures 13A and 13B, it is reasonable to expect that the distractor objects (cylinder and cube) can be placed in a variety of positions in order to make training data for identifying the cat, or target object, more difficult for the machine learning system.

Additionally, applicant's "mask of the distractor object" (Applicant's Remarks, p. 9) is shown in Applicant's Fig. 7B, where the mask of the distractor object 602 is placed in front of target 3D object 504. As shown in Fig. 7B, the mask is a simple representation of the distractor object 602. Therefore, the cube and the cylinder of White (not limiting; any other objects may be used) are also masks used to obscure the cat, or target object.

The examiner also disagrees with Applicant's assertion that White does not teach claim 23, "wherein calculating the coordinates of the bounding box comprises: enclosing the target 3D object in coordinates of a 3D cuboid; and calculating the coordinates of one or more camera facing corners of the coordinates of the 3D cuboid in the surveillance space" (Applicant's Remarks, pp. 11-12). White's bounding box algorithm (see Para [0101]) calculates positional extrema of a 3D bounding box.
It is inherent that this algorithm calculates "the coordinates of one or more camera facing corners of the coordinates of the 3D cuboid in the surveillance space," as the camera-facing corners are the corners that are actually viewed by the camera at a given time (if the camera or scene is not static). Therefore, White teaches claim 23.

Additionally, Applicant asserts that the corners are calculated in a surveillance space, and that White's "environmental model" is not a surveillance space. The examiner disagrees, as any objects viewed in White's environmental model are being surveilled using a video sequence. See the Abstract of White: "the system generates a three-dimensional model of an environment using a video sequence that includes individual frames taken from a variety of perspectives and environmental conditions". Therefore, White teaches all the above claimed limitations.

Applicant's arguments with respect to the rejection of claim 1 ("wherein the video footage comprises a 360-degree video footage acquired by moving a 360-degree camera across the surveillance space in a predetermined pattern to calculate camera position from image timestamp") have been fully considered but are moot because the arguments do not apply to the new combination of references, facilitated by Applicant's newly submitted amendments, being used in the current rejection.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 15, 17-18, 21, 23-24, and 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over White et al. (US-20200302241-A1) in view of Naikal et al. (US-20140333775-A1), and further in view of Ogura (US-20160073095-A1).

Regarding claim 15, White teaches a method of augmenting a video ("camera," Para [0023]) footage of a surveillance space with a target three-dimensional (3D) object ("object," Para [0023]) from one or more perspectives ("capture a variety of images of the object from different angles," Para [0023]) for training an artificial intelligence (AI) model ("the individual images of the source video can be used as training data for a machine learning system," Para [0023]), the method comprising: acquiring the video ("video") footage from a target camera in the surveillance space ("the system obtains video of an object within an environment," Para [0023]); determining a ground plane and one or more screen coordinates of one or more corners of the ground plane in the video footage (Fig. 2, 204, "a first 3-D model"); preparing a model of the target 3D object ("A 3-D mapping system is used to generate a 3-D model of the object," Para [0023]) configured for training the AI model ("the individual images of the source video can be used as training data for a machine learning system," Para [0023]); determining a first relative position of the target 3D object in the normalized ground plane by iteratively generating a random position and a random rotation for the target 3D object in the ground plane ("Once a 3-D model of the object is obtained, additional tagged training images can be created by rendering the object into existing images, and videos in a variety of orientations," Para [0023]) for positioning the target 3D object in front of or behind a distractor object from among the one or more objects in the ground plane ("occlusion by other objects can be applied," Para [0027]); rendering the model of the target 3D object ("3-D models," Para [0066]) on the ground plane and composing the rendered 3D object and the ground plane with the acquired video footage to generate a composited image ("training data can be generated using the labeled 3-D models by rendering the 3-D models in combination with existing images, backgrounds, and simulations," Para [0066]), wherein when the first relative position of the target 3D object (Fig. 13B, 1302, "object") on the ground plane (Fig. 13A, 1300, "environmental model") is placed behind a first relative position of the distractor object (Fig. 13B, objects next to object 1302, though Figs. 13A and 13B are non-limiting as to the positions of the objects; see Para [0027]), a mask of the distractor object is applied to obscure the target 3D object (Fig. 13D, object to the left of object 1302); and calculating coordinates of the bounding box (Fig. 13D, 1306, "bounding box") that frames the first relative position of the target 3D object (Fig. 13B, 1302, "object") in the composited image and saving the composited image along with the coordinates of the bounding box configured for training the AI model ("a cat object as labeled resulting from FIG. 13D can be used to train an image classification machine learning model to identify cats," Para [0112]), wherein the method further comprises merging at least one of: a plurality of static reflections or a plurality of time sequential reflections from an environment scene ("if the lighting changes drastically, new objects appear in the scene or the sensor starts having higher amounts of noise, the simulator can generate new hybrid data using the data being streamed from the sensors," Para [0080]), or one or more distractor objects ("randomly-generated objects"), and a plurality of simulated reflections ("colors") generated by a simulated surface material property of the target 3D object ("to color randomly-generated objects in the simulation so that the simulated data contains colors that are present in the real-world scene," Para [0079]).

White fails to teach the following limitations as further claimed.
However, Naikal teaches normalizing the one or more screen coordinates by calculating a homography matrix (Naikal, "homography H.sub.l.sup.r for a common ground plane between the reference camera r and any of the other cameras l that view the ground plane") from the ground plane (Naikal, "common ground plane"); determining a first relative position (Naikal, "positions") of each of one or more objects in the ground plane using the homography matrix (Naikal, "the multiple cameras maintain a view of the common ground plane and objects on the ground plane for the event processor 104 to identify the positions of objects in the views of the different cameras using the homographic transformation," Para [0058]); and determining a second relative position (Naikal, "event recognition of the kick event using the feature vectors that are received from both cameras 108A and 108B over a predetermined time period," Para [0046]) of each of the one or more objects in a normalized ground plane by multiplying the homography matrix (Naikal, "homography") to a center position of a lower edge of a bounding box associated with each of the one or more objects (Naikal, "the center of the line connecting the bottom corners of the bounding boxes that are formed around the objects in each scene act as a proxy for the 3D location of the objects in the scene," Para [0058]).

White and Naikal fail to teach the following limitations as further claimed. Ogura, however, further teaches: and wherein the video footage comprises a 360-degree video footage acquired by moving a 360-degree camera ("controller 30 controls the movement of camera 21 by robot arm 22 so that camera 21 moves in the circumferential direction centering object 24 on a plane perpendicular to a table plane of turntable 23. Thus, camera 21 captures images of object 24 at 360-degree peripheral positions of object 24 at the moved position," Para [0045]) across the surveillance space in a predetermined pattern ("Controller 30 repeatedly controls the rotation of turntable 23 and the movement of camera 21 by robot arm 22, so that camera 21 captures images of object 24 at spherical positions centering object 24 to generate a moving image stream," Para [0045]) to calculate camera position from image timestamp ("Image capture position calculation unit 12 identifies image capture positions of the camera based on the time stamps of the initial sampling frame," Para [0032]).

White and Naikal are both considered to be analogous to the claimed invention because they are both in the same field of surveillance systems with object detection. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Naikal into White for the benefit of realigning position coordinates from multiple cameras for accurate object position and motion determination. Ogura is considered to be analogous to the claimed invention because it is in the same field of capturing images of a scene using a 360-degree moving camera. Although Ogura's camera is not a 360-degree camera, the camera still captures a full 360-degree field of view, providing an enhanced field of view for image capture. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ogura into White and Naikal for the benefit of an enhanced field of view for image capture.
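
The claim 15 analysis above turns on projecting the center of a bounding box's lower edge into a common ground plane with a homography matrix. The following is a minimal illustrative sketch of that projection in Python; it is not code from the application or from Naikal, and the matrix and box values are invented.

import numpy as np

# Hypothetical image-to-ground homography; the numbers are placeholders.
H = np.array([[1.2, 0.1, -40.0],
              [0.0, 1.5, -25.0],
              [0.0, 0.002, 1.0]])

def ground_plane_position(bbox_xyxy, H):
    """Map the midpoint of a box's bottom edge through homography H."""
    x_min, y_min, x_max, y_max = bbox_xyxy
    foot = np.array([(x_min + x_max) / 2.0, y_max, 1.0])  # homogeneous pixel coordinates
    g = H @ foot
    return g[:2] / g[2]  # normalized (u, v) position on the ground plane

print(ground_plane_position((310, 120, 370, 260), H))
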
Regarding claim 17, the rejection of claim 15 is incorporated herein. White in the combination further teaches the method of claim 15, further comprising masking (Fig. 13C, "difference" 1304) the one or more objects (Fig. 13B, "object" 1302) standing on the ground plane (Fig. 13A, "environmental model" 1300) by finding the bounding box (Fig. 13D, "bounding box" 1306) around each of the one or more objects, prior to determining the first relative position of each one of the one or more objects in the ground plane ("When processing in three dimensions, the difference 1304 between two 3D point clouds can be processed with nearest neighbors of points captured in a video stream of an environment with the object," Para [0098]).

Regarding claim 18, the rejection of claim 15 is incorporated herein. White in the combination fails to teach the following limitations as further recited. However, Naikal teaches the method of claim 15, wherein the second relative position of each of the one or more objects (Naikal, "the event processor 104 uses the homography to identify the distance between objects in the views of different cameras," Para [0058]) is represented in the form of a two-dimensional (2D) coordinate (Naikal, "common geometric plane," Para [0058]) representing the second relative position of each of the objects on the normalized ground plane (Naikal, "the multiple cameras maintain a view of the common ground plane and objects on the ground plane for the event processor 104 to identify the positions of objects in the views of the different cameras using the homographic transformation," Para [0058]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Naikal into White for the benefit of accurate object position determination from any angle.

Regarding claim 21, White in the combination further teaches the method of claim 15, further comprising generating a bounding cube (Fig. 13D, "bounding box," 1306) configured for training the AI model ("a cat object as labeled resulting from FIG. 13D can be used to train an image classification machine learning model to identify cats," Para [0112]).

Regarding claim 23, White in the combination further teaches the method of claim 15, wherein calculating the coordinates of the bounding box comprises: enclosing the target 3D object ("cat object") in coordinates of a 3D cuboid ("A bounding algorithm can identify the minimum and maximum points along dimensional axes (e.g., (Xmin, Ymin, Zmin), (Xmin, Ymin, Zmax), (Xmin, Ymax, Zmin), (Xmin, Ymax, Zmax), (Xmax, Ymin, Zmin), (Xmax, Ymin, Zmax), (Xmax, Ymax, Zmin), (Xmax, Ymax, Zmax)) of the cat object," Para [0101]); and calculating the coordinates of one or more camera facing corners of the coordinates of the 3D cuboid in the surveillance space ("A bounding algorithm can identify the minimum and maximum points along dimensional axes… of the cat object and use those points as vertices of a rectangular volume surrounding the cat object," Para [0101]).

Claim 24 is a system claim that corresponds to method claim 15. Implementation of method claim 15 would necessarily encompass system claim 24. Therefore, the rejection of method claim 15 applies to system claim 24.
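
The claim 23 limitation discussed above reduces to taking the min/max extrema of an object's points as the vertices of an axis-aligned cuboid and then identifying the corners oriented toward the camera. Below is a hypothetical Python sketch of that idea, with invented data and a simplified camera-facing test; it is not White's algorithm or the application's implementation.

import itertools
import numpy as np

points = np.random.rand(500, 3) * [2.0, 1.0, 1.5]  # made-up object point cloud
camera = np.array([5.0, 4.0, 3.0])                  # made-up camera position

# Enclose the object in an axis-aligned cuboid built from per-axis extrema.
mins, maxs = points.min(axis=0), points.max(axis=0)
corners = np.array([[x, y, z] for x, y, z in itertools.product(*zip(mins, maxs))])

# Keep corners whose outward direction from the cuboid center points toward the camera.
center = (mins + maxs) / 2.0
camera_facing = corners[(corners - center) @ (camera - center) > 0]
print(camera_facing)
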
Furthermore, to address additional claim matter introduced in system claim 24, White in the combination further teaches a system for augmenting a video footage ("camera," Para [0023]) of a surveillance space with a target three-dimensional (3D) object ("object," Para [0023]) from one or more perspectives ("capture a variety of images of the object from different angles," Para [0023]) for training an artificial intelligence (AI) model ("the individual images of the source video can be used as training data for a machine learning system," Para [0023]), the system comprising (Figs. 11 and 12): a target camera ("video") disposed in the surveillance space and communicatively coupled to a server (Fig. 12, 1200, "computing device" configured as a "data server," Para [0088]), wherein the target camera ("video") is configured to capture the video footage of the surveillance space ("the system obtains video of an object within an environment," Para [0023]) and transmit the captured video footage to the server (Fig. 12, 1200); and the server (Fig. 12, 1200) communicatively coupled to the target camera ("a computer system obtaining a video stream of an environment that includes an object," Para [0081]) and comprising: a memory (Fig. 12, 1206, "storage subsystem") configured to store a set of modules ("The storage subsystem 1206 may be used for temporary or long-term storage of information," Para [0088]); and a processor (Fig. 12, 1202) configured to execute the set of modules ("The processors may execute computer-executable instructions," Para [0088]) for augmenting the video footage of the surveillance space with the target 3D object ("A 3-D mapping system is used to generate a 3-D model of the object," Para [0023]) from one or more perspectives for training the AI model ("the individual images of the source video can be used as training for a machine learning system," Para [0023]).

Claims 26, 27, and 28 are system claims that correspond to method claims 17, 18, and 23, respectively. Implementation of method claims 17, 18, and 23 would necessarily encompass system claims 26, 27, and 28. Therefore, the rejections of method claims 17, 18, and 23 apply fully to system claims 26, 27, and 28.

Claims 16 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over White et al. (US-20200302241-A1), Naikal et al. (US-20140333775-A1), and Ogura (US-20160073095-A1) as applied to claims 15 and 24, and further in view of Rankawat et al. (US-20190286153-A1).

Regarding claim 16, White in view of Naikal teaches the method of claim 15. White in view of the combination fails to teach the following limitations as further claimed.
Rankawat teaches, further comprising determining one or more edges (boundary points) of the ground plane and calculating a 3D rotation (Rankawat, "3D rotation"), scale, translation (Rankawat, "translation") relative to the camera position (Rankawat, "extrinsic camera parameters (e.g., 3D rotation, R, translation, t, etc.), and/or a height of the camera with respect to a ground plane, a 3D distance from the boundary (e.g., the boundary delineated by the boundary points 106) to the camera center may be computed," Para [0061]) and a lens characteristic using an aspect ratio of the ground plane by computer vision algorithms (Rankawat, "Using intrinsic camera parameters (e.g., focal length, f, optical center (u.sub.0, v.sub.0), pixel aspect ratio… with respect to a ground plane, a 3D distance from the boundary (e.g., the boundary delineated by the boundary points 106) to the camera center may be computed," Para [0061]), prior to normalizing the one or more screen coordinates (Rankawat, "after post-processing in some examples—may be converted from 2D point or pixel locations of the sensor data (e.g., of an image) to 3D or 2D real-world coordinates," Para [0061]).

Rankawat is considered to be analogous to the claimed invention because it is in the same field of camera systems with area detection. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Rankawat into White, Naikal, and Ogura for the benefit of reduced pinhole camera distortion.

Claim 25 is a system claim that corresponds to method claim 16. Implementation of method claim 16 would necessarily encompass system claim 25. Therefore, the rejection of method claim 16 applies fully to system claim 25.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over White et al. (US-20200302241-A1), Naikal et al. (US-20140333775-A1), and Ogura (US-20160073095-A1) as applied to claim 15, and further in view of McCormac et al. (US-20190147220-A1) and Tremblay et al. (US-20190251397-A1).

Regarding claim 19, White in view of Naikal teaches the method of claim 15. White in view of the combination fails to teach the following limitations as further claimed. However, McCormac teaches the method of claim 15, and Tremblay teaches that, prior to rendering the model of the target 3D object (Tremblay, "3D synthetic object"), the target 3D object is illuminated based on global illumination (Fig. 1B; Para [0045], "lighting") by: determining a random image from the video footage (Tremblay, "domain randomization intentionally abandons photorealism by randomly perturbing the environment in non-photorealistic ways (e.g., by adding random textures)," Para [0020]) as texture (Tremblay, "textures") on a large sphere (Tremblay, "3D Object of Interest," Fig. 1B) based on the randomized position of the target 3D object relative to the ground plane by matching the first relative position of the target 3D object and a pre-determined position of recording the video footage (Tremblay, Fig. 1B); and placing the random image from the video footage (Tremblay, "random textures") on the large sphere (Tremblay, "3D Object of Interest," Fig. 1B) to provide a realistic lighting to the target 3D object (Tremblay, "Random Images of the Object of Interest," Fig. 1B).
White, Naikal, McCormac, and Tremblay are all considered to be analogous to the claimed invention because they are all in the same field of surveillance systems with object detection. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of McCormac and Tremblay into White, Naikal, and Ogura for the benefit of more advanced training data for an AI.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over White et al. (US-20200302241-A1), Naikal et al. (US-20140333775-A1), and Ogura (US-20160073095-A1) as applied to claim 15, and further in view of Gottumukkal et al. (US-20190138818-A1).

Regarding claim 20, White in view of the combination teaches the method of claim 15. White in view of the combination fails to teach the following limitations as further recited. However, Gottumukkal teaches the method of claim 15, wherein the ground plane (Gottumukkal, "ground planes") is determined by applying at least one of: a computer vision algorithm (Gottumukkal, "calibration process uses additional hardware connected to the surveillance cameras to manually define the ground planes," Para [0007]) or manual marking by a human. White, Naikal, and Gottumukkal are considered to be analogous to the claimed invention because they are in the same field of surveillance systems with object detection. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Gottumukkal into White, Naikal, and Ogura for the benefit of automatic ground plane detection.

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL A OMETZ whose telephone number is (571) 272-2535. The examiner can normally be reached 6:45am-4:00pm ET Monday-Thursday, and 6:45am-1:00pm ET every other Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Rachel Anne Ometz/
Examiner, Art Unit 2668
3/17/26

/VU LE/
Supervisory Patent Examiner, Art Unit 2668

Prosecution Timeline

Sep 20, 2022
Application Filed
Feb 04, 2025
Non-Final Rejection — §103
May 13, 2025
Response Filed
May 22, 2025
Final Rejection — §103
Jul 24, 2025
Request for Continued Examination
Jul 25, 2025
Response after Non-Final Action
Aug 26, 2025
Non-Final Rejection — §103
Nov 17, 2025
Response Filed
Nov 28, 2025
Final Rejection — §103
Mar 03, 2026
Request for Continued Examination
Mar 09, 2026
Response after Non-Final Action
Mar 17, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602925
HYPERSPECTRAL IMAGE ANALYSIS USING MACHINE LEARNING
2y 5m to grant • Granted Apr 14, 2026
Patent 12555255
ABSOLUTE DEPTH ESTIMATION FROM A SINGLE IMAGE USING ONLINE DEPTH SCALE TRANSFER
2y 5m to grant • Granted Feb 17, 2026
Patent 12548354
METHOD FOR PROCESSING CELL IMAGE, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Feb 10, 2026
Patent 12541970
SYSTEM AND METHOD FOR ESTIMATING THE POSE OF A LOCALIZING APPARATUS USING REFLECTIVE LANDMARKS AND OTHER FEATURES
2y 5m to grant • Granted Feb 03, 2026
Patent 12530735
IMAGE PROCESSING APPARATUS THAT IMPROVES COMPRESSION EFFICIENCY OF IMAGE DATA, METHOD OF CONTROLLING SAME, AND STORAGE MEDIUM
2y 5m to grant • Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds
5-6
Grant Probability
69%
With Interview (+30.1%)
99%
Median Time to Grant
2y 11m
PTA Risk
High
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
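
The with-interview projection appears to combine the two figures above; the short check below shows that assumed arithmetic (it is not a documented formula).

# Assumption: the 99% figure is the career allow rate plus the interview lift, rounded.
career_allow_rate = 0.69
interview_lift = 0.301
print(round(career_allow_rate + interview_lift, 2))  # 0.99, matching the 99% shown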
