Prosecution Insights
Last updated: April 19, 2026
Application No. 17/218,915

MACHINE LEARNING SYSTEMS FOR MANAGING INVENTORY

Final Rejection §103
Filed: Mar 31, 2021
Examiner: Scott Ross
Art Unit: 3623
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Oracle International Corporation
OA Round: 4 (Final)
Grant Probability: 4% (At Risk)
Expected OA Rounds: 5-6
Median Time to Grant: 1y 1m
With Interview: 5%

Examiner Intelligence

Grants only 4% of cases.

Career Allow Rate: 4% (5 granted / 142 resolved; -48.5% vs TC avg)
Interview Lift: +1.5% (minimal; based on resolved cases with interview)
Avg Prosecution: 1y 1m (fast prosecutor)
Career History: 348 total applications across all art units; 206 currently pending
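As a sanity check, the headline rates above follow directly from the raw counts shown on the cards. A minimal sketch (the +1.5% interview lift is taken from the card above; the rounding convention is an assumption):

```python
# Reproduce the examiner stat-card figures from the raw counts shown above.
granted = 5             # career grants
resolved = 142          # career resolved cases
interview_lift = 0.015  # absolute lift reported for cases with an interview

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")                   # ~3.5%, displayed as 4%
print(f"With interview:    {allow_rate + interview_lift:.1%}")  # ~5.0%
```

Note that 5/142 is closer to 3.5% than to the displayed 4%, so the dashboard appears to round to the nearest whole percent.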

Statute-Specific Performance

§101: 36.1% (-3.9% vs TC avg)
§103: 34.6% (-5.4% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)

Based on career data from 142 resolved cases; deltas are relative to the Tech Center average estimate.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following is a Final Office Action. Claims 1-4, 8-14, 18-22, and 25-28 are rejected.

Response to Amendment

Applicant's amendments are acknowledged.

Response to Arguments

With respect to the 35 USC 101 rejection, the controlling steps, the final two limitations (the receiving and retraining steps), and Applicant's arguments on pages 9-12 reflect the reasons why the claims overcome the 101 rejection. The claim as a whole integrates the mental processes and certain methods of organizing human activities into a practical application. Thus, the claims are eligible because they are not directed to the recited judicial exception. Accordingly, the 35 U.S.C. 101 rejections are withdrawn in light of the 2019 Revised Patent Subject Matter Eligibility Guidance ("2019 PEG").

Applicant's arguments with respect to the 103 rejections have been fully considered but are not persuasive.

Applicant argues: "1. Gross does not disclose 'retraining the ML model based ... on the task performance data received from the first robot.' While the Office Action cites paragraph 40 of Gross as rendering these elements of claim 1 obvious, Gross does not disclose 'retraining the ML model based ... on the task performance data received from the first robot.' Paragraph 40 of Gross states that a model may 'increase a parameter (e.g., a task score)' based on 'regular trips to the same grocery store' or 'decrease a parameter if travel to a particular destination has not occurred after a period of time.' These statements in Gross are related to adjusting a task parameter, which does not disclose re-training an ML model. The task parameter adjustments disclosed by Gross alter a weight for a specific task. As noted below, Gross does not disclose these adjustments occurring after the initial training.
However, even assuming - solely for the sake of argument - that the adjustments occur after the initial training, these adjustments would merely influence how the model prioritizes specific tasks, which is not a learning task that affects the model's overall decision-making process."

Examiner's response: Gross's training of a machine learning model is interpreted as the modeling of a user's common routes and activities based on historical data (0055). The "retraining" of the model is interpreted as the model self-adjusting its parameters/weights/scores for specific destinations based on the most recent user activity. Gross's model increases/decreases parameters/weights/scores (i.e., retrains) for specific destinations when the user deviates from common routes and activities during a recent time period (i.e., the "task performance data") (0040).

Applicant argues: "2. Gross does not disclose the parameter adjustments occurring after the initial training." Gross describes "generat[ing] and execut[ing] a task model to determine potential tasks to accomplish," based in part on a user's "history of activities performed." Gross, [0037]. Paragraph 40 of Gross states that a model may "increase a parameter (e.g., a task score)" based on "regular trips to the same grocery store" or "decrease a parameter if travel to a particular destination has not occurred after a period of time." However, Gross does not describe the timing of these adjustments. More particularly, Gross does not describe these adjustments occurring after the initial training. In fact, Gross's descriptions of adjusting task parameters could reasonably be construed as relating to adjusting a task parameter during and/or in preparation for the initial training, which does not disclose re-training an ML model. In this reading of Gross, Gross describes determining the weights used to train the task model.
While Gross describes that "a weighting and/or task scoring system may be used as part of the task model," Gross does not describe retraining the model based on new weights, i.e., based on subsequently adjusted parameter values after the model has already been trained. In contrast, "retraining the ML model based ... on task performance data" alters the ML model's reasoning after the model has already been trained. Retraining an ML model enables the model to infer patterns present in the retraining data, which is reflected throughout the ML model's reasoning, not just for a specific task. In fact, Gross is completely silent with respect to retraining an ML model. Gross describes "train[ing] a machine learning model using the user's travel behavior over a period of time ...." Gross, [0055]. Paragraphs [0084]-[0089] describe the training process. Beyond the initial training, Gross does not appear to include any language that can reasonably be construed as re-training an ML model.

Examiner's response: Gross's training of a machine learning model is interpreted as the modeling of a user's common routes and activities based on historical data (0055). The "retraining" of the model is interpreted as the model self-adjusting its parameters/weights/scores for specific destinations based on the most recent user activity. Gross's model increases/decreases parameters/weights/scores (i.e., retraining itself) for specific destinations when the user deviates from common routes and activities during a recent time period (i.e., the "task performance data") (0040).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 8-14, 18-20, 22, and 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over Gross (US 2023/0419262) in view of Williams (US 11466997).

Regarding Claim 1, Gross discloses:

One or more non-transitory computer-readable media storing instructions, which when executed by one or more hardware processors, cause performance of operations comprising: (0119-0124: computer-readable medium providing machine instructions to a processor)

training a machine learning (ML) model to select routes for "a robot" to perform target sets of tasks, at least by (0037, 0054-0057, 0065: training the ML model using the user's history of tasks; the task model may be based upon at least user history, user preferences, and user-identified tasks; based on 0063, the user device is an on-board vehicle computing system (robot))
obtaining training data sets, each training data set comprising (0115-0117: inputting sample data sets) characteristics of a set of previous tasks performed by one or more task performers, the set of characteristics comprising one or more of:

a location associated with a particular previous task of the set of previous tasks (0054-0055: the user's common routes and activities);

a time duration for performing the particular previous task (0037: time period to accomplish task);

a time at which the particular previous task was performed (0040: travel to merchant at particular day and time);

a route taken to perform the particular previous task (0054-0055: common routes and activities);

a sequence in which the tasks of the set of previous tasks were performed (0055: common routes and activities);

an attribute of the task performer that performed the particular previous task (0054-0055: common routes, times common routes are travelled);

training the ML model based on the training data sets (0116-0117: training the ML model by inputting sample data sets; 0055: the user's common routes and activities are modeled based on historical data);

receiving a first target set of tasks to be performed, wherein the first target set of tasks comprises respective locations of the first target set of tasks (0068-0069, Figure 1 (114, 116, 122): travelling to work, travelling to the grocery store to pick up a cake, travelling to attend a potluck dinner);

receiving one or more task performer attributes comprising respective locations of one robot (0054-0055: common routes, times common routes are travelled);

applying the trained ML model to the first target set of tasks and the one or more task performer attributes to generate a plurality of ML-generated routes for the robot to perform the first target set of tasks, based at least in part on the robot's proximity to respective tasks in the first target set of tasks (0070-0072, Figure 1 (106): the optimal route includes all the target tasks (work, grocery store to pick up cake, potluck dinner); waypoint 120 and 118 locations are part of the route; 0033: an optimal travel plan may be based upon a comparative analysis of routes, each route having paths between waypoints, each waypoint associated with at least one activity (e.g., task); 0034: in one exemplary embodiment, an adaptive mapping (AM) computing device may be configured to determine an optimal travel plan; in particular, the AM computing device may be configured to receive task information, retrieve historical data for predictive modeling and analysis, generate task predictions based upon user preferences and historical user activity data, retrieve geographic mapping information including, for example, merchant location information, retrieve event data, identify and calculate efficient routes (plurality of routes), and present a determined optimal travel plan with waypoints overlaid onto mapping data);

controlling the robot by: transmitting, to a first robot, instructions to perform a first target task in the first target set of tasks using a first ML-generated route in the plurality of ML-generated routes (Abstract: transmit, to the user, an optimized travel plan based upon the optimal route);

after transmitting the instructions to the first robot, receiving an additional target task; modifying the first ML-generated route so that the first ML-generated route comprises the additional target task (0072: AM computing device 102 may determine that a merchant (e.g., grocery store) located at waypoint 118 may offer similar services (e.g., cakes) needed to accomplish the task identified by user 104 at waypoint 116; 0073: AM computing device 102 may request information related to the task, e.g., information related to available products (cakes); in the exemplary embodiment, AM computing device 102 identifies alternatives to optimal travel plan 106 and re-routes user 104 accordingly.
In some embodiments, AM computing device 102 offers and/or otherwise proposes alternatives to user 104 for approval prior to re-calculating and/or re-routing);

controlling the first robot at least by transmitting, to the first robot, instructions to perform the first target task and the additional target set of tasks using the first ML-generated route (0072-0073: under BRI, the instructions are the original ML route and the modified ML route, which both used the original ML route; based on 0063, the user computing device may be an "on-board vehicle computing system": "A user (e.g., driver, pedestrian, mass transportation rider, etc.) may use the user computing device (e.g., ... on-board vehicle computing system ...) to communicate with the AM computing device. The AM computing device may periodically and/or continuously transmit, to the user computing device, updated travel planning information ...");

receiving, from the first robot, task performance data that describes performance of the first robot while performing tasks using the first ML-generated route; retraining the ML model based at least in part on the task performance data received from the first robot. (The "retraining" of the model is interpreted as the model self-adjusting its parameters/weights/scores for the specific destinations based on the most recent activity. Gross's model increases/decreases parameters/weights/scores (i.e., retrains) for specific destinations when the user deviates from common routes and activities during a recent time period (i.e., the "task performance data"). 0040: "... the model may identify regular trips to the same grocery store. If the user has not made such a trip recently, the model may increase a parameter (e.g., a task score) causing the travel plan to include a trip to the grocery store. In another example, the model may decrease a parameter if travel to a particular destination has not occurred after a period of time.
For example, if a user has not travelled to a hobby store for a period of time, the likelihood that the user desires travel to the hobby store may be reduced and the task may receive a low weight or task score and/or may not be offered. As such, a weighting and/or task scoring system may be used as part of the task model.")

Gross does not explicitly state, but Williams, in analogous art, discloses: selected routes are for "robots" to perform target sets of tasks; receiving locations of a plurality of robots; the generated ML-generated routes are for the plurality of robots to collectively perform the target set of tasks, based at least in part on the plurality of robots' respective proximities to respective tasks in the set of tasks; transmitting, to a first robot in the plurality of robots, instructions to perform a first target task in the first target set of tasks using a first ML-generated route in the plurality of ML-generated routes; transmitting, to a second robot in the plurality of robots, instructions to perform a second target task in the first target set of tasks using a second ML-generated route in the plurality of ML-generated routes; after transmitting the instructions to the first robot and the second robot, receiving an additional target task; determining that the first robot is more geographically proximate than the second robot to a location of the additional target task.

Williams discloses transmitting optimal routes to multiple vehicles in a fleet based on the "closest vehicles" to tasks; the tasks can be updated (i.e., changing a pick-up location); and the vehicles include computing devices (robots) which control operation of the vehicles. (15(60)-16(15): The VRA computing device may apply artificial intelligence, machine learning, and/or deep learning to any data described herein to develop routing strategies that maximize revenue on a per-vehicle and/or per-vehicle-fleet basis, and/or that optimize use of a vehicle and/or a vehicle fleet.
In some embodiments, the VRA computing device may develop routing strategies that take into account a most optimal class or capacity of vehicle available for a task, closest vehicle, best availability (e.g., local, regional, domestic, any, etc.), and/or greatest revenue opportunity. Additionally or alternatively, the VRA computing device may generate optimal routes based upon the availability of a plurality of vehicles. 14(1-44): "Task definitions," as used herein, may refer to a record of characteristics of an available or a completed task. The vehicle routing system described herein may use the task definition of each task when generating the optimal route for a single vehicle or a plurality of vehicles. Each task definition includes a plurality of data elements such as one or more of: pick-up location; delivery location; task value (e.g., a dollar amount granted upon completion of the task); cargo type (e.g., person and/or object); dimensions, size, and/or weight of cargo; number of cargo (e.g., two packages or three persons); cargo status (e.g., high-value, high-risk, social good). Task definitions may additionally or alternatively include data that is continually or periodically updated—that is, task definitions may not be completely static but may change over time. For example, task definitions may further include data elements such as a current cargo location (under BRI, an updated location is an additional task for a vehicle) and/or current task value (e.g., a bonus offered or a payment reduced based upon poor service, etc.). 27(7-12): In-vehicle computing device 102 (robot) may control operation of automation systems 104 for operation of vehicle 100, for example, based upon an optimal route from VRA computing device 130, one or more control signals from VRA computing device 130, and/or additional or alternative instructions.
57(50-54): The vehicle routing system may additionally or alternatively utilize AI, model training, and/or adaptive learning to create a pickup/delivery routing strategy based upon the following factors: time; vehicle model and type; current vehicle location.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to integrate Williams' multiple-vehicle (i.e., fleet) allocations into Gross's task allocations of one vehicle, helping customize allocations of tasks to multiple vehicles based on location and specific task requirements. (15(60)-16(15))

Regarding Claim 2, Gross discloses: The media of Claim 1, wherein training the ML model comprises determining that none of the set of previous tasks included a route through a particular location at a particular time of day, and wherein the first ML-generated route selected by the ML model for performing the first target task avoids the particular location at the particular time of day. (0040: if a user has not travelled to a hobby store for a period of time, the likelihood that the user desires travel to the hobby store may be reduced and the task may receive a low weight or task score and/or may not be offered (avoiding the task))

Regarding Claim 3, Gross discloses: The media of Claim 1, wherein: training the ML model comprises determining that the particular previous task of the set of previous tasks was performed first in the sequence in which the set of previous tasks were performed (0055: a first task within the common route is learned by the model); the training further comprises assigning a high priority to completing the particular task (under BRI, "high priority" indicates the common route data will be used by the model; 0055, 0057: the common route (including the first task) is part of the "contextual data" that trains the model); and wherein the high priority is assigned to a task in the first target set of tasks similar to the particular previous task of the set of previous tasks (0057, 0065: the optimal route (i.e., most efficient route) would include the tasks/segments in the common route (assigned high priority)).

Regarding Claim 4, Gross discloses: The media of Claim 3, the operations further comprising inferring a set of priorities for the first target set of tasks, each priority of the set of priorities corresponding to a task in the first target set of tasks. (0055: in each segment of the common route, the preferred modes of travel, average speed, etc. (set of priorities); 0065 (bottom): these factors are used in the execution of the model)

Regarding Claim 8, Gross discloses: The media of Claim 1, wherein the modifying operation comprises generating a revised sequence of target tasks that include the additional target task.
(0072-0073: In the exemplary embodiment, AM computing device 102 may determine a more efficient route includes travel from waypoint 120 to waypoint 118 (additional target task) as opposed to travel from waypoint 120 to waypoint 116.)

Regarding Claim 9, Gross discloses: The media of Claim 1, wherein the modifying operation comprises generating a revised sequence of target tasks that excludes completed target tasks of the first set of target tasks. (0071: In the exemplary embodiment, AM computing device 102 may be configured to dynamically adapt the determined optimized travel plan to accommodate changing conditions. For example, user 104 may independently arrange travel to waypoints 108 and 112. AM computing device 102 may automatically re-calculate and/or re-route user 104 based upon the altered starting location and instruct user 104 to proceed with optimal travel plan 106 starting from waypoint 112 (excluding 108).)

Regarding Claim 10, Gross discloses: The media of Claim 1, wherein the modifying operation comprises identifying one or both of: one or more target tasks of the first set of target tasks to be delayed in response to including the additional target task; or one or more target tasks of the first set of target tasks that is required to be completed according to the previously selected first ML-generated route despite including the additional target task. (Figure 1 (122) still needs to be completed)

Claims 11-14 and 18-20 stand rejected based on the same citations and rationale as applied to Claims 1-4 and 8-10, respectively.
Regarding Claim 22, Gross discloses: The media of Claim 1, wherein the one or more task performer attributes further comprises one or more of a remaining power level of the first robot, a number of tasks already performed by the first robot, or a distance already traveled by the first robot. (0055: ... the AM computing device may train a machine learning model using the user's travel behavior over a period of time (e.g., one week, two weeks, one month, etc.) such that the model learns the user's common routes and activities (tasks), when the user travels along the common routes (distance travelled), preferred modes of travel, how the user travels along the common route (e.g., average speed overall, average speed along certain portions of the route, number of stops, etc.), and how the contextual data (e.g., traffic, traffic lights, weather) affects the user's travel behaviors)

Claim 26 stands rejected based on the same citations and rationale as applied to Claim 22.

Regarding Claim 27, Gross discloses: The media of Claim 1, the operations further comprising: analyzing the additional target task to determine if the additional target task can be added to an existing ML-generated route in the plurality of ML-generated routes. (0072: In the exemplary embodiment, AM computing device 102 may determine a more efficient route includes travel from waypoint 120 to waypoint 118 as opposed to travel from waypoint 120 to waypoint 116 (under BRI, including the stop on the route; determining it could be added to the route). For example, AM computing device 102 may determine that a merchant (e.g., grocery store) located at waypoint 118 may offer similar services (e.g., cakes) needed to accomplish the task identified by user 104 at waypoint 116. 0073: AM computing device 102 may request information related to the task. Continuing the above example, AM computing device 102 may request information related to available products (e.g., cakes). In the exemplary embodiment, AM computing device 102 identifies alternatives to optimal travel plan 106 and re-routes user 104 accordingly. In some embodiments, AM computing device 102 offers and/or otherwise proposes alternatives to user 104 for approval prior to re-calculating and/or re-routing.)

Claim 28 stands rejected based on the same citations and rationale as applied to Claim 27.

Claims 21 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Gross (US 2023/0419262) in view of Williams (US 11466997), further in view of Selvam (US 2019/0011931).

Regarding Claim 21, Gross discloses: The media of Claim 1, wherein the attribute of the task performer that performed the particular previous task comprises: "common routes and activities, when the user travels along the common routes, preferred modes of travel, how the user travels along the common route (e.g., average speed overall, average speed along certain portions of the route, number of stops, etc.) and how the contextual data (e.g., traffic, traffic lights, weather) affects the user's travel behaviors" (0055).

Gross does not explicitly state, but Selvam, in analogous art, discloses: wherein the attribute of the task performer that performed the particular previous task comprises a maintenance schedule of one or more robots that performed the particular previous task. (0039: In an embodiment, autonomous fleet simulator 216 may model various maintenance, charging, cleaning, and other fixed schedules that may apply to an autonomous fleet. For example, a maintenance schedule may mandate that engine oil of an autonomous vehicle is changed every 3,000 miles traveled. The autonomous fleet simulator 216 may model a maintenance schedule that changes engine oil every 4,000 miles in order to determine effects on variables such as supply, revenue, miles traveled between maintenance incidents, etc.
As another example, half of an autonomous vehicle fleet may have their maintenance schedules increased and the other half decreased, with a resulting effect on demand being simulated based on prior data.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Gross in view of Williams' attribute of the task performer to include Selvam's maintenance schedule of one or more robots (autonomous vehicles), helping determine optimized values that may be applied to the real-world autonomous vehicle fleet. (Abstract)

Claim 25 stands rejected based on the same citations and rationale as applied to Claim 21.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Gregg, US Patent 11289200: adjusting a model's weights based on user interaction/feedback further describes re-training. Claim 11: retraining the machine learning model by adjusting weighting of parameters of the machine learning model using the interaction data and the historical interaction data to define the updated machine learning model. 31(15-21): At 1110, the process 1100 updates generic model(s) 1014 based on the feedback data. The authorized user management system 902 updates the generic model(s) 1014 as described herein. In some examples, updating the generic models 1014 includes adjusting the weighting of parameters in the model 1014. 37(1-4): At 1708, the process 1700 updates at least one weighted value based at least in part on the trends. Updating at least one weighted value changes how the predictive model makes predictions.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT ROSS whose telephone number is (571) 270-1555. The examiner can normally be reached Monday-Friday, 8:00 AM - 5:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rutao Wu, can be reached at (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Scott Ross/
Examiner, Art Unit 3623

/RUTAO WU/
Supervisory Patent Examiner, Art Unit 3623
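The pivotal §103 dispute above turns on whether Gross's per-destination score adjustments (¶0040) count as "retraining." A minimal sketch of the two competing readings, using an entirely hypothetical toy model, may make the distinction concrete:

```python
# Toy illustration (hypothetical) of the two readings of "retraining" disputed above.

# Applicant's reading: Gross merely adjusts one task's weight; the learned
# decision function itself is untouched.
task_scores = {"grocery": 0.8, "hobby_store": 0.6}
task_scores["hobby_store"] -= 0.2  # Gross-style parameter decrease for one task

# Examiner's reading (under BRI): the model self-adjusting its parameters based
# on recent activity *is* retraining. The claimed retraining, by contrast,
# refits the model on data augmented with new task performance, changing its
# behavior globally.
def train(history):
    """Fit a toy model: mean observed score per task."""
    scores = {}
    for task, score in history:
        scores.setdefault(task, []).append(score)
    return {t: sum(v) / len(v) for t, v in scores.items()}

history = [("grocery", 1.0), ("grocery", 0.6), ("hobby_store", 0.6)]
model = train(history)            # initial training
history.append(("grocery", 0.2))  # task performance data received later
model = train(history)            # retraining on the augmented data set
```

The sketch takes no position on the legal question; it only separates the two operations the parties are arguing over: mutating a stored weight versus refitting on new observations.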

Prosecution Timeline

Mar 31, 2021: Application Filed
Aug 29, 2024: Non-Final Rejection — §103
Nov 21, 2024: Response Filed
Mar 14, 2025: Final Rejection — §103
May 20, 2025: Response after Non-Final Action
Jul 10, 2025: Request for Continued Examination
Jul 11, 2025: Response after Non-Final Action
Aug 16, 2025: Non-Final Rejection — §103
Dec 02, 2025: Response Filed
Jan 28, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 8813663: SEEDING MACHINE WITH SEED DELIVERY SYSTEM (Granted Aug 26, 2014; 2y 5m to grant)
Patent (number unavailable): Interconnection module of the ornamental electrical molding (Granted)
Patent (number unavailable): SYSTEMS AND METHODS FOR ENTITY SPECIFIC, DATA CAPTURE AND EXCHANGE OVER A NETWORK (Granted)
Patent (number unavailable): Systems and Methods for Performing Workflow (Granted)
Patent (number unavailable): DISTRIBUTED LEDGER PROTOCOL TO INCENTIVIZE TRANSACTIONAL AND NON-TRANSACTIONAL COMMERCE (Granted)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 4%
With Interview: 5% (+1.5%)
Median Time to Grant: 1y 1m
PTA Risk: High

Based on 142 resolved cases by this examiner. Grant probability derived from career allow rate.
