Prosecution Insights
Last updated: April 19, 2026
Application No. 18/358,211

METHODS AND APPARATUS FOR MULTILEVEL BALANCING OF COMPUTATIONAL TASKS

Non-Final OA — §101, §103
Filed
Jul 25, 2023
Examiner
AYERS, MICHAEL W
Art Unit
2195
Tech Center
2100 — Computer Architecture & Software
Assignee
The Boeing Company
OA Round
1 (Non-Final)
70%
Grant Probability
Favorable
1-2
OA Rounds
3y 4m
To Grant
99%
With Interview

Examiner Intelligence

Grants 70% — above average
70%
Career Allow Rate
200 granted / 287 resolved
+14.7% vs TC avg
Strong +56% interview lift
+56.2%
Interview Lift
across resolved cases with interview
Typical timeline
3y 4m
Avg Prosecution
37 currently pending
Career history
324
Total Applications
across all art units

Statute-Specific Performance

§101
14.8%
-25.2% vs TC avg
§103
47.3%
+7.3% vs TC avg
§102
2.9%
-37.1% vs TC avg
§112
25.6%
-14.4% vs TC avg
Black line = Tech Center average estimate • Based on career data from 287 resolved cases

Office Action

§101 §103
DETAILED ACTION

This Office action is in response to claims filed 25 July 2023. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 9, and 17 are objected to because of the following informalities (line numbers correspond to claim 1): “distribute queued sets to the compute nodes based on the monitoring of the completion of the sets” should read “distribute queued sets to the compute nodes based on the monitoring of the completion of the respective ones of the sets”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.

Regarding claim 1, in step 1 of the 101 analysis set forth in MPEP 2106, the claim recites a system that distributes tasks and sets of tasks to resources of nodes based on monitoring completion. A system is one of the four statutory categories of invention. In step 2A, prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components:

i. “allocate the batch of the computational tasks into sets” (a person can mentally allocate batches of tasks into sets by simply evaluating a batch, and making a judgment of different sets of tasks from that batch (MPEP 2106.04(a)))

ii.
“distribute the sets to compute nodes” (a person can mentally distribute sets of tasks to nodes by simply evaluating the sets, and making a judgment of an assignment plan of the sets to nodes (MPEP 2106.04(a)))

iii. “monitor the compute nodes for completion of the computational tasks” (a person can monitor nodes for completion by simply observing a node for indications of completion (MPEP 2106.04(a)))

iv. “distribute ones of the computational tasks to computational resources of the respective compute nodes based on the monitoring of the completion of the computational tasks” (a person can mentally distribute tasks to resources by simply evaluating the tasks, and making a judgment of an assignment plan of the tasks to resources (MPEP 2106.04(a)))

v. “monitor the compute nodes for completion of respective ones of the sets” (a person can monitor nodes for completion by simply observing a node for indications of completion (MPEP 2106.04(a)))

vi. “distribute queued sets to the compute nodes based on the monitoring of the completion of the sets” (a person can mentally distribute sets of tasks to nodes by simply evaluating the sets, and making a judgment of an assignment plan of the sets to nodes (MPEP 2106.04(a)))

vii. “distribute queued tasks to the computational resources based on the monitoring of the completion of the tasks” (a person can mentally distribute tasks to resources by simply evaluating the tasks, and making a judgment of an assignment plan of the tasks to resources (MPEP 2106.04(a)))

If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In step 2A, prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:

viii. “An apparatus for multilevel distribution of computational tasks, the apparatus comprising: interface circuitry…machine readable instructions; and programmable circuitry to at least one of instantiate or execute the machine readable instructions” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)))

ix. “receive or access a batch of the computational tasks” (insignificant extra-solution activity of mere data gathering (MPEP 2106.05(g)))

Since the claim does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.

In step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined, through reanalysis of the following limitations considered in step 2A prong 2, that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception:

viii. “An apparatus for multilevel distribution of computational tasks, the apparatus comprising: interface circuitry…machine readable instructions; and programmable circuitry to at least one of instantiate or execute the machine readable instructions” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)))

ix. “receive or access a batch of the computational tasks” (well-understood, routine, and conventional activity of receiving data over a network (MPEP 2106.05(d)(II)))

Since the claim does not contain any other additional elements that amount to significantly more than the judicial exception, the claim is not patent eligible.
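The two-level scheme recited in limitations i-vii can be illustrated with a minimal Python sketch. All names and data here are hypothetical, and the queueing model is an assumption drawn from the claim language alone, not from the application's specification:

```python
from collections import deque

def allocate_sets(batch, set_size):
    """Limitation i: allocate the batch of tasks into sets."""
    return [batch[i:i + set_size] for i in range(0, len(batch), set_size)]

def run(batch, nodes, set_size):
    """Limitations ii, v, vi: distribute sets to nodes, queue the
    remainder, and dispatch queued sets as nodes report completion."""
    set_queue = deque(allocate_sets(batch, set_size))
    busy = {}        # node -> set currently "executing"
    completed = []
    while set_queue or busy:
        # initial distribution, then redistribution of queued sets
        for node in nodes:
            if node not in busy and set_queue:
                busy[node] = set_queue.popleft()
        # "monitoring" is modeled trivially: every busy node finishes
        for node in list(busy):
            completed.append(busy.pop(node))
    return completed

done = run(list(range(10)), ["node_a", "node_b"], set_size=3)
assert sum(len(s) for s in done) == 10   # every task completes
assert len(done) == 4                    # 4 sets of up to 3 tasks
```

This is only a sequential simulation; the claim contemplates concurrent nodes with real completion monitoring, which this sketch collapses into a loop.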
Regarding claim 2, the additional element “the compute nodes are to provide an indication of a completion of a set” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (insignificant extra-solution activity of mere data output (MPEP 2106.05(g))), and under step 2B it does not amount to significantly more than the judicial exception (well-understood, routine, and conventional activity of transmitting data over a network (MPEP 2106.05(d)(II))).

Regarding claim 3, the additional elements “determine whether a quantity of the sets exceeds a quantity of available ones of the compute nodes, and when the quantity of the sets exceeds the quantity of available ones of the compute nodes, distribute at least one of the sets to the available ones of the compute nodes” do not render the claim patent eligible because under step 2A prong 1, they recite judicial exceptions (mental processes) (a person can mentally determine whether a quantity exceeds another quantity, and distribute sets to compute nodes by simply evaluating the quantities, and making a judgment of a simple assignment of nodes to sets (MPEP 2106.04(a))). Further, the additional element “provide remainder sets to a set queue to define the queued sets” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (insignificant extra-solution activity of mere data storage (MPEP 2106.05(g))), and under step 2B it does not amount to significantly more than the judicial exception (well-understood, routine, and conventional activity of storing information in memory (MPEP 2106.05(d)(II))).
Regarding claim 4, the additional element “transfer the queued sets to the compute nodes as the compute nodes become available” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (insignificant extra-solution activity of mere data output (MPEP 2106.05(g))), and under step 2B it does not amount to significantly more than the judicial exception (well-understood, routine, and conventional activity of transmitting data over a network (MPEP 2106.05(d)(II))).

Regarding claim 5, the additional element “provide the compute nodes with the queued sets such that the compute nodes are saturated with computational tasks” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (insignificant extra-solution activity of mere data output (MPEP 2106.05(g))), and under step 2B it does not amount to significantly more than the judicial exception (well-understood, routine, and conventional activity of transmitting data over a network (MPEP 2106.05(d)(II))).

Regarding claim 6, the additional element “a quantity of the sets provided to the compute nodes is based on a ratio of a number of the compute nodes to a number of tasks” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).
Regarding claim 7, the additional elements “evaluate completion of the sets, and cease monitoring based on a determination of the completion” do not render the claim patent eligible because under step 2A prong 1, they recite judicial exceptions (mental processes) (a person can mentally evaluate whether a set is complete, and stop monitoring the set by simply evaluating indicators of completion, and making a judgment to not observe the indicators (MPEP 2106.04(a))).

Regarding claim 8, the additional element “provide an indication of the completion of the sets” does not render the claim patent eligible because under step 2A prong 2, it does not integrate the judicial exception into a practical application (insignificant extra-solution activity of mere data output (MPEP 2106.05(g))), and under step 2B it does not amount to significantly more than the judicial exception (well-understood, routine, and conventional activity of transmitting data over a network (MPEP 2106.05(d)(II))).

Regarding claims 9-16 and 17-20, they comprise limitations similar to those of claims 1-8, and are therefore rejected under similar rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over SIVATHANU et al. Pub. No.: US 2022/0318052 A1 (hereafter SIVATHANU), in view of JEONG Pub.
No.: US 2013/0167152 A1 (hereafter JEONG), in view of XIAO et al. Pub. No.: US 2018/0260162 A1 (hereafter XIAO). Regarding claim 1, SIVATHANU teaches the invention substantially as claimed, including: An apparatus for multilevel distribution of computational tasks, the apparatus comprising: interface circuitry to receive or access a batch of the computational tasks ([0075] The tenants provide AI workloads for execution on the platform via interfaces (i.e., “interface circuitry”) such as pluggable data planes 110 as described herein.); machine readable instructions; and programmable circuitry to at least one of instantiate or execute the machine readable instructions ([0158] The examples disclosed herein may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine) to: allocate the batch of the computational tasks into sets ([0083] The regional schedulers 504 receive the regional AI workloads 518 associated with their regions from the global scheduler 502 from the set of AI workloads 512 (i.e., a set of AI workloads, representing a “batch” is subdivided, or “allocated” into multiple different groups of regional AI workloads 518)), distribute the sets to compute nodes ([0004] The global scheduler distributes the set of AI workloads to a set of nodes (i.e., “compute nodes”) of the cloud infrastructure platform (i.e., regional AI workloads are distributed to nodes and their respective regional, or local schedulers)), monitor the compute nodes for completion of the computational tasks ([0081] Executing AI workloads are monitored based on the performance of the cloud infrastructure platform and, based on that monitoring, the scheduling of the AI workloads is adjusted. 
[0170] Priority tiers with which the set of AI workloads are associated include performance requirements based on a throughput fraction value indicative of a ratio of an ideal time to completion of an AI workload to a real time to completion of the AI workload (i.e., monitoring performance of the execution of the AI workloads monitors completion times of the AI workloads by the compute nodes, thereby monitoring “for completion”)), distribute ones of the computational tasks to computational resources of the respective compute nodes ([0004] A local scheduler of a first node of the set of nodes schedules a subset of AI workloads of the set of AI workloads distributed to the first node to be executed on the infrastructure resources (i.e., “computational resources”) of the first node)… distribute queued tasks to the computational resources based on the monitoring of the completion of the tasks ([0142] The scheduling subsystem 700 maintains a job queue for each user. [0108] Some schedulers that do not have inter-user fairness as a goal and optimize their scheduling decisions based on minimizing job completion time may either allow User C's job to stay in the queue or move one of the existing jobs back to queue and schedule User C's job in its place. [0170] Scheduling, by the scheduler, the set of AI workloads to a set of nodes of the cloud infrastructure platform includes scheduling AI workloads to meet the performance requirements of the priority tier of each AI workload (i.e., jobs from users are placed in queues, scheduled for execution on computing resources of compute nodes, and returned from queues based on the monitoring of real time job completion times)). 
While SIVATHANU discusses monitoring completion performance of workloads and scheduling of workloads to computational resources by a local scheduler, it does not explicitly teach: distribute ones of the computational tasks to computational resources of the respective compute nodes based on the monitoring…of the computational tasks, However, in analogous art that similarly monitors workloads and schedules workloads to computational resources, JEONG teaches: distribute ones of the computational tasks to computational resources of the respective compute nodes based on the monitoring…of the computational tasks ([0009] There is provided a computing apparatus comprising: a global scheduler on a first layer configured to schedule at least one job group; a load monitor configured to collect resource state information associated with states of physical resources and set a guide with reference to the collected resource state information and set policy; and a local scheduler on a second layer configured to schedule jobs belonging to the job group according to the set guide (i.e. local scheduler schedules individual jobs on virtual cores representing “computational resources”). [0053] `CPU1` and `CPU2` represent physical cores (or physical processors). `v11` and `v21` represent virtual cores (or virtual processors) that are allocated to `CPU1.` Similarly, `v12` and `v22` represent virtual cores that are allocated to `CPU2.` `j1` to `j12` represent jobs to be executed. 
`CPU Info` represents resource state information collected by a load monitor 133, and `Guide 1` and `Guide 2` represent guide information for the respective first local scheduler 132a and second local scheduler 132b (i.e., monitoring load on virtual cores represent monitoring of the jobs executing on those cores that creates the load)), It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined JEONG’s teaching of scheduling jobs on computational resources by a local scheduler based on monitoring a load due to jobs on those resources, with SIVATHANU’s teaching of scheduling jobs by a local scheduler, to realize, with a reasonable expectation of success, a system that schedules jobs for execution on computational resources by a local scheduler, as in SIVATHANU, based on monitoring a load due to jobs, as in JEONG. A person having ordinary skill would have been motivated to make this combination so that proper load balancing may be performed thereby avoiding degradation of system performance (JEONG [0007]-[0008]). While SIVATHANU and JEONG discuss execution of sets of jobs by resources of nodes, they do not explicitly teach: monitor the compute nodes for completion of respective ones of the sets, distribute queued sets to the compute nodes based on the monitoring of the completion of the sets, However, in analogous art that similarly discusses execution of sets of jobs by nodes, XIAO teaches: monitor the compute nodes for completion of respective ones of the sets ([0060] In step 330, the controller 105 determines that the first group of service requests (i.e., “sets”) have been completely processed. In a case, after completely processing the first group of service requests, the first disk group (i.e., “compute node”) may notify the controller 105. 
For example, the first disk group may actively send, to the controller 105, a message indicating that the first group of service requests have been completely processed. In another case, the controller 105 may detect whether the first group of service requests have been completely processed. For example, the controller 105 may send a query message to the first disk group to determine a processing progress of the first group of service requests (i.e., querying disk groups for completion of service request groups represents “monitoring” the groups for completion of “sets”)), distribute queued sets to the compute nodes based on the monitoring of the completion of the sets ([0069] In step 350, the first disk group processes a fourth group of service requests in the request queue. In this embodiment of the present disclosure, to further reduce power of the storage system, if in step 332, the controller 105 determines that when the first disk group completely processes the first group of service requests, the spin up time point of the third disk group has not arrived, the first disk group may continue to process the fourth group of service requests in the request queue 701 (i.e., when the third disk group has not been spun up, the controller transfers subsequent queued groups of service requests to the first disk group when it completes processing of the first group of service requests)), It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined XIAO’s teaching of a controller monitoring a disk group to determine that it has completed processing a group of service requests, and distributing subsequent groups of service requests based on the determination, with SIVATHANU and JEONG’s teaching of processing groups of tasks by sets of compute nodes, to realize, with a reasonable expectation of success, a system that processes groups of tasks by sets of compute nodes, as in SIVATHANU and JEONG, 
when a controller determines to distribute subsequent task groups when the compute nodes complete previous task groups, as in XIAO. A person having ordinary skill would have been motivated to make this combination to ensure that resources are being utilized optimally. Regarding claim 2, XIAO further teaches: the compute nodes are to provide an indication of a completion of a set ([0060] In step 330, the controller 105 determines that the first group of service requests (i.e., a first “set”) have been completely processed. In a case, after completely processing the first group of service requests, the first disk group (i.e., “compute node” that processed the first group of service requests) may notify the controller 105. For example, the first disk group may actively send, to the controller 105, a message indicating that the first group of service requests have been completely processed). Regarding claim 3, XIAO further teaches: the programmable circuitry is to determine whether a quantity of the sets exceeds a quantity of available ones of the compute nodes, and when the quantity of the sets exceeds the quantity of available ones of the compute nodes, distribute at least one of the sets to the available ones of the compute nodes and provide remainder sets to a set queue to define the queued sets ([0076] In step 801, a controller 105 may receive multiple groups of service requests (i.e., “sets of computational tasks”) sent by a host. In step 805, the controller 105 may obtain a request queue 901 (i.e., “set queue”)…The request queue 901 shown in FIG. 9A and FIG. 9B includes multiple groups of service requests that are sorted according to a processing sequence. 
[0077] In step 810, the controller 105 spins up P disk groups (i.e., “compute nodes”)…P is a natural number not less than 1 and not greater than X…For example, when the quantity of service requests in the request queue is greater than the value of X, the value of P may be equal to the value of X (i.e., the quantity of groups of service requests in the queue are greater than the number of disk groups); when the quantity of service requests in the request queue is less than the value of X, the value of P can be only less than the value of X. [0078] In step 815, the P disk groups process P groups of service requests in the request queue. In this embodiment of the present disclosure, at any moment, one disk group can process only one group of service requests…the first disk group processes the first group of service requests, the Pth disk group processes the Pth group of service requests, and so on (i.e., P groups of service requests are distributed to P disk groups, and any groups of service requests greater than P remain on the queue)). Regarding claim 4, XIAO further teaches: the programmable circuitry is to transfer the queued sets to the compute nodes as the compute nodes become available ([0069] In step 350, the first disk group processes a fourth group of service requests in the request queue. In this embodiment of the present disclosure, to further reduce power of the storage system, if in step 332, the controller 105 determines that when the first disk group completely processes the first group of service requests, the spin up time point of the third disk group has not arrived, the first disk group may continue to process the fourth group of service requests in the request queue 701 (i.e., when the third disk group has not been spun up, the controller transfers subsequent queued groups of service requests to the first disk group when it completes processing of the first group of service requests)). 
Regarding claim 5, XIAO further teaches: the programmable circuitry is to provide the compute nodes with the queued sets such that the compute nodes are saturated with computational tasks ([0069] In step 350, the first disk group processes a fourth group of service requests in the request queue. In this embodiment of the present disclosure, to further reduce power of the storage system, if in step 332, the controller 105 determines that when the first disk group completely processes the first group of service requests, the spin up time point of the third disk group has not arrived, the first disk group may continue to process the fourth group of service requests in the request queue 701 (i.e., when the third disk group has not been spun up, the controller transfers subsequent queued groups of service requests to the first disk group when it completes processing of the first group of service requests, thereby minimizing the time that the first disk group is inactive, or in other words, “saturating” the first disk group with groups of service requests)). Regarding claim 6, XIAO further teaches: a quantity of the sets provided to the compute nodes is based on a ratio of a number of the compute nodes to a number of tasks ([0078] In step 815, the P disk groups (i.e., number of compute “nodes”) process P groups of service requests (i.e., quantity of “sets”) in the request queue. In this embodiment of the present disclosure, at any moment, one disk group can process only one group of service requests (i.e., the number of service request groups to be processed represent a number of “tasks” which are provided to the disk groups in a 1:1 ratio)). 
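The claim 3 and claim 4 behavior mapped above (one set per available node, remainder held in a set queue, queued sets transferred as nodes become available) can be sketched as follows. The function and variable names are hypothetical illustrations, not drawn from the application or any of the cited references:

```python
from collections import deque

def dispatch(sets, available_nodes):
    """One set per available node; any excess sets stay queued (claim 3)."""
    queue = deque(sets)
    active = {}
    for node in available_nodes:
        if not queue:
            break
        active[node] = queue.popleft()
    return active, queue

def on_complete(node, active, queue):
    """Transfer a queued set to a node as it becomes available (claim 4)."""
    active.pop(node, None)
    if queue:
        active[node] = queue.popleft()

# 3 sets, 2 available nodes: one set per node, one set queued
active, queue = dispatch([["t1"], ["t2"], ["t3"]], ["node_a", "node_b"])
assert len(active) == 2 and len(queue) == 1
on_complete("node_a", active, queue)   # node_a frees up, takes the queued set
assert active["node_a"] == ["t3"] and not queue
```

Keeping nodes continuously supplied from the queue in this way is also one plausible reading of claim 5's "saturated with computational tasks" limitation.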
Regarding claim 7, XIAO further teaches: the programmable circuitry is to evaluate completion of the sets, and cease monitoring based on a determination of the completion ([0115] An embodiment of the present disclosure further provides a computer program product for processing data, including a computer readable storage medium that stores program code. An instruction included in the program code is used to perform the method process described in any one of the foregoing method embodiments. [0061] In step 335, the controller 105 controls the first disk group to switch from an active state to an inactive state. Specifically, when the first disk group completely processes the first group of service requests, the controller 105 switches the first disk group from the active state to the inactive state (i.e., programmable circuitry of the computer program product enables controllers to query for processing progress, and switch disk groups to inactive states which ceases querying of processing progress)). Regarding claim 8, XIAO further teaches: the programmable circuitry is to provide an indication of the completion of the sets ([0115] An embodiment of the present disclosure further provides a computer program product for processing data, including a computer readable storage medium that stores program code. An instruction included in the program code is used to perform the method process described in any one of the foregoing method embodiments. [0060] In step 330, the controller 105 determines that the first group of service requests (i.e., a first “set”) have been completely processed. In a case, after completely processing the first group of service requests, the first disk group (i.e., “compute node” that processed the first group of service requests) may notify the controller 105. 
For example, the first disk group may actively send, to the controller 105, a message indicating that the first group of service requests have been completely processed (i.e., programmable circuitry of the computer program product enables disk groups to provide the messages of processing completion)).

Regarding claims 9-16 and 17-20, they comprise limitations similar to those of claims 1-8, and are therefore rejected under similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W AYERS, whose telephone number is (571) 272-6420. The examiner can normally be reached M-F, 8:30 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL W AYERS/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Jul 25, 2023
Application Filed
Jan 16, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547446
Computing Device Control of a Job Execution Environment Based on Performance Regret of Thread Lifecycle Policies
2y 5m to grant Granted Feb 10, 2026
Patent 12498950
SIGNAL PROCESSING DEVICE AND DISPLAY APPARATUS FOR VEHICLE USING SHARED MEMORY TO TRANSMIT ETHERNET AND CONTROLLER AREA NETWORK DATA BETWEEN VIRTUAL MACHINES
2y 5m to grant Granted Dec 16, 2025
Patent 12493497
DETECTION AND HANDLING OF EXCESSIVE RESOURCE USAGE IN A DISTRIBUTED COMPUTING ENVIRONMENT
2y 5m to grant Granted Dec 09, 2025
Patent 12461768
CONFIGURING METRIC COLLECTION BASED ON APPLICATION INFORMATION
2y 5m to grant Granted Nov 04, 2025
Patent 12423149
LOCK-FREE WORK-STEALING THREAD SCHEDULER
2y 5m to grant Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+56.2%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 287 resolved cases by this examiner. Grant probability derived from career allow rate.
