Prosecution Insights
Last updated: April 19, 2026
Application No. 18/200,461

MULTI-TASK DEEP LEARNING OF EMPLOYER-PROVIDED BENEFIT PLANS

Status: Non-Final OA, §101
Filed: May 22, 2023
Examiner: POLLOCK, GREGORY A
Art Unit: 3691
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: ADP Inc.
OA Round: 6 (Non-Final)
Grant Probability: 11% (At Risk)
Expected OA Rounds: 6-7
Time to Grant: 6y 9m
Grant Probability with Interview: 24%

Examiner Intelligence

Grants only 11% of cases.
Career Allow Rate: 11% (71 granted / 642 resolved; -40.9% vs TC avg)
Interview Lift: +12.6% in resolved cases with interview (moderate lift)
Typical Timeline: 6y 9m average prosecution; 33 currently pending
Career History: 675 total applications across all art units
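
For readers who want to see how the headline numbers fit together, here is a quick back-of-the-envelope check in Python using the figures above. Treating the +12.6% interview lift as additive percentage points on top of the base allow rate is an assumption, but it matches the displayed 24% figure.

```python
# Reproduce the headline examiner statistics from their raw inputs.
granted, resolved = 71, 642

career_allow_rate = granted / resolved        # 0.1106 -> displayed as 11%
print(f"Career allow rate: {career_allow_rate:.1%}")

# Assumption: the interview lift is additive in percentage points.
interview_lift = 0.126
with_interview = career_allow_rate + interview_lift
print(f"Grant probability with interview: {with_interview:.1%}")  # ~23.7% -> displayed as 24%
```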

Statute-Specific Performance

§101: 38.1% (-1.9% vs TC avg)
§103: 30.2% (-9.8% vs TC avg)
§102: 4.7% (-35.3% vs TC avg)
§112: 21.6% (-18.4% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 642 resolved cases.

Office Action

§101
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to claims filed 10/08/2025 and Applicant's communication regarding application 18/200461 filed 10/08/2025. Claims 22-29, 31, 32, and 35-44 have been examined with this office action.

37 CFR 1.132 Affidavits

The affidavit filed on 06/06/2025 under 37 CFR 1.132 has been considered but is ineffective to overcome the 35 USC § 101 claim rejections. Regarding applicant's statement that:

9. Rather, the claims provide the technical solution to the technical problem of neural networks that are improperly configured/trained or lacking sufficient cost functions, resulting in suboptimal performance in multi-task deep learning applications for generating target characteristics for a (e.g., benefit) plan, as recited hereinabove. For example, stacking the neural networks allows for the reuse of the first network's hidden-layer representation of the aggregated first, second, and third data sets, which allows for executing temporal feature extraction once at the lower level, without repeating for each plan. Because the upper-level neural network(s) can receive such shared representation as input (from the lower-level neural network(s)), the upper-level neural network can be instantiated with fewer parameters, thereby achieving a relatively lower cost function value (e.g., the second value) in fewer optimization iterations. As such, less memory and processor bandwidth are required during both training and inference. The resulting improvement in convergence speed, resource efficiency, and predictive accuracy involves an enhancement to the operation of the machine-learning system itself, demonstrating a technical improvement to the technical problem discussed hereinabove. Indeed, the claimed invention provides an actual improvement to the functioning of technology.

There is no showing that the objective evidence of patent eligibility is commensurate in scope with the claims. Specifically, there is no evidence of improvement to neural network technology described in the specification or claimed. Instead, the stacking of neural networks is merely applied to the abstract idea of the claims. The specification describes stacking in:

[0055] Neural networks can be stacked to create deep networks. After training one neural net, the activities of its hidden nodes can be used as input training data for a higher level, thereby allowing stacking of neural networks. Such stacking makes it possible to efficiently train several layers of hidden nodes.

[0061] By employing an RNN, the illustrative embodiments are able to model benefit plans for different employers based on benefit plans of other relevant entities and changes to those plans over time. For example, illustrative embodiments extract useful static and dynamic features based on different timestamps, which are chained together based on the natural order of timestamps for each customer. Static features (attributes) comprise features that most likely will not change at different timestamps for the same business entity such as, e.g., industry or sector, geographic location, business partner type, etc. Dynamic features comprise features that are likely to change across timestamps for a given business entity. The sequential data (both of descriptive features and outputs) can be fed into an RNN-style model to learn deep representations. For such a representation learning, the illustrative embodiments can stack multiple layers.
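
As a purely illustrative reading of the stacking passages quoted above (not the application's actual architecture), the following PyTorch sketch shows the idea at issue: a lower-level recurrent network encodes the timestamp-ordered plan features once into a shared hidden representation, and a smaller upper-level head reuses that representation. All module names, dimensions, and layer choices here are assumptions.

```python
# Minimal sketch of the stacking idea in [0055]/[0061], under assumed dimensions.
import torch
import torch.nn as nn

class LowerEncoder(nn.Module):
    """Encodes a sequence of per-timestamp plan features once."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)

    def forward(self, x):                 # x: (batch, timestamps, n_features)
        _, h = self.rnn(x)                # h: (1, batch, hidden)
        return h.squeeze(0)               # shared hidden representation

class UpperHead(nn.Module):
    """Small task head reusing the shared representation (fewer parameters)."""
    def __init__(self, hidden: int = 64, n_targets: int = 8):
        super().__init__()
        self.out = nn.Linear(hidden, n_targets)

    def forward(self, rep):
        return self.out(rep)              # e.g., target characteristics for a plan

encoder, head = LowerEncoder(n_features=16), UpperHead()
for p in encoder.parameters():            # reuse: temporal feature extraction
    p.requires_grad = False               # runs once at the lower level
x = torch.randn(4, 12, 16)                # 4 plans, 12 timestamps, 16 features
preds = head(encoder(x))                  # the "stacked" deep network
```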
The affidavit appears to cite possible advantages of using stacked neural networks, such as that stacking "allows for the reuse of the first network's hidden-layer representation of the aggregated first, second, and third data sets, which allows for executing temporal feature extraction once at the lower level, without repeating for each plan. Because the upper-level neural network(s) can receive such shared representation as input (from the lower-level neural network(s)), the upper-level neural network can be instantiated with fewer parameters, thereby achieving a relatively lower cost function value (e.g., the second value) in fewer optimization iterations. As such, less memory and processor bandwidth are required during both training and inference". However, the present application appears to be directed toward employer-provided benefit plans. Any purported improvements are merely a result of applying stacked neural networks. There is no claimed or described improvement to stacked neural networks. Thus, there is no showing that the objective evidence of patent eligibility is commensurate in scope with the claims.

In view of the foregoing, when all of the evidence is considered, the totality of the rebuttal evidence of patent eligibility fails to outweigh the evidence of patent ineligibility.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 22-29, 31, 32, and 35-44 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of generating an employee benefit plan without significantly more.

Subject Matter Eligibility Standard

When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, or abstract idea), and if so, it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. Examples of abstract ideas include fundamental economic practices; certain methods of organizing human activities; an idea itself; and mathematical relationships/formulas. Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. _ (2014), as provided by the interim guidelines, FR 12/16/2014, Vol. 79, No. 241.

Analysis

Step 1: The claimed invention must fall within one of the four statutory categories. 35 U.S.C. 101 defines the four categories of invention that Congress deemed to be the appropriate subject matter of a patent: processes, machines, manufactures, and compositions of matter.
In this case, independent claim 22 and all claims which depend from it are directed toward a method, independent claim 31 and all claims which depend from it are directed toward an apparatus (system), and independent claim 39 and all claims which depend from it are directed toward a computer-readable medium storing instructions to perform functions/steps. As such, all claims fall within one of the four categories of invention deemed to be appropriate subject matter.

Step 2A, Prong 1: Under Step 2A, Prong 1 of the 2019 Revised § 101 Guidance, it is determined whether the claims are directed to a judicial exception such as a law of nature, a natural phenomenon, or an abstract idea (see Alice, 134 S. Ct. at 2355) by identifying the specific limitation(s) in the claim that recite an abstract idea, and then determining whether the identified limitation(s) fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG.

Specifically, claim 22 comprises, inter alia, the functions or steps of "A method, comprising: aggregating, by a data processing system coupled with memory, a first data set from a first source and a second data set from a second source; identifying, by the data processing system, from a plurality of plans, a first plan associated with the first source and a second plan associated with the second source; identifying, by the data processing system, first characteristics for the first plan at a first time, second characteristics for the second plan at a second time, third characteristics for the first plan at a time different from the first time and fourth characteristics for the second plan at a time different than the second time; determining, by the data processing system using the first data set, the first characteristics, and the third characteristics, a first metric associated with the first plan; determining, by the data processing system using the second data set, the second characteristics, and the fourth characteristics, a second metric associated with the second plan; identifying, by the data processing system, similarities between the first data set, the second data set, and a third data set from a third source; determining, by the data processing system, correlations between the first plan and the second plan responsive to identifying the similarities; training, by the data processing system, a first neural network for deployment using the determined correlations, and the first metric and the second metric associated with the first plan and the second plan, respectively; determining, by the data processing system, using a cost function, a first value indicative of performance of the first trained neural network; generating, by the data processing system, target characteristics using the trained neural network, the target characteristics related to one or more of the first characteristics, the second characteristics, the third characteristics, or the fourth characteristics; generating, by the data processing system, a third plan using the trained neural network having as inputs the correlations, the target characteristics, and the third data set, the third plan comprising a subset of the target characteristics; deploying, by the data processing system, the third plan by (i) generating a display comprising the third plan and (ii) providing the third plan via the display; receiving, by the data processing system, a third metric representing activities associated with the third plan during the deployment; training, by the data processing system, a
second neural network based on the third plan output by the first neural network and the third metric received from the deployment; stacking, by the data processing system, the second neural network with the first neural network to create a deep neural network formed from the stack of the first neural network with the second neural network; determining, by the data processing system, using the cost function, a second value indicative of performance of the deep neural network, wherein the second value is less than the first value, indicating that the deep neural network has greater performance relative to the first neural network before stacking with the second neural network; based on the deep neural network having greater performance relative to the first neural network before stacking with the second neural network, generating, by the data processing system, using the deep neural network having the second value indicative of performance, updated target characteristics for a subsequent plan; generating, by the data processing system, the display comprising the subsequent plan; and providing, by the data processing system, the subsequent plan via the display".

Claim 31 comprises, inter alia, the functions or steps of "A system, comprising a data processing system comprising a processor coupled with memory, the data processing system to: aggregate a first data set from a first source and a second data set from a second source; identify, from a plurality of plans, a first plan associated with the first source and a second plan associated with the second source; identify first characteristics for the first plan at a first time, second characteristics for the second plan at a second time, third characteristics for the first plan at a time different from the first time and fourth characteristics for the second plan at a time different than the second time; determine using the first data set, the first characteristics, and the third characteristics, a first metric associated with the first plan; determine using the second data set, the second characteristics, and the fourth characteristics, a second metric associated with the second plan; identify similarities between the first data set, the second data set, and a third data set from a third source; determine correlations between the first plan and the second plan responsive to identifying the similarities; train a first neural network for deployment using the determined correlations, and the first metric and the second metric associated with the first plan and the second plan, respectively; determine using a cost function, a first value indicative of performance of the first neural network; generate target characteristics using the first neural network, the target characteristics related to one or more of the first characteristics, the second characteristics, the third characteristics, or the fourth characteristics; generate a third plan using the first neural network having as inputs the correlations, the target characteristics, and the third data set, the third plan comprising a subset of the target characteristics; deploy the third plan by (i) generating a display comprising the third plan and (ii) providing the third plan via the display; receive a third metric representing activities associated with the third plan during the deployment; train a second neural network based on the third plan output by the first neural network and the third metric received from the deployment; stack the second neural network with the first
neural network to create a deep neural network formed from the stack of the first neural network with the second neural network; determine, using the cost function, a second value indicative of performance of the deep neural network, wherein the second value is less than the first value, indicating that the deep neural network has greater performance relative to the first neural network before stacking with the second neural network; based on the deep neural network having greater performance relative to the first neural network before stacking with the second neural network, generate, using the deep neural network having the second value indicative of performance, updated target characteristics for a subsequent plan; generate the display comprising the subsequent plan; and provide the subsequent plan via the display".

Claim 39 comprises, inter alia, the functions or steps of "A non-transitory computer-readable medium, comprising instructions embodied thereon, the instructions to cause a processor to: aggregate a first data set from a first source and a second data set from a second source; identify, from a plurality of plans, a first plan associated with the first source and a second plan associated with the second source; identify first characteristics for the first plan at a first time, second characteristics for the second plan at a second time, third characteristics for the first plan at a time different from the first time and fourth characteristics for the second plan at a time different than the second time; determine using the first data set, the first characteristics, and the third characteristics, a first metric associated with the first plan; determine using the second data set, the second characteristics, and the fourth characteristics, a second metric associated with the second plan; identify similarities between the first data set, the second data set, and a third data set from a third source; determine correlations between the first plan and the second plan responsive to identifying the similarities; train a first neural network for deployment using the determined correlations, and the first metric and the second metric associated with the first plan and the second plan, respectively; determine using a cost function, a first value indicative of performance of the first neural network; generate target characteristics using the first neural network, the target characteristics related to one or more of the first characteristics, the second characteristics, the third characteristics, or the fourth characteristics; generate a third plan using the first neural network having as inputs the correlations, the target characteristics, and the third data set, the third plan comprising a subset of the target characteristics; deploy the third plan by (i) generating a display comprising the third plan and (ii) providing the third plan via the display; receive a third metric representing activities associated with the third plan during the deployment; train a second neural network based on the third plan output by the first neural network and the third metric received from the deployment; stack the second neural network with the first neural network to create a deep neural network formed from the stack of the first neural network with the second neural network; determine, using the cost function, a second value indicative of performance of the deep neural network, wherein the second value is less than the first value, indicating that the deep neural network has
greater performance relative to the first neural network before stacking with the second neural network; based on the deep neural network having greater performance relative to the first neural network before stacking with the second neural network, generate, using the deep neural network having the second value indicative of performance, updated target characteristics for a subsequent plan; generate the display comprising the subsequent plan; and provide the subsequent plan via the display".

Those claim limits in bold are identified as claim limits directed toward the abstract idea, while those that are un-bolded are identified as additional elements. The cited limitations as drafted are systems and methods that, under their broadest reasonable interpretation, cover performance of a method of organizing human activity, but for the recitation of the generic computer components. Further, none of the limitations recite technological implementation details for any of the steps but, instead, only recite broad functional language performed by the generic use of at least one processor. Generating an employee benefit plan is a fundamental economic practice long prevalent in commerce systems. If a claim limitation, under its broadest reasonable interpretation, covers a fundamental economic principle or practice but for the general linking to a technological environment, then it falls within the organizing human activity grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: Next, it is determined whether the claim is directed to the abstract concept itself or whether it is instead directed to some technological implementation or application of, or improvement to, this concept, i.e., integrated into a practical application. See, e.g., Alice, 573 U.S. at 223, discussing Diamond v. Diehr, 450 U.S. 175 (1981). The mere introduction of a computer or generic computer technology into the claims need not alter the analysis. See Alice, 573 U.S. at 223-24. "[T]he relevant question is whether the claims here do more than simply instruct the practitioner to implement the abstract idea on a generic computer." Alice, 573 U.S. at 225. In the present case, the judicial exception is not integrated into a practical application. The claim limitations are not indicative of integration into a practical application by claiming an improvement to the functioning of the computer or to any other technology or technical field. Further, the claim limitations are not indicative of integration into a practical application by applying or using the judicial exception in some other meaningful way. In particular, the claims contain the following additional elements: a data processing system coupled with memory; the trained neural network; iteratively training, by the data processing system, the neural network; stacking neural networks; and a display.
However, the specification describes these additional elements at a high level of generality, using exemplary language or as part of a generic technological environment: a data processing system coupled with memory (Figure 9, elements 900 and 906; [0081]-[0083]); the trained neural network ([0043]-[0045], [0051]: "Training a neural network is conducted with standard mini-batch stochastic gradient descent-based approaches, where the gradient is calculated with the standard backpropagation procedure…"); iteratively training, by the data processing system, the neural network ([0043]-[0045], [0051]); stacking neural networks ([0055]: "Neural networks can be stacked to create deep networks. After training one neural net, the activities of its hidden nodes can be used as input training data for a higher level, thereby allowing stacking of neural networks. Such stacking makes it possible to efficiently train several layers of hidden nodes."; [0061]); and a display (Figure 9, element 914; [0083], [0088]). These are functions any general purpose computer performs, such that they amount to no more than mere instructions to apply the exception in a particular technological environment. Further, none of the limitations recite technological implementation details for any of the steps but, instead, only recite broad functional language performed by the generic use of at least one processor. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claim is directed toward an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea(s). As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the abstract idea(s) amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. These generic computer components are claimed at a high level of generality to perform their basic functions, which amounts to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see the specification as cited above for the additional elements); further, see insignificant extra-solution activity, MPEP § 2106.05 I. A. iii, 2106.05(b), 2106.05(b) III, 2106.05(g). Thus, the claims are not patent eligible.

As for dependent claims 23, 25-29, 32, 35-38, 40-41, and 44, these claims recite limitations that further define the same abstract idea using previously identified additional elements noted from the respective independent claims from which they depend. Therefore, the cited dependent claims are considered patent ineligible for the reasons given above.

As for dependent claims 24, 42, and 43, these claims recite limitations that further define the same abstract idea using previously identified additional elements noted from the respective independent claims from which they depend.
In addition, the cited dependent claims recite the additional elements: the deep neural network comprises three layers (claim 24); and iteratively training, by the data processing system, the neural network by executing a stochastic gradient descent (claims 42 and 43). However, the specification describes these additional elements at a high level of generality, using exemplary language or as part of a generic technological environment: the neural network comprising three layers ([0067]: "In an illustrative embodiment, RNN 602 might comprise three layers (not shown). However, more layers can be used if needed…"); and iteratively training the second neural network by executing a stochastic gradient descent ([0043]-[0045], [0051]: "Training a neural network is conducted with standard mini-batch stochastic gradient descent-based approaches, where the gradient is calculated with the standard backpropagation procedure…"). These are functions any general purpose computer performs, such that they amount to no more than mere instructions to apply the exception in a particular technological environment. Even in combination, these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself. Therefore, the cited dependent claims are ineligible.

Prior Art

Claims 22-41 overcome the prior art of record such that none of the cited prior art references' disclosures can be applied to form the basis of a 35 USC § 102 rejection, nor can they be combined to fairly suggest, in combination, the basis of a 35 USC § 103 rejection when the limitations are read in the particular environment of the claims. Pingali (PGPub Document No. 20190304023) is the closest prior art. However, Pingali does not teach the specific inputs/outputs to the correlation and models. Therefore, the claims may be allowable if amended to overcome the rejection(s) under 35 U.S.C. 101 set forth in this Office action.

Response to Arguments

Applicant's arguments with regards to the claims have been fully considered but they are not persuasive.

EXAMINER'S RESPONSE TO APPLICANT REMARKS CONCERNING Claim Rejections - 35 USC § 101: Applicant's arguments with regards to 35 USC § 101 have been fully considered but are not persuasive. Regarding applicant's argument that "nothing about this invention can be considered a manual process, and the invention simply cannot be performed in the human mind, nor would it ever be performed by the human mind in the manner contemplated in the claims", the examiner contends that the neural networks as claimed and described in the specification are merely applied to the abstract idea of the claims. There is no improvement to neural network technology. Further, the examiner has indicated that the claims are directed toward a fundamental economic practice and not a mental process. Additionally, the examiner maintains that generating an employee benefit plan is a fundamental economic practice long prevalent in commerce systems. If a claim limitation, under its broadest reasonable interpretation, covers a fundamental economic principle or practice but for the general linking to a technological environment, then it falls within the organizing human activity grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Regarding arguments directed toward the "greater performance" of the neural networks, any such greater performance is merely a result of iteratively training or cascading (stacking) neural networks.
However, there is no improvement to the training or cascading of neural networks claimed or described in the specification. The use of a cost function (specification [0053]) and gradient descent (specification [0054]) merely applies known algorithms for their intended function of estimating how the machine learning is performing. The fact that machine learning models and stacked neural networks achieve greater performance as they learn is expected and is, indeed, the purpose of using a machine learning algorithm. Again, there is no improvement to machine learning, or to the measurement of machine learning performance, in the amended claims. Further, there is no improvement to the training of the neural network: training a neural network results in a neural network with greater performance, which is its intended purpose, and there is no described or claimed improvement to the training operation itself. The intended purpose, to generate benefit plans such as insurance plans, merely applies the technological environment to an abstract idea. As such, the examiner maintains the rejection.

Conclusion

Prior art made of record and not relied upon that is considered pertinent to applicant's disclosure is listed in the Notice of References Cited (items A-B) submitted 12/10/2020 and in the conclusion section of the office action submitted 12/10/2020.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Gregory A Pollock, whose telephone number is (571) 270-1465. The examiner can normally be reached M-F, 8 AM - 4 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abhishek Vyas, can be reached at (571) 270-1836. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Gregory A Pollock/
Primary Examiner, Art Unit 3691
12/10/2025
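
For context on the cost-function and gradient-descent passages cited above (specification [0051], [0053]-[0054]), here is a minimal sketch of a standard mini-batch SGD training loop with backpropagation, plus a cost evaluation of the kind that would yield the claimed "first value" and "second value". The model, loss choice (MSE), data, and hyperparameters are illustrative assumptions, not the application's actual implementation.

```python
# Sketch: mini-batch SGD with backpropagation and a cost-function comparison.
import torch
import torch.nn as nn

def train(model, batches, epochs=5, lr=0.01):
    cost_fn = nn.MSELoss()                          # the cost function ([0053])
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in batches:                        # mini-batches ([0051])
            opt.zero_grad()
            loss = cost_fn(model(x), y)
            loss.backward()                         # standard backpropagation
            opt.step()
    return model

@torch.no_grad()
def cost(model, batches):
    """Evaluate the cost function: a 'value indicative of performance'."""
    cost_fn = nn.MSELoss(reduction="sum")
    total = n = 0
    for x, y in batches:
        total += cost_fn(model(x), y).item()
        n += y.numel()
    return total / n

# Tiny synthetic data purely so the sketch runs end to end.
batches = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(10)]
first_network = train(nn.Linear(16, 1), batches)
first_value = cost(first_network, batches)
# After stacking a second network on the first, one would recompute:
# second_value = cost(deep_network, batches)       # expect second_value < first_value
```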

Prosecution Timeline

May 22, 2023: Application Filed
Dec 29, 2023: Non-Final Rejection (§101)
Feb 12, 2024: Applicant Interview (Telephonic)
Feb 12, 2024: Examiner Interview Summary
Mar 28, 2024: Response Filed
Apr 04, 2024: Final Rejection (§101)
Jun 10, 2024: Response after Non-Final Action
Jun 19, 2024: Response after Non-Final Action
Jul 09, 2024: Request for Continued Examination
Jul 11, 2024: Response after Non-Final Action
Jul 17, 2024: Non-Final Rejection (§101)
Sep 25, 2024: Applicant Interview (Telephonic)
Sep 25, 2024: Examiner Interview Summary
Jan 23, 2025: Response Filed
Feb 22, 2025: Final Rejection (§101)
Jun 06, 2025: Response after Non-Final Action
Jun 06, 2025: Request for Continued Examination
Jun 14, 2025: Response after Non-Final Action
Jul 08, 2025: Non-Final Rejection (§101)
Aug 14, 2025: Examiner Interview Summary
Aug 14, 2025: Applicant Interview (Telephonic)
Oct 08, 2025: Response Filed
Oct 08, 2025: Response after Non-Final Action
Dec 10, 2025: Non-Final Rejection (§101)
Feb 20, 2026: Applicant Interview (Telephonic)
Feb 20, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12094003: Apparatus, method and system for providing an electronic marketplace for trading credit default swaps and other financial instruments, including a trade management service system (granted Sep 17, 2024; 2y 5m to grant)
Patent 12045786: Systems and Methods for Global Transfers (granted Jul 23, 2024; 2y 5m to grant)
Patent 12033216: System and Method for a Machine Learning Service (granted Jul 09, 2024; 2y 5m to grant)
Patent 11922381: Distributed Transaction System (granted Mar 05, 2024; 2y 5m to grant)
Patent 11900455: Method and Apparatus for Decentralized VC Funds (granted Feb 13, 2024; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 6-7
Grant Probability: 11%
With Interview: 24% (+12.6%)
Median Time to Grant: 6y 9m
PTA Risk: High
Based on 642 resolved cases by this examiner. Grant probability derived from career allow rate.
