Prosecution Insights
Last updated: April 19, 2026
Application No. 18/313,189

CROSS-PLATFORM DISTILLATION FRAMEWORK

Non-Final OA (§103)
Filed: May 05, 2023
Examiner: AHMED, SYED RAYHAN
Art Unit: 2126
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 4y 4m
With Interview: 99%

Examiner Intelligence

Career allow rate: 71% (5 granted / 7 resolved), +16.4% vs TC avg (above average)
Interview lift: strong, +50.0% among resolved cases with an interview
Typical timeline: 4y 4m avg prosecution; 32 currently pending
Career history: 39 total applications across all art units

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
TC averages are estimates. Based on career data from 7 resolved cases.

Office Action

§103
DETAILED ACTION

This Office Action is sent in response to the Applicant's Communication received on 05/05/2023 for application number 18/313,189. The Office hereby acknowledges receipt of the following, which have been placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, IDS, and Claims. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 3, 6, 7, 11, 13, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Huh et al. (US 20240242086 A1), hereinafter Huh, in view of Yuan et al. (CN114676823A, see attached translation), hereinafter Yuan.
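As technical context for the independent claims addressed below: the recited operations amount to a two-round teacher-student distillation loop (student forward pass, teacher forward pass, loss, parameter update, then repeat on a second sample). A toy sketch, using hypothetical scalar linear models and a hypothetical learning rate that are not taken from the application or the cited art:

```python
def teacher(x):
    # Stand-in for the trained teacher model (hypothetical).
    return 3.0 * x

def student(w, x):
    # One-parameter student model (hypothetical).
    return w * x

w, lr = 0.0, 0.05
samples = [1.0, 2.0]  # "plurality of training samples"

for x in samples:  # first sample, then second sample after adjustment
    s_out = student(w, x)             # first / third output
    t_out = teacher(x)                # second / fourth output
    loss = (s_out - t_out) ** 2       # first / second loss
    grad = 2.0 * (s_out - t_out) * x  # gradient of loss w.r.t. w
    w -= lr * grad                    # adjust / readjust the parameter
```

After both rounds the student parameter has moved from 0.0 toward the teacher's 3.0, which is the point of the "adjusting ... readjusting" sequence in claim 1.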
Regarding claim 1, Huh teaches:

A computer-implemented method executed by data processing hardware that causes the data processing hardware [Para 0007, there is provided a method for learning activated neurons responses transfer using sparse activation maps (SAMs) in knowledge distillation performed on a computing device including one or more processors and a memory that stores one or more programs executed by the one or more processors] to perform operations comprising:

obtaining a plurality of training samples [Fig 2, Input Data];

generating, using a student neural network model, a first output (Para 0007, sparse activation maps) based on a first training sample (Para 0007, input data) of the plurality of training samples [Para 0007, extracting student sparse activation maps (SAMs) by extracting a feature map from a learning model of the student network based on the input data and filtering the extracted feature map];

generating, using a teacher neural network model, a second output based on the first training sample of the plurality of training samples [Para 0007, extracting teacher sparse activation maps (SAMs) by extracting a feature map from a learning model of the teacher network based on input data and filtering the extracted feature map];

determining, based on the first output and the second output, a first loss [Para 0007, computing a loss function by comparing the extracted teacher sparse activation maps with the extracted student sparse activation maps];

adjusting, based on the first loss, one or more parameters of the student neural network model [Para 0007, the computing device 12 can update the learning model of the student network through weights that minimize the computed loss function];

after adjusting the one or more parameters of the student neural network model, generating, using the student neural network model, a third output based on a second training sample (Para 0059, augmented data) of the plurality of training samples [Para 0057, training can be repeated until the learning model of the learning network has the same neuron activation boundaries as the learning model of the teacher network; Para 0059, the learning model of the student network can be trained by using not only input data but also augmented data. Here, augmented data can be generated by transforming input data; Para 0007, extracting student sparse activation maps (SAMs) by extracting a feature map from a learning model of the student network based on the input data and filtering the extracted feature map];

generating, using the teacher neural network model, a fourth output based on the second training sample (Para 0059, augmented data) of the plurality of training samples [Para 0033, As illustrated in FIG. 2, the learning model of a knowledge distillation-based teacher-student network includes a teacher network and a student network; Para 0057, training can be repeated until the learning model of the learning network has the same neuron activation boundaries as the learning model of the teacher network; Para 0059, augmented data can be generated by transforming input data; Para 0007, computing a loss function by comparing the extracted teacher sparse activation maps with the extracted student sparse activation maps];

determining, based on the third output and the fourth output, a second loss [Para 0057, training can be repeated until the learning model of the learning network has the same neuron activation boundaries as the learning model of the teacher network; Para 0007, computing a loss function by comparing the extracted teacher sparse activation maps with the extracted student sparse activation maps]; and

readjusting, based on the second loss, the one or more parameters of the student neural network model [Para 0062, the computing device 12 can update the student network using the loss of knowledge distillation between the teacher network and the student network, cross-entropy loss between predicted values and actual values, loss of knowledge distillation between teacher assistant networks and student assistant networks, and loss between the sparse activation maps of the teacher network and the sparse activation maps of the student network; Para 0007, the computing device 12 can update the learning model of the student network through weights that minimize the computed loss function].

Huh does not teach using a student neural network model executing on a first processing unit, and using a teacher neural network model executing on a second processing unit, the second processing unit remote from the first processing unit.

Yuan teaches using a student neural network model (Para 0008, the simulator module) executing on a first processing unit (Para 0008, the simulator module exists in each computing node), and using a teacher neural network model executing on a second processing unit (Para 0008, an evaluator module), the second processing unit remote from the first processing unit (Para 0013, in the reinforcement learning framework based on P2P networks… the current best neural network model will also provide inference services to the outside world through an RPC remote procedure call protocol) [Para 0008, A reinforcement learning framework based on a P2P network includes a simulator module, a trainer module, and an evaluator module; wherein, the simulator module exists in each computing node, loads the current best neural network model, performs self-play training to generate a large amount of data, and uploads the generated data to a common data node ReplayBuffer; the trainer module samples data from the common data node ReplayBuffer, trains a new neural network model, and saves the trained neural network model in the data node Model Pool; the evaluator module loads a new neural network model from the data node Model Pool, performs multiple rounds of competition with the current best neural network model, calculates the win rate, and iterates the new neural network model with the high win rate into the current best neural network model; Para 0013, in the reinforcement learning framework based on P2P networks, after iterating a new neural network model with a high win rate into the current best neural network model, the current best neural network model will also provide inference services to the outside world through an RPC remote procedure call protocol or the Internet].

Yuan is analogous to the claimed invention as both relate to distributed, decentralized learning. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huh's teachings to incorporate the teachings of Yuan and process the two models on individual processing units, where the second processing unit is remote, in order to provide [Yuan, 0007] a learning framework based on P2P networks that has a low threshold, is accessible to everyone, has a high upper limit, can solve many large problems, is flexible, and is compatible with various large learning algorithms.

Regarding claim 3, Huh-Yuan teach the limitations of claim 1.
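Claim 3, taken up next, recites transmitting a remote procedure call (RPC) to the teacher model to generate each output. A minimal sketch of that interaction; the stub class and transport function are hypothetical illustrations (a real deployment would use an RPC framework such as the one Yuan describes):

```python
class TeacherStub:
    """Hypothetical RPC client: forwards a request to the remote
    processing unit hosting the teacher model."""

    def __init__(self, transport):
        self.transport = transport

    def generate(self, sample):
        # One remote call per teacher output.
        return self.transport("teacher.generate", sample)

def local_transport(method, payload):
    # Stand-in for the wire; the "remote" teacher here is just 3*x.
    assert method == "teacher.generate"
    return 3.0 * payload

teacher = TeacherStub(local_transport)
output = teacher.generate(2.0)  # teacher output obtained via RPC
```

Swapping `local_transport` for a real network transport is the only change needed to make the teacher genuinely remote, which is the separation the claim relies on.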
Yuan further teaches transmitting (Para 0008, iterates the new neural network model… into the current best neural network model) a remote procedure call (RPC) to the teacher neural network model (Para 0008, an evaluator module) to generate each output (Para 0008, win rate) [Para 0008, A reinforcement learning framework based on a P2P network includes a simulator module, a trainer module, and an evaluator module; wherein, the simulator module exists in each computing node, loads the current best neural network model, performs self-play training to generate a large amount of data, and uploads the generated data to a common data node ReplayBuffer; the trainer module samples data from the common data node ReplayBuffer, trains a new neural network model, and saves the trained neural network model in the data node Model Pool; the evaluator module loads a new neural network model from the data node Model Pool, performs multiple rounds of competition with the current best neural network model, calculates the win rate, and iterates the new neural network model with the high win rate into the current best neural network model; Para 0013, in the reinforcement learning framework based on P2P networks, after iterating a new neural network model with a high win rate into the current best neural network model, the current best neural network model will also provide inference services to the outside world through an RPC remote procedure call protocol or the Internet].

Yuan is analogous to the claimed invention as both relate to distributed, decentralized learning. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huh's teachings to incorporate the teachings of Yuan and provide an RPC to the teacher network for [Yuan, 0007] ease of remote accessibility.

Regarding claim 6, Huh-Yuan teach the limitations of claim 1.
Huh further teaches wherein the teacher neural network model comprises a trained model [Para 0015, S.sup.S is the sparse activation maps of the trained model of the teacher network].

Regarding claim 7, Huh-Yuan teach the limitations of claim 1.

Yuan further teaches wherein the first processing unit belongs to a first entity and the second processing unit belongs to a second entity different from the first entity [Para 0008, A reinforcement learning framework based on a P2P network includes a simulator module, a trainer module, and an evaluator module; wherein, the simulator module exists in each computing node, loads the current best neural network model, performs self-play training to generate a large amount of data, and uploads the generated data to a common data node ReplayBuffer; the trainer module samples data from the common data node ReplayBuffer, trains a new neural network model, and saves the trained neural network model in the data node Model Pool; the evaluator module loads a new neural network model from the data node Model Pool, performs multiple rounds of competition with the current best neural network model, calculates the win rate, and iterates the new neural network model with the high win rate into the current best neural network model; Para 0013, in the reinforcement learning framework based on P2P networks, after iterating a new neural network model with a high win rate into the current best neural network model, the current best neural network model will also provide inference services to the outside world through an RPC remote procedure call protocol or the Internet].

Yuan is analogous to the claimed invention as both relate to distributed, decentralized learning.
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huh's teachings to incorporate the teachings of Yuan and provide separate processing entities in order to improve processing efficiency.

Regarding claim 11, Huh teaches:

A system [Para 0022, The following detailed description is provided to aid in a comprehensive understanding of the methods, apparatus and/or systems described herein.] comprising: data processing hardware; and memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations [Para 0028, The computing device 12 includes at least one processor 14, a computer-readable storage medium 16, and a communication bus 18. The processor 14 may cause the computing device 12 to operate according to the exemplary embodiment described above. For example, the processor 14 may execute one or more programs stored on the computer-readable storage medium 16.
The one or more programs may include one or more computer-executable instructions, which, when executed by the processor 14, may be configured so that the computing device 12 performs operations according to the exemplary embodiment] comprising:

obtaining a plurality of training samples [Fig 2, Input Data];

generating, using a student neural network model, a first output (Para 0007, sparse activation maps) based on a first training sample (Para 0007, input data) of the plurality of training samples [Para 0007, extracting student sparse activation maps (SAMs) by extracting a feature map from a learning model of the student network based on the input data and filtering the extracted feature map];

generating, using a teacher neural network model, a second output based on the first training sample of the plurality of training samples [Para 0007, extracting teacher sparse activation maps (SAMs) by extracting a feature map from a learning model of the teacher network based on input data and filtering the extracted feature map];

determining, based on the first output and the second output, a first loss [Para 0007, computing a loss function by comparing the extracted teacher sparse activation maps with the extracted student sparse activation maps];

adjusting, based on the first loss, one or more parameters of the student neural network model [Para 0007, the computing device 12 can update the learning model of the student network through weights that minimize the computed loss function];

after adjusting the one or more parameters of the student neural network model, generating, using the student neural network model, a third output based on a second training sample (Para 0059, augmented data) of the plurality of training samples [Para 0057, training can be repeated until the learning model of the learning network has the same neuron activation boundaries as the learning model of the teacher network; Para 0059, the learning model of the student network can be trained by using not only input data but also augmented data. Here, augmented data can be generated by transforming input data; Para 0007, extracting student sparse activation maps (SAMs) by extracting a feature map from a learning model of the student network based on the input data and filtering the extracted feature map];

generating, using the teacher neural network model, a fourth output based on the second training sample (Para 0059, augmented data) of the plurality of training samples [Para 0033, As illustrated in FIG. 2, the learning model of a knowledge distillation-based teacher-student network includes a teacher network and a student network; Para 0057, training can be repeated until the learning model of the learning network has the same neuron activation boundaries as the learning model of the teacher network; Para 0059, augmented data can be generated by transforming input data; Para 0007, computing a loss function by comparing the extracted teacher sparse activation maps with the extracted student sparse activation maps];

determining, based on the third output and the fourth output, a second loss [Para 0057, training can be repeated until the learning model of the learning network has the same neuron activation boundaries as the learning model of the teacher network; Para 0007, computing a loss function by comparing the extracted teacher sparse activation maps with the extracted student sparse activation maps]; and

readjusting, based on the second loss, the one or more parameters of the student neural network model [Para 0062, the computing device 12 can update the student network using the loss of knowledge distillation between the teacher network and the student network, cross-entropy loss between predicted values and actual values, loss of knowledge distillation between teacher assistant networks and student assistant networks, and loss between the sparse activation maps of the teacher network and the sparse activation maps of the student network; Para 0007, the computing device 12 can update the learning model of the student network through weights that minimize the computed loss function].

Huh does not teach using a student neural network model executing on a first processing unit, and using a teacher neural network model executing on a second processing unit, the second processing unit remote from the first processing unit.

Yuan teaches using a student neural network model (Para 0008, the simulator module) executing on a first processing unit (Para 0008, the simulator module exists in each computing node), and using a teacher neural network model executing on a second processing unit (Para 0008, an evaluator module), the second processing unit remote from the first processing unit (Para 0013, in the reinforcement learning framework based on P2P networks… the current best neural network model will also provide inference services to the outside world through an RPC remote procedure call protocol) [Para 0008, A reinforcement learning framework based on a P2P network includes a simulator module, a trainer module, and an evaluator module; wherein, the simulator module exists in each computing node, loads the current best neural network model, performs self-play training to generate a large amount of data, and uploads the generated data to a common data node ReplayBuffer; the trainer module samples data from the common data node ReplayBuffer, trains a new neural network model, and saves the trained neural network model in the data node Model Pool; the evaluator module loads a new neural network model from the data node Model Pool, performs multiple rounds of competition with the current best neural network model, calculates the win rate, and iterates the new neural network model with the high win rate into the current best neural network model; Para 0013, in the reinforcement learning framework based on P2P networks, after iterating a new neural network model with a high win rate into the current best neural network model, the current best neural network model will also provide inference services to the outside world through an RPC remote procedure call protocol or the Internet].

Yuan is analogous to the claimed invention as both relate to distributed, decentralized learning. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huh's teachings to incorporate the teachings of Yuan and process the two models on individual processing units, where the second processing unit is remote, in order to provide [Yuan, 0007] a learning framework based on P2P networks that has a low threshold, is accessible to everyone, has a high upper limit, can solve many large problems, is flexible, and is compatible with various large learning algorithms.

Claims 13, 16, and 17 are system claims that recite identical limitations to method claims 3, 6, and 7, respectively. Therefore, claims 13, 16, and 17 are rejected using the same rationale as claims 3, 6, and 7, respectively.

Claim(s) 2, 4, 12, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huh in view of Yuan, and in further view of Jafari et al. (US 20230222326 A1), hereinafter Jafari.

Regarding claim 2, Huh-Yuan teach the limitations of claim 1. Huh-Yuan do not teach the first processing unit and second processing unit each comprising a respective tensor processing unit.

Jafari teaches the first processing unit and second processing unit each comprising a respective tensor processing unit [Para 0055, The processing system 1200 may include one or more processing devices 1202, such as… a tensor processing unit (TPU)].

Jafari is analogous to the claimed invention as both relate to knowledge distillation.
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huh's teachings to incorporate the teachings of Jafari and provide tensor processing units in order to optimize the model for cost-efficient scaling.

Regarding claim 4, Huh-Yuan teach the limitations of claim 1. Huh-Yuan do not teach the first output, second output, third output, and fourth output each comprising a respective logit.

Jafari further teaches the first output, second output, third output, and fourth output each comprising a respective logit [Para 0035, the student neural network model 204 is trained using a first loss function L.sub.AKD that has the objective of minimizing a difference between the outputs (e.g., logits generated by a final layer of a neural network model, before a softmax layer of the neural network model) generated by the student neural network model].

Jafari is analogous to the claimed invention as both relate to knowledge distillation. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huh's teachings to incorporate the teachings of Jafari and provide outputs as logits in order to improve model robustness by utilizing an output with enhanced interpretability.

Claims 12 and 14 are system claims that recite identical limitations to method claims 2 and 4, respectively. Therefore, claims 12 and 14 are rejected using the same rationale as claims 2 and 4, respectively.

Claim(s) 5 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huh in view of Yuan, and in further view of Passban et al. (US 20220076136 A1), hereinafter Passban.

Regarding claim 5, Huh-Yuan teach the limitations of claim 1, including adjusting the one or more parameters of the student neural network model (Huh, Para 0007) and the first loss (Huh, Para 0007).
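The logit outputs recited in claim 4, mapped above, typically feed a distillation loss computed on temperature-softened softmax distributions. This is a standard formulation from the knowledge-distillation literature, not asserted to be the applicant's or any cited reference's exact method; the logit values and temperature are illustrative:

```python
import math

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution (standard KD practice).
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): zero when the distributions match, positive otherwise.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_logits = [2.0, 0.5, -1.0]  # pre-softmax outputs (hypothetical)
student_logits = [1.5, 0.8, -0.5]
T = 2.0
distill_loss = kl_divergence(softmax(teacher_logits, T),
                             softmax(student_logits, T))
```

The loss is strictly positive while the student's softened distribution differs from the teacher's, and vanishes when the logits agree, which is what drives the student toward the teacher during training.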
Huh-Yuan do not teach determining, based on the loss, a gradient.

Passban teaches determining, based on loss, a gradient [Para 0113, Referring back to FIG. 4, at step 408, the gradients with respect to the loss values are computed].

Passban is analogous to the claimed invention as both relate to knowledge distillation. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huh's teachings to incorporate the teachings of Passban and provide gradients in order to improve prediction accuracy by minimizing errors.

Claim 15 is a system claim that recites identical limitations to method claim 5. Therefore, claim 15 is rejected using the same rationale as claim 5.

Claim(s) 8, 9, 18, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huh in view of Yuan, and in further view of Jafari et al. (US 20210383238 A1), hereinafter Jafari(2).

Regarding claim 8, Huh-Yuan teach the limitations of claim 1, including the first training sample. Huh-Yuan do not teach an unlabeled training sample.

Jafari(2) teaches an unlabeled training sample [Para 0047, In the example of FIG. 4, a trained teacher NN model 410, untrained student NN model 412 and initial unlabeled input values {x.sub.1, . . . , x.sub.i, . . . , x.sub.N} corresponding to an input training dataset X].

Jafari(2) is analogous to the claimed invention as both relate to knowledge distillation. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huh's teachings to incorporate the teachings of Jafari(2) and provide an unlabeled training sample in order to reduce cost by removing the need for human labeling.

Regarding claim 9, Huh-Yuan teach the limitations of claim 1, including the second output (Huh, Para 0007).
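Claims 8 and 9 concern unlabeled training samples that the teacher then labels. A minimal pseudo-labeling sketch; the threshold classifier standing in for the trained teacher is hypothetical, not drawn from the record:

```python
def teacher_classify(x):
    # Hypothetical trained teacher acting as the labeler.
    return 1 if x > 0.0 else 0

unlabeled = [-2.0, 0.5, 3.0]  # unlabeled training samples (claim 8)
# Teacher output becomes the label for each sample (claim 9),
# yielding a labeled training dataset.
labeled = [(x, teacher_classify(x)) for x in unlabeled]
```

This is the sense in which the teacher's outputs substitute for human annotation in the examiner's motivation statement.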
Huh-Yuan do not teach generating, based on output from the teacher neural network model, a label for the first training sample.

Jafari(2) further teaches generating, based on output from the teacher neural network model, a label for the first training sample [Para 0048, teacher NN model 410 is first used to compute a set of output values {y.sub.1, . . . , y.sub.i, . . . , y.sub.N} that correspond to the input values {x.sub.1, . . . , x.sub.i, . . . , x.sub.N}, providing a labelled training dataset X.].

Jafari(2) is analogous to the claimed invention as both relate to knowledge distillation. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huh's teachings to incorporate the teachings of Jafari(2) and provide a label for training samples in order to improve prediction precision and reliability by providing verified validation data.

Claims 18 and 19 are system claims that recite identical limitations to method claims 8 and 9, respectively. Therefore, claims 18 and 19 are rejected using the same rationale as claims 8 and 9, respectively.

Claim(s) 10 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huh in view of Yuan and Jafari(2), and in further view of Chopde et al. (US 20230419170 A1), hereinafter Chopde.

Regarding claim 10, Huh-Yuan-Jafari(2) teach the limitations of claim 9. Huh-Yuan do not teach generating, using a second student neural network model executing on a third processing unit, a fifth output based on the labeled first training sample; determining, based on the label and the fifth output, a third loss; and adjusting, based on the third loss, the second student neural network model.
Chopde teaches generating, using a second student neural network model executing on a third processing unit, a fifth output (Para 0021, Para 0022, a second data output by the trained student model) based on the labeled first training sample (Para 0021, finetuned teacher model using the labeled data… to output a trained student model) [Para 0011, a system for efficient machine learning includes a data analytics system… The data analytics system is executed by a computer processor configured to apply object detection and classification and deep learning algorithms to detect object information captured by the image; Para 0021, performing a semi-supervised learning guided by the finetuned teacher model using the labeled data, the unlabeled data, and the test data to output a trained student model; Para 0022, acquiring a first data output by the finetuned teacher model and a second data output by the trained student model];

determining, based on the label and the fifth output, a third loss [Para 0023, obtaining a distillation loss value according to a distillation loss function based on a difference between the first data output and the second data output]; and

adjusting, based on the third loss, the second student neural network model [Para 0024, updating parameters of the trained student model based on the distillation loss value].

Chopde is analogous to the claimed invention as both relate to knowledge distillation. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huh's teachings to incorporate the teachings of Chopde and provide a second student model in order to increase diversity by capturing different aspects of the teacher's knowledge.

Claim 20 is a system claim that recites identical limitations to method claim 10. Therefore, claim 20 is rejected using the same rationale as claim 10.
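Claim 10's second student, trained against the teacher-generated label with its own (third) loss, follows the same pattern as the first loop. A toy sketch with a hypothetical scalar model; the "third processing unit" is elided here:

```python
def teacher_label(x):
    # Teacher output used as the label for the first training sample
    # (hypothetical trained teacher).
    return 3.0 * x

x = 2.0                  # the labeled first training sample
label = teacher_label(x)

w2, lr = 0.0, 0.05       # second student's parameter and step size
fifth_output = w2 * x    # fifth output, from the second student
third_loss = (fifth_output - label) ** 2
# Adjust the second student based on the third loss.
w2 -= lr * 2.0 * (fifth_output - label) * x
```

The second student is driven toward the teacher's label independently of the first student, which is the "diversity" rationale in the motivation statement above.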
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED RAYHAN AHMED whose telephone number is (571) 270-0286. The examiner can normally be reached Mon-Fri ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SYED RAYHAN AHMED/
Examiner, Art Unit 2126

/DAVID YI/
Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

May 05, 2023
Application Filed
Mar 08, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12450891
IMAGE CLASSIFIER COMPRISING A NON-INJECTIVE TRANSFORMATION
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 99% (+50.0%)
Median Time to Grant: 4y 4m
PTA Risk: Low
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
