Prosecution Insights
Last updated: April 19, 2026
Application No. 18/551,053

CHANNEL STATE INFORMATION TRANSMISSION METHOD AND APPARATUS, TERMINAL, BASE STATION, AND STORAGE MEDIUM

Status: Final Rejection (§102)
Filed: Sep 18, 2023
Examiner: JENSEN, NICHOLAS A
Art Unit: 2472
Tech Center: 2400 — Computer Networks
Assignee: ZTE CORPORATION
OA Round: 2 (Final)
Grant Probability: 55% (Moderate)
OA Rounds: 3-4
To Grant: 5y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 55% (81 granted / 148 resolved; -3.3% vs TC avg)
Interview Lift: +57.7% for resolved cases with interview (strong)
Typical Timeline: 5y 6m avg prosecution; 9 currently pending
Career History: 157 total applications across all art units
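The figures above can be reproduced with simple arithmetic. A minimal sketch follows; note that only the totals (81 granted / 148 resolved, and the 99% with-interview rate) appear on this page, while the ~41.3% without-interview rate is an assumption back-derived from the displayed +57.7% lift.

```python
# Hedged sketch of the examiner-intelligence arithmetic. The with/without
# interview split rates are illustrative assumptions back-derived from the
# displayed lift; only the resolved/granted totals come from the page.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Lift in percentage points for cases that had an examiner interview."""
    return rate_with - rate_without

career = allow_rate(81, 148)        # ~54.7%, displayed rounded as 55%
lift = interview_lift(99.0, 41.3)   # assumed split; yields the +57.7 shown

print(f"Career allow rate: {career:.1f}%")
print(f"Interview lift: +{lift:.1f} pts")
```

The lift is expressed in percentage points (with-interview rate minus without-interview rate), not as a relative multiplier.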

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 46.0% (+6.0% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 148 resolved cases.
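The per-statute comparison above is just a rate minus a baseline. A minimal sketch, assuming the Tech Center averages are back-derived from the displayed deltas (each works out to 40.0%) rather than taken from independent data:

```python
# Hedged sketch of the statute-specific comparison. The tc_avg values are
# back-derived from the displayed deltas (rate - delta = 40.0 in each case),
# not independently sourced Tech Center data.

examiner = {"101": 11.0, "103": 46.0, "102": 23.2, "112": 14.6}
tc_avg = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

# Delta in percentage points vs the Tech Center average estimate.
deltas = {s: round(examiner[s] - tc_avg[s], 1) for s in examiner}

for statute, rate in examiner.items():
    print(f"§{statute}: {rate:.1f}% ({deltas[statute]:+.1f}% vs TC avg)")
```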

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/26/2025 have been fully considered but they are not persuasive.

Applicant asserts Chavva does not disclose an intermediate step, i.e., first generating the “first CSI” to be compressed according to a set of configuration parameters (such as the “channel base vector” of the present application). See 12/26/2025 remarks, page 2, last paragraph. Examiner respectfully disagrees. The instant claims do not require compressing the first CSI. Claim 5 has a Markush group that includes the compression ratio, but there are no limitations that require compression. Limitations from the specification are not given patentable weight when giving the claims the broadest reasonable interpretation.

Applicant asserts Chavva does not disclose a technical solution of acquiring the “network parameter set” according to the channel state information parameter. See 12/26/2025 remarks, page 2, last paragraph. Examiner respectfully disagrees. Regarding acquiring a network parameter set according to the CSI parameter: Chavva [118]–[121], [168]–[171], CodebookConfig/CSI-ReportConfig provide structured settings (e.g., type-1/2, ports, subband parameters); while not called a “network parameter set,” these govern the neural network processing and feedback content. Regarding generating first CSI according to the CSI parameter: Chavva [103]–[105], [107]–[109], [151]–[155], the UE computes CSI parameters (RI, PMI, CQI, CRI) based on CSI-RS/SSB and the configuration; this corresponds to generating “first CSI.”

Applicant asserts the technical solution of the present application is essentially different from Chavva in the manner of the “channel state information parameter”, as well as in the definition and generation process of the “first CSI” and the “second CSI”.
These differences bring about significant technical effects of enhancing system flexibility, improving compression efficiency, and increasing feedback accuracy, which are not obvious to a person skilled in the art. Thus, Applicant asserts Chavva does not disclose “acquiring a channel state information (CSI) parameter, and generating first CSI and acquiring a network parameter set according to the CSI parameter; generating second CSI according to the first CSI and the network parameter set”. Examiner respectfully disagrees. Unclaimed features are not given patentable weight and should be claimed if they are critical to technical effects such as compression efficiency and feedback accuracy.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-17 and 21-23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chavva et al. (WO 2020/213964 A1), hereinafter Chavva.

Regarding claim 1, Chavva discloses: A channel state information (CSI) transmission method, comprising: acquiring a channel state information (CSI) parameter, ([104]: "At step 101, a CSI feedback configuration is initialized. The gNB can send a feedback configuration for CSI-RS to the UE. The feedback configuration, received by the UE, includes CSI-MeasConfig, CSI-ResourceConfig, and CSI-ReportConfig. The feedback configuration informs the UE about the feedback parameters that are to be included in the CSI report, periodicity of transmission of the CSI report, time/frequency resource allocation for CSI-RS, and port information for receiving the CSI-RS."
[150]: "The CSI-MeasConfig IE can indicate whether the UE 601 needs to perform at least one of interference measurement and channel measurement. The CSI-ResourceConfig IE can include information pertaining to allocation of time/frequency resources for CSI-RS reception such as time slots in which the UE 601 can expect to receive the CSI-RS, frequency of the CSI-RS, and ports through which the CSI-RS can be received. The CSI-ReportConfig IE can include time slots in which the UE 601 can send the CSI report, feedback parameters to be included in the CSI report, and so on. The CodebookConfig indicates to the UE 601 as to whether the CSI feedback configuration, provided to the UE 601, is pertaining to type-1 CSI or type-2 CSI." Fig 8 item 801)

and generating first CSI and acquiring a network parameter set according to the CSI parameter; ([103]–[105], [107]–[109], [151]–[155]: the UE computes CSI parameters (RI, PMI, CQI, CRI) based on CSI-RS/SSB and the configuration; this corresponds to generating “first CSI.” [118]–[121], [168]–[171]: CodebookConfig/CSI-ReportConfig provide structured settings (e.g., type-1/2, ports, subband parameters); while not called a “network parameter set,” these govern the neural network processing and feedback content. Fig 8 items 802, 803. [153]: "computing, by the UE 601, feedback parameters based on the information included in the CSI-RS and/or SSB. The embodiments compute the feedback parameters periodically, wherein the periodicity is indicated in the CSI-ResourceConfig and CSI-ReportConfig IEs. The embodiments include computing the feedback parameters based on at least one of the estimated channel metrics, baseband metrics, sensor measurements, and RX beam pattern information. Examples of feedback parameters include, but not limited to, PMI, CQI, CRI, RI, and so on." [165]: "As depicted in FIG. 10, consider that the hierarchical neural network model is configured to compute and predict RI, PMI, and CQI, based on the CSI feedback configuration." [166]: "The hierarchical neural network model can be trained to customize averaging of weights of the individual neural networks, based on at least one of number of ports and number of resource blocks;... Each neural network can be individually trained to learn optimal averaging weights based on the number of ports and the number of resource blocks." Further, Chavva discloses utilizing the CSI feedback configuration to measure the CSI (the CSI feedback configuration includes the ports and the frequency of the CSI-RSs) and using the measured CSI and the computed number of ports and number of RBs to determine the weights of the ML network. Therefore, the CSI feedback configuration as well as the number of ports and number of RBs disclose the CSI parameter of claim 1.)

generating second CSI according to the first CSI and the network parameter set; (Fig 10: PMI and CQI at the output of the respective ML networks; Fig 8 items 804, 805. [166]: "The output of the neural network, which is trained to predict the optimal rank (RI), the number of resource blocks and the number of ports, can be inputted to the neural networks, which are trained to predict the CQI and PMI respectively. The RI, along with the preprocessed channel information, can be utilized for predicting the CQI and PMI.")

and transmitting the second CSI (130). (Fig 8 item 806, sending a CSI report to gNB.)

Regarding claim 2, Chavva discloses: The method of claim 1, wherein the CSI parameter comprises at least one of: a channel base vector, a channel base vector number, a maximum port number, a channel type indication, a codebook component type, a codebook component number, a bandwidth indication, or a frequency domain unit.
([104], [150], [153]: the port and frequency of each CSI-RS are configured in the CSI feedback configuration for the UE to measure channel information. The configured ports result implicitly in a maximum number of ports.)

Regarding claim 3, Chavva discloses: The method of claim 1, further comprising: determining a first CSI according to at least one of a channel base vector, a channel base vector number, or a maximum port number in the CSI parameter. ([93]: "The embodiments include estimating the feedback parameters using a Machine Learning (ML) model, such as a neural network. The computation of the feedback parameters is based on the baseband metrics, channel metrics, RX beam pattern information, and the sensor measurements. The embodiments include extracting feature vectors by processing the information in the measurement database. The embodiments include inputting the feature vectors to the ML model for computing the feedback parameters." The determined feature vectors, which are input to each ML network to determine the feedback parameters such as RI in Fig 10 (disclosing first CSI), disclose the channel base vector of claim 3.)

Regarding claim 4, Chavva discloses: The method of claim 1, further comprising: determining the network parameter set according to at least one of the channel type indication, the codebook component type, or the codebook component number in the CSI parameter. ([117]: "The gNB 607 can include a feedback configuration for CSI-RS in the RRC message. The feedback configuration comprises... CodebookConfig," [120]: "The CodebookConfig can provide an indication to the UE 601 whether the CSI feedback configuration is Type-1 or Type-2. For both Type-1 and Type-2 CSI reporting in NR, the gNB 607 can specify CSI reporting configuration in the CodebookConfig." [165]: "As depicted in FIG. 10, consider that the hierarchical neural network model is configured to compute and predict RI, PMI, and CQI, based on the CSI feedback configuration.")

Regarding claim 5, Chavva discloses: The method of claim 4, wherein the network parameter set comprises at least one of: a compression ratio, an activation function, a network layer number, a network layer mapping, a network layer weight, a network layer offset, or a network layer weight normalization coefficient. ([60]: "wherein matching the predicted values and the actual values comprises updating at least one weight associated with at least one activation element of at least one layer of the neural network (602c), wherein the at least one weight is updated based on at least one of channel metrics, RX beam pattern information, baseband metrics, sensor measurements, a difference between the predicted values and the actual values, and PDSCH transmission error statistics.") Therefore, Chavva discloses that the weights are adapted based on the predicted values, which themselves depend on the CSI feedback configuration (including the CSI codebook type).

Regarding claim 6, Chavva discloses: The method of claim 1, wherein generating second CSI according to the first CSI and the network parameter set further comprises: determining a parameter value of an encoder of a neural network according to the network parameter set, ([165]: "As depicted in FIG. 10, consider that the hierarchical neural network model is configured to compute and predict RI, PMI, and CQI, based on the CSI feedback configuration." [166]: "The hierarchical neural network model can be trained to customize averaging of weights of the individual neural networks, based on at least one of number of ports and number of resource blocks;...
Each neural network can be individually trained to learn optimal averaging weights based on the number of ports and the number of resource blocks." Further, Chavva discloses utilizing the CSI feedback configuration to measure the CSI (the CSI feedback configuration includes the ports and the frequency of the CSI-RSs) and using the measured CSI and the computed number of ports and number of RBs to determine the weights of the ML network. Therefore, the CSI feedback configuration as well as the number of ports and number of RBs disclose the CSI parameter.) and determining second CSI according to the first CSI and the encoder. (Fig 10: PMI and CQI at the output of the respective ML networks; Fig 8 items 804, 805. [166]: "The output of the neural network, which is trained to predict the optimal rank (RI), the number of resource blocks and the number of ports, can be inputted to the neural networks, which are trained to predict the CQI and PMI respectively. The RI, along with the preprocessed channel information, can be utilized for predicting the CQI and PMI.")

Regarding claim 7, Chavva discloses: The method of claim 1, further comprising: determining a normalization parameter of the second CSI according to the first CSI and the network parameter set. ([164]: "The input can be processed by a data pre-processing layer, which can use methods such as data normalization, in order to generate input feature vectors. The joint neural network model can estimate and predict feedback parameters such as CRI, RI, PMI, CQI, LI, and L1-RSRP." When the preprocessing performs normalization, the subsequent determining of predicted CSI based on the measured CSI and the neural network is implicitly performed through normalized values.)
Regarding claim 8, Chavva discloses: The method of claim 2, wherein the channel base vector number is determined by a number NF of frequency domain units. ([166]: "The hierarchical neural network model can be trained to customize averaging of weights of the individual neural networks, based on at least one of number of ports and number of resource blocks;... Each neural network can be individually trained to learn optimal averaging weights based on the number of ports and the number of resource blocks.")

Regarding claim 9, Chavva discloses: The method of claim 8, wherein determining the channel base vector number comprises: determining the channel base vector number according to a frequency domain unit value set corresponding to the number NF of frequency domain units. ([166]: "The hierarchical neural network model can be trained to customize averaging of weights of the individual neural networks, based on at least one of number of ports and number of resource blocks;... Each neural network can be individually trained to learn optimal averaging weights based on the number of ports and the number of resource blocks.")

Regarding claim 10, Chavva discloses: A channel state information (CSI) transmission method, comprising: configuring a channel state information (CSI) parameter, ([104]: "At step 101, a CSI feedback configuration is initialized. The gNB can send a feedback configuration for CSI-RS to the UE. The feedback configuration, received by the UE, includes CSI-MeasConfig, CSI-ResourceConfig, and CSI-ReportConfig. The feedback configuration informs the UE about the feedback parameters that are to be included in the CSI report, periodicity of transmission of the CSI report, time/frequency resource allocation for CSI-RS, and port information for receiving the CSI-RS." [150]: "The CSI-MeasConfig IE can indicate whether the UE 601 needs to perform at least one of interference measurement and channel measurement.
The CSI-ResourceConfig IE can include information pertaining to allocation of time/frequency resources for CSI-RS reception such as time slots in which the UE 601 can expect to receive the CSI-RS, frequency of the CSI-RS, and ports through which the CSI-RS can be received. The CSI-ReportConfig IE can include time slots in which the UE 601 can send the CSI report, feedback parameters to be included in the CSI report, and so on. The CodebookConfig indicates to the UE 601 as to whether the CSI feedback configuration, provided to the UE 601, is pertaining to type-1 CSI or type-2 CSI." Fig 8 item 801) and generating first CSI and acquiring a network parameter set according to the CSI parameter; ([103]–[105], [107]–[109], [151]–[155]: the UE computes CSI parameters (RI, PMI, CQI, CRI) based on CSI-RS/SSB and the configuration; this corresponds to generating “first CSI.” [118]–[121], [168]–[171]: CodebookConfig/CSI-ReportConfig provide structured settings (e.g., type-1/2, ports, subband parameters); while not called a “network parameter set,” these govern the neural network processing and feedback content. Fig 8 items 802, 803. [153]: "computing, by the UE 601, feedback parameters based on the information included in the CSI-RS and/or SSB. The embodiments compute the feedback parameters periodically, wherein the periodicity is indicated in the CSI-ResourceConfig and CSI-ReportConfig IEs. The embodiments include computing the feedback parameters based on at least one of the estimated channel metrics, baseband metrics, sensor measurements, and RX beam pattern information. Examples of feedback parameters include, but not limited to, PMI, CQI, CRI, RI, and so on." [165]: "As depicted in FIG. 10, consider that the hierarchical neural network model is configured to compute and predict RI, PMI, and CQI, based on the CSI feedback configuration."
[166]: "The hierarchical neural network model can be trained to customize averaging of weights of the individual neural networks, based on at least one of number of ports and number of resource blocks;... Each neural network can be individually trained to learn optimal averaging weights based on the number of ports and the number of resource blocks." Further, Chavva discloses utilizing the CSI feedback configuration to measure the CSI (the CSI feedback configuration includes the ports and the frequency of the CSI-RSs) and using the measured CSI and the computed number of ports and number of RBs to determine the weights of the ML network. Therefore, the CSI feedback configuration as well as the number of ports and number of RBs disclose the CSI parameter of claim 10.) wherein the CSI parameter is used by a terminal device to generate first CSI and/or used by the terminal device to acquire a network parameter set; (Fig 10: PMI and CQI at the output of the respective ML networks; Fig 8 items 804, 805. [166]: "The output of the neural network, which is trained to predict the optimal rank (RI), the number of resource blocks and the number of ports, can be inputted to the neural networks, which are trained to predict the CQI and PMI respectively. The RI, along with the preprocessed channel information, can be utilized for predicting the CQI and PMI.") receiving second CSI transmitted by the terminal device; (Fig 10: PMI and CQI at the output of the respective ML networks; Fig 8 items 804, 805. [166]: "The output of the neural network, which is trained to predict the optimal rank (RI), the number of resource blocks and the number of ports, can be inputted to the neural networks, which are trained to predict the CQI and PMI respectively. The RI, along with the preprocessed channel information, can be utilized for predicting the CQI and PMI.") and generating third CSI according to the network parameter set and the second CSI.
(Fig 15b, [186]: "The gNB 607 can decode the encoded CSI report. The decoding involves determining whether mode '1' or mode '2' was used for encoding the CSI report. If it is determined that mode '1' was used, the gNB 607 can decode 'K' symbols of the encoded CSI report to 'N' symbols of the original CSI report, generated by the UE 601. If it is determined that mode '2' was used, the gNB 607 can decode 'P' symbols to 'N' symbols to obtain the original CSI report.")

Regarding claim 11, Chavva discloses: The method of claim 10, wherein the CSI parameter comprises at least one of: a channel base vector, a channel base vector number, a maximum port number, a channel type indication, a codebook component type, a codebook component number, a bandwidth indication, or a frequency domain unit.

Regarding claim 12, Chavva discloses: The method of claim 10, further comprising: acquiring the network parameter set according to at least one of a channel type indication, a codebook component type, or a codebook component number in the CSI parameter. ([104], [150], [153]: the port and frequency of each CSI-RS are configured in the CSI feedback configuration for the UE to measure channel information. The configured ports result implicitly in a maximum number of ports.)

Regarding claim 13, Chavva discloses: The method of claim 12, wherein the network parameter set comprises at least one of: a compression ratio, an activation function, a network layer number, a network layer mapping, a network layer weight, a network layer offset, or a network layer weight normalization coefficient. ([93]: "The embodiments include estimating the feedback parameters using a Machine Learning (ML) model, such as a neural network. The computation of the feedback parameters is based on the baseband metrics, channel metrics, RX beam pattern information, and the sensor measurements. The embodiments include extracting feature vectors by processing the information in the measurement database.
The embodiments include inputting the feature vectors to the ML model for computing the feedback parameters." The determined feature vectors, which are input to each ML network to determine the feedback parameters such as RI in Fig 10 (disclosing first CSI), disclose the channel base vector of claim 13.)

Regarding claim 14, Chavva discloses: The method of claim 10, wherein generating third CSI according to the network parameter set and the second CSI further comprises: determining a parameter value of a decoder of a neural network according to the network parameter set, ([165]: "As depicted in FIG. 10, consider that the hierarchical neural network model is configured to compute and predict RI, PMI, and CQI, based on the CSI feedback configuration." [166]: "The hierarchical neural network model can be trained to customize averaging of weights of the individual neural networks, based on at least one of number of ports and number of resource blocks;... Each neural network can be individually trained to learn optimal averaging weights based on the number of ports and the number of resource blocks." Further, Chavva discloses utilizing the CSI feedback configuration to measure the CSI (the CSI feedback configuration includes the ports and the frequency of the CSI-RSs) and using the measured CSI and the computed number of ports and number of RBs to determine the weights of the ML network. Therefore, the CSI feedback configuration as well as the number of ports and number of RBs disclose the CSI parameter.) and determining third CSI according to the second CSI and the decoder. (Fig 10: PMI and CQI at the output of the respective ML networks; Fig 8 items 804, 805. [166]: "The output of the neural network, which is trained to predict the optimal rank (RI), the number of resource blocks and the number of ports, can be inputted to the neural networks, which are trained to predict the CQI and PMI respectively.
The RI, along with the preprocessed channel information, can be utilized for predicting the CQI and PMI.")

Regarding claim 15, Chavva discloses: The method of claim 12, further comprising: receiving a normalization parameter of the second CSI. ([164]: "The input can be processed by a data pre-processing layer, which can use methods such as data normalization, in order to generate input feature vectors. The joint neural network model can estimate and predict feedback parameters such as CRI, RI, PMI, CQI, LI, and L1-RSRP." When the preprocessing performs normalization, the subsequent determining of predicted CSI based on the measured CSI and the neural network is implicitly performed through normalized values.)

Regarding claim 16, Chavva discloses: The method of claim 11, wherein the channel base vector number is determined by a number NF of frequency domain units. ([166]: "The hierarchical neural network model can be trained to customize averaging of weights of the individual neural networks, based on at least one of number of ports and number of resource blocks;... Each neural network can be individually trained to learn optimal averaging weights based on the number of ports and the number of resource blocks.")

Regarding claim 17, Chavva discloses: The method of claim 16, wherein determining the channel base vector number comprises: determining the channel base vector number according to a frequency domain unit value set corresponding to the number NF of frequency domain units. ([166]: "The hierarchical neural network model can be trained to customize averaging of weights of the individual neural networks, based on at least one of number of ports and number of resource blocks;...
Each neural network can be individually trained to learn optimal averaging weights based on the number of ports and the number of resource blocks.")

Regarding claim 21, Chavva discloses: A terminal device, comprising: one or more processors; and a memory configured to store one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the CSI transmission method of claim 1. ([228] and Fig. 23 disclose user equipment with a memory and processor for executing the method of claim 1 as mapped above.)

Regarding claim 22, Chavva discloses: A base station, comprising: one or more processors; and a memory configured to store one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the CSI transmission method of claim 10. ([220] and Fig. 22 disclose a base station with a memory and processor for executing the method of claim 10 as mapped above.)

Regarding claim 23, Chavva discloses: A non-transitory computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to perform the CSI transmission method of claim 1. ([228] and Fig. 23 disclose user equipment with a memory and processor for executing the method of claim 1 as mapped above.)

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nicholas Jensen whose telephone number is (571)270-5443. The examiner can normally be reached M-F 8:30-5:30 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICHOLAS A JENSEN/
Supervisory Patent Examiner, Art Unit 2472
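The rejection turns on Chavva's hierarchical model, in which the RI network's output feeds the CQI and PMI networks ([165]-[166] as quoted above). A purely illustrative sketch of that data flow follows; the tiny stand-in "models", the feature values, and all weights are placeholder assumptions and are not Chavva's actual networks.

```python
# Illustrative-only sketch (not part of the record) of the hierarchical flow
# the office action cites from Chavva [165]-[166]: a first model predicts the
# rank indicator (RI); its output, together with the preprocessed channel
# features, conditions the models that predict CQI and PMI. All functions and
# numbers here are placeholder assumptions.

def ri_model(features):
    """Stand-in RI predictor: score ranks 1..4 and pick the best."""
    scores = [sum(f * (r + i) for i, f in enumerate(features)) for r in range(4)]
    return scores.index(max(scores)) + 1  # RI in 1..4

def cqi_model(features, ri):
    """Stand-in CQI predictor conditioned on the predicted RI."""
    return int(abs(sum(features)) * ri) % 16  # CQI index 0..15

def pmi_model(features, ri):
    """Stand-in PMI predictor conditioned on the predicted RI."""
    return int(abs(sum(f * f for f in features)) + ri) % 8  # PMI index 0..7

features = [0.3, -1.2, 0.8, 0.5]  # made-up preprocessed channel features
ri = ri_model(features)           # first stage
cqi = cqi_model(features, ri)     # second stage, conditioned on RI
pmi = pmi_model(features, ri)     # second stage, conditioned on RI
print(f"RI={ri}, CQI={cqi}, PMI={pmi}")
```

The point of the sketch is only the dependency structure: CQI and PMI are not computed independently of RI, which is why the examiner maps the RI network's output to the "first CSI" feeding the later stages.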

Prosecution Timeline

Sep 18, 2023
Application Filed
Sep 30, 2025
Non-Final Rejection — §102
Dec 26, 2025
Response Filed
Mar 20, 2026
Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 9111268: (title unavailable). Granted Aug 18, 2015 (2y 5m to grant).
Patent 9059826: (title unavailable). Granted Jun 16, 2015 (2y 5m to grant).
Patent 9053069: (title unavailable). Granted Jun 09, 2015 (2y 5m to grant).
Patent 9036622: Media negotiation method for IP multimedia link. Granted May 19, 2015 (2y 5m to grant).
Patent 9032046: Method for performing a dynamic update of composed web services. Granted May 12, 2015 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 55% (99% with interview, +57.7%)
Median Time to Grant: 5y 6m
PTA Risk: Moderate
Based on 148 resolved cases by this examiner. Grant probability derived from career allow rate.
