Prosecution Insights
Last updated: April 19, 2026
Application No. 19/303,931

Neural Network Representation Formats

Non-Final OA (§101, §103)
Filed: Aug 19, 2025
Examiner: HICKS, AUSTIN JAMES
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (308 granted / 403 resolved; +21.4% vs TC avg)
Interview Lift: +25.1% among resolved cases with interview (strong)
Avg Prosecution: 3y 4m typical timeline (54 currently pending)
Total Applications: 457 across all art units

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 403 resolved cases
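The headline percentages above reduce to simple ratios over this examiner's 403 resolved cases. A minimal sketch of how they are plausibly derived (the Tech Center baseline here is back-calculated from the displayed deltas, not an official figure):

```python
# Rough reconstruction of the dashboard's headline statistics.
# The Tech Center baseline below is an assumption inferred from the
# displayed "+21.4% vs TC avg" chip, not an official USPTO figure.

granted, resolved = 308, 403              # examiner career totals
allow_rate = granted / resolved           # displayed as 76%

tc_avg_allow = 0.55                       # assumed TC 2100 baseline
delta_vs_tc = allow_rate - tc_avg_allow   # the "+21.4% vs TC avg" chip

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Delta vs TC avg:   {delta_vs_tc:+.1%}")
```

The statute-specific rows presumably follow the same pattern against per-statute baselines.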

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice about prior art

Claim 29 is not taught or made obvious by the prior art of record.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim is directed to data per se with no structural recitation.

Claims 1-34 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of a mathematical relationship without significantly more. The claims recite encoding and decoding neural network parameters using different algorithms. This judicial exception is not integrated into a practical application because the claims (with the exception) are merely linked to computer technology. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because claim elements such as apparatus, computer and digital storage medium are generic computer parts.

Claim Objections

A series of singular dependent claims is permissible in which a dependent claim refers to a preceding claim which, in turn, refers to another preceding claim. A claim which depends from a dependent claim should not be separated by any claim which does not also depend from said dependent claim. It should be kept in mind that a dependent claim may refer to any preceding independent claim. In general, applicant's sequence will not be changed. See MPEP § 608.01(n).
Claims 10 and 11 are improperly dependent because they are separated from claim 4, on which they depend, by claims 5-9, which do not also depend from claim 4.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-28 and 30-34 are rejected under 35 U.S.C. 103 as being unpatentable over "Neural Network Compression using Transform Coding and Clustering" by Laude et al., US20180082181A1 by Brothers et al., and "Neural Network-Based Arithmetic Coding of Intra Prediction Modes in HEVC" by Song et al.

Laude teaches claims 1, 2, 3, 31, 32, 33 and 34. Data stream having a representation of a neural network encoded thereinto (Laude abs.: "we propose a codec for the compression of neural networks which is based on transform coding for convolutional and dense layers and on clustering for biases and normalizations."), the data stream comprising serialization (Laude sec. III: "The processed data from the transform coding and from the clustering are entropy coded layer-wise using BZip2, serialized and written to the output file." Laude sec. III: "The trained neural network model is the input of the codec. It consists of one-dimensional and two-dimensional weights, biases, normalizations and the architecture itself (number of layers/filters, connections, etc.)." Table II shows the decoding and encoding times. Laude sec. III teaches "the decoded network models can be loaded using the same APIs used for the original models," and encoding is taught as a compression method in Laude sec. III), wherein the neural network parameters are coded into the data stream using context-adaptive arithmetic coding.

Laude doesn't teach a serialization parameter indicating a coding order. However, Brothers teaches a data stream comprising a serialization parameter indicating a coding order (Brothers para. 22: "the neural network reordering may be selected to introduce an ordering to the weights…"; introducing an ordering of weights is serializing the weights). Brothers, the claims, and Laude all encode/decode data streams. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to encode according to Brothers because by "reordering network layers, an ordering can be introduced to the weights that are selected to provide better weight compression." Brothers para. 22.

Laude doesn't teach context-adaptive arithmetic coding. However, Song teaches data coded into the data stream using context-adaptive arithmetic coding (Song abs.: "we propose to directly estimate the probability distribution of the 35 intra prediction modes with the adoption of a multi-level arithmetic codec. Instead of handcrafted context models, we utilize convolutional neural network (CNN) to perform the probability estimation."). Song, Laude, and the claims all encode neural network parameters. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use Song because their "Simulation results show that our proposed arithmetic coding leads to as high as 9.9% bits saving compared with CABAC." Song abs.

Laude teaches claim 4.
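The weight-reordering rationale quoted from Brothers (order the weights so they compress better, then signal the permutation so the decoder can undo it) can be sketched as a rough, hypothetical illustration; the toy layer shape and the mean-value sort key are assumptions, not anything from Brothers:

```python
# Hypothetical sketch of the Brothers-style reordering idea: reorder a
# layer's output channels before coding so values trend upward, and
# transmit the permutation (the claimed "serialization parameter") so
# the decoder can restore the original order. Not Brothers' actual code.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 4))        # toy layer: 8 output channels

# Encoder side: sort channels by mean weight (hypothetical sort key)
perm = np.argsort(weights.mean(axis=1))  # the serialization parameter
coded = weights[perm]                    # channel means now increase

# Decoder side: invert the permutation to recover the original layout
restored = coded[np.argsort(perm)]
assert np.array_equal(restored, weights)  # lossless round trip
```

The point of the round-trip check is that the permutation costs only its own signaling overhead; the reordering itself loses no information.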
Apparatus of claim 3, wherein the data stream is structured into one or more individually accessible portions, each individually accessible portion representing a corresponding neural network layer of the neural network (Laude abs.: "we propose a codec for the compression of neural networks which is based on transform coding for convolutional and dense layers and on clustering for biases and normalizations."), and wherein the apparatus is configured to decode serially, from the data stream, neural network parameters, which define neuron interconnections of the neural network within a predetermined neural network layer (Laude sec. III: "The processed data from the transform coding and from the clustering are entropy coded layer-wise using BZip2, serialized and written to the output file. …In addition to the weights, biases and normalizations, meta data is required for the decoding process and thus included in the output file as well."), and use the coding (Laude sec. III: "In addition to the weights, biases and normalizations, meta data is required for the decoding process and thus included in the output file as well. It includes the architecture of the layers in the network, shapes and dimensions of the filters, details on the block arrangements, scaling factors from the pre-scaling, scaling factors and offsets from the quantizer, and the code books for the clustering."). Laude doesn't teach serially coding in a particular order. However, Brothers teaches using the coding order to assign… [data] serially decoded from the data stream (Brothers para. 22: "the neural network reordering may be selected to introduce an ordering to the weights…"; introducing an ordering of weights is serializing the weights).

Brothers teaches claim 5. Apparatus of claim 3, wherein the serialization parameter is indicative of a permutation using which the coding order permutes neurons of a neural network layer relative to a default order. (Brothers para. 22: "the neural network reordering may be selected to introduce an ordering to the weights…"; introducing an ordering of weights is serializing the weights.)

Brothers teaches claim 6. Apparatus of claim 5, wherein the permutation orders the neurons of the neural network layer in a manner so that the neural network parameters monotonically increase along the coding order or monotonically decrease along the coding order. (Brothers para. 22: "network feature maps can be reordered so that weight values tend to increase or the number of zero value weights increase.")

Brothers teaches claim 7. Apparatus of claim 5, wherein the permutation orders the neurons of the neural network layer in a manner so that, among predetermined coding orders signalable by the serialization parameter, a bitrate for coding the neural network parameters into the data stream is lowest for the permutation indicated by the serialization parameter. (Brothers para. 34: "Sets of quantized weights within clusters may also be selected to maximize effectiveness of predictions." Brothers para. 37: "A coding scheme, such as an entropy coding scheme, may be used. For example, Huffman coding may be used to represent the deltas with a number of bits. Efficient compression can be achieved by representing the most common deltas with the fewest possible bits." The deltas are prediction measurements, Brothers para. 35: "deltas are computed versus prediction…" The permutation order is signaled based on the prediction delta (serialization parameter), and a number of bits is chosen for the serialization parameter (delta).)

Laude teaches claim 8. Apparatus of claim 3, wherein the neural network parameters comprise weights and biases. (Laude sec. III: "The trained neural network model is the input of the codec. It consists of one-dimensional and two-dimensional weights, biases, normalizations and the architecture itself (number of layers/filters, connections, etc.). All layers of the network are processed individually.")

Laude teaches claim 9. Apparatus of claim 3, wherein the apparatus is configured to decode, from the data stream, individually accessible sub-portions, into which individually accessible portions the data stream is structured, each sub-portion representing a corresponding neural network portion of the neural network, so that each sub-portion is completely traversed by the coding order before a subsequent sub-portion is traversed by the coding order. (Laude sec. III: "The weights of dense layers (also referred to as fully connected layers) and of 1×1 convolutions (no spatial filtering but filtering over the depth of the previous layer, typically used in networks for depth reduction) are arranged block-wise prior to transform coding." The blocks are the sub-portions.)

Laude teaches claim 10. Apparatus of claim 4, wherein the neural network parameters are decoded from the data stream using (Laude sec. III: "To achieve this, the one-dimensional weights are reshaped to the largest block size (up to a specified level of 8×8). Although these one-dimensional parameters do not directly have a spatial context, our research revealed that the transform coding still has a higher entropy-reducing impact than direct quantization.")

Laude teaches claim 11. Apparatus of claim 4, wherein the apparatus is configured to decode, from the data stream, start codes at which each individually accessible portion or sub-portion begins, and/or pointers pointing to beginnings of each individually accessible portion or sub-portion, and/or data stream lengths of each individually accessible portion or sub-portion for skipping the respective individually accessible portion or sub-portion in parsing the data stream. (Laude sec. III: "All layers of the network are processed individually. This simplifies partly retroactive updates of individual layers without transmitting the complete network again." This requires a pointer or start code to identify the separate layers. The skipping is taught by "updates of individual layers without transmitting the complete network.")

Laude teaches claim 12. Apparatus of claim 3, wherein the apparatus is configured to decode, from the data stream, a numerical computation representation parameter indicating a numerical representation and bit size at which the neural network parameters are to be represented when using the neural network for inference. (Laude sec. III: "The bit depth of the quantizer can be tuned according to the needs of the specific application. Typical values are 5-6 bit/coefficient with only a small accuracy impact.")

Laude teaches claim 13. Apparatus of claim 3, wherein the data stream is structured into individually accessible sub-portions, each individually accessible sub-portion representing a corresponding neural network portion of the neural network, so that each individually accessible sub-portion is completely traversed by the coding order before a subsequent individually accessible sub-portion is traversed by the coding order (Laude sec. III: "The processed data from the transform coding and from the clustering are entropy coded layer-wise using BZip2, serialized and written to the output file." Laude sec. III: "The weights of dense layers (also referred to as fully connected layers) and of 1×1 convolutions… are arranged block-wise prior to transform coding."), wherein the apparatus is configured to decode, from the data stream, for a predetermined individually accessible sub-portion, the neural network parameter and a type parameter indicating a parameter type of the neural network parameter decoded from the predetermined individually accessible sub-portion. (Laude sec. III: "Padding to complete bytes is applied if necessary. In addition to the weights, biases and normalizations, meta data is required for the decoding process and thus included in the output file as well. It includes the architecture of the layers in the network, shapes and dimensions of the filters…")

Laude teaches claim 14. Apparatus of claim 13, wherein the type parameter discriminates, at least, between neural network weights and neural network biases. (Laude sec. III: "Padding to complete bytes is applied if necessary. In addition to the weights, biases and normalizations, meta data is required for the decoding process…")

Laude teaches claim 15. Apparatus of claim 3, wherein the data stream is structured into one or more individually accessible portions, each individually accessible portion representing a corresponding neural network layer of the neural network (Laude sec. III: "The processed data from the transform coding and from the clustering are entropy coded layer-wise using BZip2, serialized and written to the output file. Padding to complete bytes is applied if necessary. In addition to the weights, biases and normalizations, meta data is required for the decoding process and thus included in the output file as well. It includes the architecture of the layers in the network…"), wherein the apparatus is configured to decode, from the data stream, for a predetermined neural network layer, a neural network layer type parameter indicating a neural network layer type of the predetermined neural network layer of the neural network. (Laude sec. III: "It includes the architecture of the layers in the network….")

Laude teaches claim 16. Apparatus of claim 15, wherein the neural network layer type parameter discriminates, at least, between a fully-connected and a convolutional layer type. (Laude sec. III: "It includes the architecture of the layers in the network…." Laude sec. III: "The weights of dense layers (also referred to as fully connected layers) and of 1×1 convolutions (no spatial filtering but filtering over the depth of the previous layer, typically used in networks for depth reduction) are arranged block-wise prior to transform coding.")

Laude teaches claim 17. Apparatus of claim 3, wherein the apparatus is configured to decode a representation of a neural network from the data stream, wherein the data stream is structured into one or more individually accessible portions, each individually accessible portion representing a corresponding neural network layer of the neural network, and wherein the data stream is, within a predetermined portion, further structured into individually accessible sub-portions, each sub-portion representing a corresponding neural network portion of the respective neural network layer of the neural network, wherein the apparatus is configured to decode from the data stream, for each of one or more predetermined individually accessible sub-portions (Laude sec. III: "The processed data from the transform coding and from the clustering are entropy coded layer-wise using BZip2, serialized and written to the output file." Laude sec. III: "The weights of dense layers (also referred to as fully connected layers) and of 1×1 convolutions… are arranged block-wise prior to transform coding… the one-dimensional weights are reshaped to the largest block size (up to a specified level of 8×8)." The layer-wise processing is portion-wise processing; the sub-portions are taught by the blocks) a start code at which the respective predetermined individually accessible sub-portion begins, and/or (the blocks are separate, so they at least have an address; this address describes Applicant's start code.)
a pointer pointing to a beginning of the respective predetermined individually accessible sub-portion (the blocks are separate, so they at least have an address; this address describes Applicant's pointer), and/or a data stream length parameter indicating a data stream length of the respective predetermined individually accessible sub-portion for skipping the respective predetermined individually accessible sub-portion in parsing the data stream. (Laude sec. III: "The bit depth of the quantizer can be tuned according to the needs of the specific application. Typical values are 5-6 bit/coefficient with only a small accuracy impact.")

Laude teaches claim 18. Apparatus of claim 17, wherein the apparatus is configured to decode, from the data stream, the representation of the neural network using (Laude sec. III: "In addition to the weights, biases and normalizations, meta data is required for the decoding process and thus included in the output file as well. It includes the architecture of the layers in the network, shapes and dimensions of the filters, details on the block arrangements, scaling factors from the pre-scaling, scaling factors and offsets from the quantizer, and the code books for the clustering." The metadata includes the context initialization, e.g. shapes of filters.) Laude doesn't teach context-adaptive decoding. However, Song teaches context-adaptive arithmetic decoding. (Song abs.: "we propose to directly estimate the probability distribution of the 35 intra prediction modes with the adoption of a multi-level arithmetic codec. Instead of handcrafted context models, we utilize convolutional neural network (CNN) to perform the probability estimation.")

Laude teaches claim 19.
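The start-code/pointer/length mechanism at issue in claims 17-18 amounts to a skippable container format. A minimal sketch, assuming a hypothetical 4-byte little-endian length prefix per portion (the field width and layout are illustrative, not from the application or the references):

```python
# Hypothetical sketch of individually accessible, skippable portions:
# each portion is preceded by a length field, so a parser can jump
# straight to the portion it wants without decoding the others.
import struct

def pack(portions):
    out = b""
    for payload in portions:
        out += struct.pack("<I", len(payload)) + payload  # length prefix
    return out

def read_portion(stream, index):
    pos = 0
    for _ in range(index):                      # skip earlier portions
        (n,) = struct.unpack_from("<I", stream, pos)
        pos += 4 + n
    (n,) = struct.unpack_from("<I", stream, pos)
    return stream[pos + 4 : pos + 4 + n]

data = pack([b"layer0-weights", b"layer1-weights", b"layer2-weights"])
assert read_portion(data, 2) == b"layer2-weights"
```

This is also why such a format "simplifies partly retroactive updates of individual layers": a single portion can be replaced without touching the others, as long as its length field is updated.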
Apparatus of claim 3, wherein the apparatus is configured to decode a representation of a neural network from a data stream, wherein the data stream is structured into individually accessible portions, each portion representing a corresponding neural network portion of the neural network, wherein the apparatus is configured to decode from the data stream, for each of one or more predetermined individually accessible portions, an identification parameter for identifying the respective predetermined individually accessible portion. (Laude sec. III: "In addition to the weights, biases and normalizations, meta data is required for the decoding process and thus included in the output file as well. It includes the architecture of the layers in the network, shapes and dimensions of the filters, details on the block arrangements, scaling factors from the pre-scaling, scaling factors and offsets from the quantizer, and the code books for the clustering.")

Laude teaches claim 20. Apparatus of claim 19, wherein the identification parameter is related to the respective predetermined individually accessible portion via a hash function or error detection code or error correction code. (Laude sec. III, metadata passage quoted above. Codebooks are error detection and error correction codes.)

Laude teaches claim 21. Apparatus of claim 19, wherein the apparatus is configured to decode, from the data stream, a higher-level identification parameter for identifying a collection of more than one predetermined individually accessible portion. (Laude sec. III, metadata passage quoted above. Codebooks are higher-level ID parameters and error detection and error correction codes.)

Laude teaches claim 22. Apparatus of claim 21, wherein the higher-level identification parameter is related to the identification parameters of the more than one predetermined individually accessible portion via a hash function or error detection code or error correction code. (Laude sec. III, metadata passage quoted above. Codebooks are higher-level ID parameters and error detection and error correction codes.)

Laude teaches claim 23. Apparatus of claim 3, wherein the apparatus is configured to decode a representation of a neural network from a data stream, wherein the data stream is structured into individually accessible portions, each portion representing a corresponding neural network portion of the neural network, wherein the apparatus is configured to decode from the data stream, for each of one or more predetermined individually accessible portions, supplemental data for supplementing the representation of the neural network. (Laude sec. III, metadata passage quoted above. Supplemental data is metadata.)

Laude teaches claim 24. Apparatus of claim 23, wherein the data stream indicates the supplemental data as being dispensable for inference based on the neural network. (Laude sec. III, metadata passage quoted above. Supplemental data is metadata; filters are dispensable for inference.)

Laude teaches claim 25. Apparatus of claim 23, wherein the apparatus is configured to decode the supplemental data for supplementing the representation of the neural network for the one or more predetermined individually accessible portions from further individually accessible portions, wherein the data stream comprises for each of the one or more predetermined individually accessible portions a corresponding further predetermined individually accessible portion relating to the neural network portion to which the respective predetermined individually accessible portion corresponds. (Laude sec. III, metadata passage quoted above; supplemental data is metadata. Laude sec. III: "The weights of dense layers (also referred to as fully connected layers) and of 1×1 convolutions (no spatial filtering but filtering over the depth of the previous layer, typically used in networks for depth reduction) are arranged block-wise prior to transform coding." The blocks are the sub-portions.)

Laude teaches claim 26. Apparatus of claim 23, wherein the supplemental data relates to relevance scores of neural network parameters, and/or perturbation robustness of neural network parameters. (Laude sec. III, metadata passage quoted above. Supplemental data is metadata; scaling factors relate to perturbation robustness.)

Laude teaches claim 27. Apparatus of claim 3, for decoding a representation of a neural network from a data stream, wherein the apparatus is configured to decode from the data stream hierarchical control data structured into a sequence of control data portions, wherein the control data portions provide information on the neural network at increasing details along the sequence of control data portions. (Laude sec. III, metadata passage quoted above. The control data is the architecture of the layers.
Everything is decoded/encoded "layer-wise", therefore there is a sequence of control data portions.)

Laude teaches claim 28. Apparatus of claim 27, wherein at least some of the control data portions provide information on the neural network which is partially redundant. (The architecture from Laude sec. III is partially redundant because the blocks of different layers are arranged differently based on the layer architecture: "The weights of dense layers (also referred to as fully connected layers) and of 1×1 convolutions (no spatial filtering but filtering over the depth of the previous layer, typically used in networks for depth reduction) are arranged block-wise prior to transform coding.")

Laude teaches claim 30. Apparatus for performing an inference using a neural network, comprising an apparatus for decoding a data stream according to claim 3, so as to derive from the data stream the neural network (Laude sec. III: "In addition to the weights, biases and normalizations, meta data is required for the decoding process… Thereby, the decoded network models can be loaded using the same APIs used for the original models."), and a processor configured to perform the inference based on the neural network. (Laude sec. III: "With the deployment of neural networks on mobile devices…" Deploying a model is using a model for inference; mobile devices have processors.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks, whose telephone number is (571) 270-3377. The examiner can normally be reached Monday-Thursday, 8-4 PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN HICKS/
Primary Examiner, Art Unit 2142
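The tunable quantizer bit depth the Office Action quotes from Laude (typically 5-6 bits per coefficient) can be illustrated with a plain uniform quantizer. This is a rough sketch under assumed details, not the paper's implementation; the scale/offset pair stands in for the quantizer metadata Laude says the decoder needs:

```python
# Illustrative uniform quantizer: map weights to b-bit indices plus the
# scale/offset metadata a decoder would need. Assumed details, not the
# actual Laude et al. codec.
import numpy as np

def quantize(w, bits):
    levels = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / levels
    q = np.round((w - lo) / scale).astype(np.uint16)
    return q, scale, lo                   # indices + decoder metadata

def dequantize(q, scale, lo):
    return q * scale + lo

rng = np.random.default_rng(1)
w = rng.normal(size=1000).astype(np.float32)
for bits in (5, 6, 8):
    q, s, lo = quantize(w, bits)
    err = np.abs(dequantize(q, s, lo) - w).max()
    print(f"{bits} bits: max abs error {err:.4f}")
```

The maximum reconstruction error is bounded by half the step size, which is why adding a bit of depth roughly halves the error; this is the "small accuracy impact" trade-off the quote describes.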

Prosecution Timeline

Aug 19, 2025
Application Filed
Oct 24, 2025
Response after Non-Final Action
Mar 06, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591767
NEURAL NETWORK ACCELERATION CIRCUIT AND METHOD
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12554795
REDUCING CLASS IMBALANCE IN MACHINE-LEARNING TRAINING DATASET
Granted Feb 17, 2026 • 2y 5m to grant
Patent 12530630
Hierarchical Gradient Averaging For Enforcing Subject Level Privacy
Granted Jan 20, 2026 • 2y 5m to grant
Patent 12524694
OPTIMIZING ROUTE MODIFICATION USING QUANTUM GENERATED ROUTE REPOSITORY
Granted Jan 13, 2026 • 2y 5m to grant
Patent 12524646
VARIABLE CURVATURE BENDING ARC CONTROL METHOD FOR ROLL BENDING MACHINE
Granted Jan 13, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+25.1%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
