Prosecution Insights
Last updated: April 19, 2026
Application No. 18/094,251

Palettization of Kernel Vector in Neural Network Processor

Status: Final Rejection (§103)
Filed: Jan 06, 2023
Examiner: BOSTWICK, SIDNEY VINCENT
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 2 (Final)
Grant Probability: 52% (Moderate)
OA Rounds: 3-4
To Grant: 4y 7m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 52% (71 granted / 136 resolved; -2.8% vs TC avg)
Interview Lift: +38.2% (resolved cases with vs. without an interview)
Avg Prosecution: 4y 7m (68 currently pending)
Total Applications: 204 (across all art units)
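The cards above reduce to simple ratios over the examiner's career counts. A minimal sketch of how these figures could be reproduced (the ~51.8% without-interview rate is inferred from the displayed +38.2% lift, not shown directly on the page):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved cases that ended in a grant."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference in allowance with vs. without an interview."""
    return rate_with - rate_without

# 71 granted out of 136 resolved cases -> ~52% career allow rate
print(f"{allow_rate(71, 136):.1%}")  # 52.2%

# 90% allowance with an interview vs. ~51.8% without -> +38.2 point lift
print(f"{interview_lift(0.90, 0.518) * 100:+.1f}")  # +38.2
```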

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 136 resolved cases
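Each per-statute figure is just a delta against the Tech Center average. A quick sketch (the flat 40% TC baseline is back-solved from the deltas shown above, so treat it as an assumption rather than an official USPTO figure):

```python
# Examiner allowance rate when each statute is at issue, vs. the Tech
# Center average. TC_AVG is inferred from the displayed deltas.
TC_AVG = 0.400

examiner_rates = {"101": 0.244, "103": 0.409, "102": 0.120, "112": 0.219}

# Percentage-point difference vs. the Tech Center average
deltas = {s: round((r - TC_AVG) * 100, 1) for s, r in examiner_rates.items()}
print(deltas)  # {'101': -15.6, '103': 0.9, '102': -28.0, '112': -18.1}
```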

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This Office Action is responsive to Applicant's Amendment filed on March 3, 2026, in which claims 1, 9, 11, 17, and 18 are currently amended. Claims 1-20 are currently pending.

Response to Arguments

Applicant's arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 103, based on the amendment, have been considered and are persuasive. The argument is moot in view of a new ground of rejection set forth below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 6, 9, 10, 14, and 17 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Desoli (US11593609B2) and Wang (WO2020014590A1).
Regarding claim 1, Desoli teaches A neural processor circuit, comprising: a kernel access circuit coupled to memory external to the neural processor circuit, the kernel access circuit configured to read compressed kernel data from the memory; and ([Col. 14 l. 5-6] "the method 800 includes generating encoded kernel data including index data and codebook data by performing a vector quantization process on the kernel data with an encoder external to the convolutional neural network." CNN interpreted as neural processor circuit. See also FIG. 1)

a [plurality of] neural engine circuits configured to receive compressed kernel data from the kernel access circuit, ([Col. 4 l. 55-Col. 5 l. 3] "encoded kernel data 113 is provided to the decompression unit 106. The encoded kernel data 113 corresponds to the encoded kernel tensor values for that convolution layer. The decompression unit 106 receives the encoded kernel data 113" [Col. 14 l. 9-10] "a decompression unit of a convolutional neural network" Decompression unit interpreted as neural engine circuit, which Desoli explicitly states is part of the CNN. FIG. 3 shows that decompression unit 106 is a plurality of circuits including a plurality of buffer circuits.)

each of the neural engine circuits comprising: a kernel extract circuit configured to: extract the look-up table included in the received data block by decompressing the compressed kernel data, ([Col. 5 l. 4-25] "the decompression unit 106 includes a lookup table. The lookup table includes, for each compression accelerator 104 of the CNN 102, a respective codebook. The codebook includes the codewords associated with each code vector for the corresponding kernel data" [Col. 5 l. 45-55] "The control logic receives the indices and looks up the codewords in the codebooks 258 and retrieves the corresponding code vectors. The code vectors are then provided from the lookup stream to the output buffer 259." See also FIG. 3. Control logic 256 interpreted as extract circuit.)

the look-up table having a plurality of coefficient identifiers for kernel coefficients in each entry identified by an index, each coefficient identifier corresponding to a kernel coefficient value, extracting indices for an uncompressed kernel from the compressed kernel data, and ([Col. 5 l. 4-25] "The encoded kernel data 113 includes the indices associated with each of the code vectors in the codebook. The decompression unit 106 looks up the codewords for each index and retrieves the code vectors")

assembling the uncompressed kernel by identifying kernel coefficients in entries of the look-up table corresponding to the extracted indices; and ([Col. 2 l. 61-67] "The kernel tensors can be considered weighting tensors. The values or weights in the kernel tensors are generated during the machine learning process such that when mathematical operations are performed between the input data 108 and the kernel tensors, accurate prediction data 111 is generated" [Col. 4 l. 28-65] "The CNN 102 utilizes a quantization scheme for quantizing the kernel data associated with each convolution layer 105. The kernel data, i.e. the values for the various kernel tensors can correspond to a large amount of data. If the kernel data is stored in an uncompressed manner in the hardware block of the CNN 102, this can correspond to a large amount of memory and bandwidth usage for the CNN, and a corresponding large usage of integrated circuit area. The CNN 102 utilizes a vector quantization technique to encode the kernel data after the machine learning process has taken place. Once the final kernel values are generated for the various convolution layers of the CNN 102 during the machine learning process, the kernel data is encoded using the vector quantization technique. In particular, an encoder maps the input range into a finite range of rational values called a codebook. Any values stored by the codebook is called a code vector. During the encoding phase, an index is associated with all of the code vectors of the codebook. In the case of vector quantization, each code vector can be formed by one or more codewords. [...] After the CNN 102 has been trained, the CNN 102 utilizes the decompression unit 106 to assist in convolution operations. In particular, when feature data is provided to a convolution accelerator corresponding to one of the convolution layers, encoded kernel data 113 is provided to the decompression unit 106. The encoded kernel data 113 corresponds to the encoded kernel tensor values for that convolution layer. The decompression unit 106 receives the encoded kernel data 113 and decodes the encoded kernel data 113 to reproduce the original kernel data generated during the machine learning process" Kernel values interpreted as kernel coefficients)

a multiply-add (MAD) circuit coupled to the kernel decompression circuit to receive the uncompressed kernel data, the MAD circuit further configured to perform neural network operations on a portion of input data using the uncompressed kernel data. ([Col. 4 l. 7-15] "the convolution operation can be decomposed into a series of multiplication and accumulate (MAC) operations. Accordingly, a key operation to consider in the convolution operation is the multiply and accumulate operation. The multiply and accumulate operation corresponds to multiplying the transpose of the kernel tensor 125 by the feature tensor 123.")

However, Desoli does not explicitly teach the received compressed kernel data comprising a data block including a look-up table with entries of coefficient identifiers and a block sparse mask, or a plurality of neural engine circuits.
Wang, in the same field of endeavor, teaches the received compressed kernel data comprising a data block including a look-up table with entries of coefficient identifiers ([¶0005] "encoding at least one of the column swapped quantized reordered weight tensor, the 2D sparse bitmap according to the layered structure, the codebook including a plurality of centroids, or the plurality of column swapping indexes to form a representation of the compressed neural network" Wang explicitly encodes the codebook (lookup table) with entries of coefficient indices) and a block sparse mask, ([¶0005] "encoding at least one of the column swapped quantized reordered weight tensor, the 2D sparse bitmap according to the layered structure, the codebook including a plurality of centroids, or the plurality of column swapping indexes to form a representation of the compressed neural network" Wang explicitly encodes a 2D sparse bitmap (block sparse mask) in the encoded representation) and a plurality of neural engine circuits ([¶0157] "The processor 1220 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. The processor 1220 may be configured to implement any of the schemes described herein using any one or combination of steps described in the embodiments").

Both Desoli and Wang are directed towards neural network compression with a lookup table; therefore, Desoli and Wang are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Desoli with the teachings of Wang by encoding the weights together with a sparse bitmap (mask) in the same compressed data structure in order to reduce memory footprint and power consumption.
Wang provides additional motivation for the combination ([¶0004] "in order to limit the size of the neural network for storage or transmission, the neural network may be compressed for storage and transmission, and decompressed by the computing device using the neural network"). This motivation also applies to the remaining claims which depend on this combination.

Regarding claim 2, the combination of Desoli and Wang teaches The neural processor circuit of claim 1, wherein each of the entries in the look-up table includes a same number of kernel coefficients. (Desoli [Col. 6 l. 23-40] "the lookup table 255 has a memory sufficient to store a codebook including 256 code vectors and eight codewords per code vector. In one embodiment, each codeword is 16 bits. Thus, in one example, the lookup table 255 has a memory allocation of 32,768 bits")

Regarding claim 6, the combination of Desoli and Wang teaches The neural processor circuit of claim 1, wherein the compressed kernel data comprises the look-up table, (Desoli [Col. 5 l. 34-40] "The decompression unit 106 includes an index stream buffer 250, a kernel stream buffer 252, a configuration control block 254, a lookup table 255, and an output stream buffer 259. The lookup table 255 includes control logic 256 and codebooks 258." [Col. 14 l. 10-17] "At 810, the method 800 includes storing the vector quantization codebook in a lookup table of the decompression unit. At 812, the method 800 includes generating decompressed kernel data with the decompression unit by retrieving code vectors from the lookup table with the index data" Desoli explicitly teaches that the decompression unit receives kernels, look-up table, code vectors, and parameters, such that all data stored and/or received by the decompression unit is interpreted as compressed kernel data in view of the instant specification.) a MAD parameter for configuring operations of the MAD circuit in each of the neural engine circuits, (Desoli [Col. 6 l. 5-15] "The index data for the first convolution accelerator is provided to the index stream buffer 250 and the code vectors associated with the kernel tensors for the first convolution accelerator 104 are output to the output stream buffer 259" Index stream buffer interpreted as MAD parameter) and a post-processor parameter for configuring a post-processor in each of the neural engine circuits. (Desoli [Col. 5 l. 34-40] "The decompression unit 106 includes an index stream buffer 250, a kernel stream buffer 252, a configuration control block 254, a lookup table 255, and an output stream buffer 259. The lookup table 255 includes control logic 256 and codebooks 258." [Col. 6 l. 45-50] "the configuration control 254 stores configuration data for the decompression unit 106" Configuration data interpreted as post-processor parameter. Convolution accelerator comprising line buffer and MAC array interpreted as post-processor.)

Regarding claims 9, 10, and 14, these claims are directed towards the method performed by the circuit of claims 1, 2, and 6, respectively. Therefore, the rejections applied to claims 1, 2, and 6 also apply to claims 9, 10, and 14.

Regarding claim 17, claim 17 is substantially similar to claim 1. Therefore, the rejection applied to claim 1 also applies to claim 17.

Claims 3, 4, 8, 11, 12, 16, 18, and 19 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Desoli, Wang, and Cao ("SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity through Low-Bit Quantization", 2019).

Regarding claim 3, the combination of Desoli and Wang teaches The neural processor circuit of claim 1. However, the combination of Desoli and Wang does not explicitly teach wherein the kernel extract circuit is further configured to extract the block sparse mask that indicates one or more blocks of uncompressed kernel data to be filled with zero.
Cao, in the same field of endeavor, teaches the kernel extract circuit is further configured to extract the block sparse mask that indicates one or more blocks of uncompressed kernel data to be filled with zero. ([p. 11220] "Efficient sparsity-mask encoding format. A good sparse encoding format directly increases sparse convolution's computation efficiency. We propose an efficient encoding format, as shown in Figure 6. In this encoding format, we discard all the indices of zero outputs and thus the S-CONV kernel only takes non-zero entries. In addition, we directly encode matrix indexes so that S-Conv can retrieve indices and input vectors with negligible overhead" See also FIG. 6)

The combination of Desoli and Wang, as well as Cao, are directed towards neural network compression. Therefore, the combination of Desoli and Wang and Cao are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Desoli and Wang with the teachings of Cao by using a sparsity mask as, or in combination with, the compression and decompression of Desoli. Cao provides additional motivation for the combination ([p. 11220] "A good sparse encoding format directly increases sparse convolution's computation efficiency").

Regarding claim 4, the combination of Desoli, Wang, and Cao teaches The neural processor circuit of claim 3, wherein the block sparse mask includes a series of bits of a first value indicating a subset of kernel coefficients in the uncompressed kernel data that are zero and a second value indicating another subset of kernel coefficients in the uncompressed kernel data that are non-zero. (Cao [p. 11220] "Efficient sparsity-mask encoding format. A good sparse encoding format directly increases sparse convolution's computation efficiency. We propose an efficient encoding format, as shown in Figure 6. In this encoding format, we discard all the indices of zero outputs and thus the S-CONV kernel only takes non-zero entries. In addition, we directly encode matrix indexes so that S-Conv can retrieve indices and input vectors with negligible overhead" See also FIG. 6)

Regarding claim 8, the combination of Desoli and Wang teaches The neural processor circuit of claim 1. However, the combination of Desoli and Wang does not explicitly teach wherein at least one kernel coefficient in the look-up table is zero.

Cao, in the same field of endeavor, teaches at least one kernel coefficient in the look-up table is zero. ([p. 11220] "Efficient sparsity-mask encoding format. A good sparse encoding format directly increases sparse convolution's computation efficiency. We propose an efficient encoding format, as shown in Figure 6. In this encoding format, we discard all the indices of zero outputs and thus the S-CONV kernel only takes non-zero entries. In addition, we directly encode matrix indexes so that S-Conv can retrieve indices and input vectors with negligible overhead" See also FIG. 6)

The combination of Desoli and Wang, as well as Cao, are directed towards neural network compression. Therefore, the combination of Desoli and Wang and Cao are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Desoli and Wang with the teachings of Cao by using a sparsity mask as, or in combination with, the compression and decompression of Desoli. Cao provides additional motivation for the combination ([p. 11220] "A good sparse encoding format directly increases sparse convolution's computation efficiency").

Regarding claims 11, 12, and 16, these claims are directed towards the method performed by the circuit of claims 3, 4, and 8, respectively. Therefore, the rejections applied to claims 3, 4, and 8 also apply to claims 11, 12, and 16.
Regarding claims 18 and 19, claims 18 and 19 are substantially similar to claims 3 and 4, respectively. Therefore, the rejections applied to claims 3 and 4 also apply to claims 18 and 19.

Claims 5, 13, and 20 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Desoli, Wang, and Moshovos (US20210004668A1).

Regarding claim 5, the combination of Desoli and Wang teaches The neural processor circuit of claim 1. However, the combination of Desoli and Wang does not explicitly teach wherein the kernel extract circuit comprises a kernel look-ahead buffer storing information on locations where kernel coefficients in the uncompressed kernel data are zero, the information on the locations sent to the MAD circuit to skip multiply-add operations associated with the kernel coefficients that are zero.

Moshovos, in the same field of endeavor, teaches the kernel extract circuit comprises a kernel look-ahead buffer storing information on locations where kernel coefficients in the uncompressed kernel data are zero, the information on the locations sent to the MAD circuit to skip multiply-add operations associated with the kernel coefficients that are zero. ([¶0052] "Many activation bits of an average set of input activations to a layer of a neural network are zero, even of the fraction of activations that are non-zero, and thus are ineffectual during multiplication" [¶0077] "As indicated in FIGS. 10 and 11, adding a small number of lookaside inputs by sacrificing lookahead inputs offers a significant marginal gain in performance in testing an embodiment employing only a weight skipping structure, as can be seen in the transition from (7, 0) to (4, 3). For example, the speedup with (7, 0) (or no lookaside) is 2.3 times for AlexNet-ES, as indicated in FIG. 10, and is 2.7 times with (4, 3)" [¶0048] "As depicted in FIG. 7B, a WSU slice 7310 of accelerator tile 7000 produces N 16b×16b products per cycle, output as t1 through tN. Those products feed an adder tree whose output accumulates into an output activation over multiple cycles" [¶0049] "As depicted in FIG. 7C, ASU 7200 generates the Alane, lookahead signals the WSU 7300 uses. The ASU 7200 is provided to supply the input activation needed by the corresponding weight lane and a step distance lookahead to the multiplier 7311. ASU 7200 includes h+1 Activation Block Registers (ABRs) 7210, each holding N input activations")

The combination of Desoli and Wang, as well as Moshovos, are directed towards neural network accelerators. Therefore, the combination of Desoli and Wang and Moshovos are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Desoli and Wang with the teachings of Moshovos by using the lookahead buffer and zero-skipping MAC circuit as the MAC unit in Desoli. Moshovos provides additional motivation for the combination ([¶0069] "For image classification CNNs, often around 45% of activations are zero even after reducing their precision per layer, while often more than 90% of the activation bits are found to be zero, suggesting that the potential for performance improvement is much higher if targeting ineffectual bit content.").

Regarding claim 13, claim 13 is directed towards the method performed by the circuit of claim 5. Therefore, the rejection applied to claim 5 also applies to claim 13.

Regarding claim 20, claim 20 is substantially similar to claim 5. Therefore, the rejection applied to claim 5 also applies to claim 20.

Claims 7 and 15 are rejected under 35 U.S.C. §103 as being unpatentable over the combination of Desoli and Wang, and in further view of Boesch (US11531873B2).
Regarding claim 7, the combination of Desoli and Wang teaches The neural processor circuit of claim 6, wherein the kernel extract circuit is further configured to: extract the MAD parameter and the post-processor parameter from the compressed kernel data, (Desoli [Col. 5 l. 34-40] "The decompression unit 106 includes an index stream buffer 250, a kernel stream buffer 252, a configuration control block 254, a lookup table 255, and an output stream buffer 259. The lookup table 255 includes control logic 256 and codebooks 258." [Col. 6 l. 45-50] "the configuration control 254 stores configuration data for the decompression unit 106" See also FIG. 3, which shows the control logic extracting each of the input streams). However, the combination of Desoli and Wang does not explicitly teach send the MAD parameter to the MAD circuit, and send the post-processor parameter to the post-processor.

Boesch, in the same field of endeavor, teaches send the MAD parameter to the MAD circuit, and send the post-processor parameter to the post-processor. ([Col. 2 l. 9-50] "during operation in the second mode, use kernel decompression tables stored in the feature line buffer to provide decompressed kernel data to one or more kernel buffer memories which then provide the kernel data to one or more of the plurality of MAC circuits of the convolutional accelerator when required. In an embodiment, the one or more vector decompression engines receive encoded kernel data streams comprising one or more kernel data frames, and the one or more kernel data frames include one or more data markers that each indicate a data type of one or more subsequent portions of the encoded kernel data stream. In an embodiment, the indicated data type is a first type signifying compressed kernel data values or a second type signifying kernel decompression tables. In an embodiment, a data marker indicates: a position associated with a next additional data marker within the kernel data frame; a table indicator associated with the data marker; or combinations thereof. In an embodiment, during operation in the second mode, a number of the plurality of MAC circuits of a MAC cluster of one of the convolutional accelerators multiply and accumulate received feature data and kernel data in parallel" Boesch explicitly states that the kernel, along with kernel configuration data, is compressed, decompressed, and forwarded to the MAC shared buffer)

The combination of Desoli and Wang, as well as Boesch, are directed towards neural network accelerators. Therefore, the combination of Desoli and Wang and Boesch are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Desoli and Wang with the teachings of Boesch by compressing and decompressing kernel metadata and configuration parameters to configure the convolutional accelerator. FIG. 4 of Desoli and Boesch show nearly identical convolutional accelerators, such that interfacing with the accelerator in Desoli as suggested by Boesch would lead to obvious and expected results.

Regarding claim 15, claim 15 is directed towards the method performed by the circuit of claim 7. Therefore, the rejection applied to claim 7 also applies to claim 15.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Liu (US20220036184A1) is directed towards a neural network encoding method which encodes the lookup table with the compressed kernel representation.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK whose telephone number is (571)272-4720. The examiner can normally be reached M-F 7:30am-5:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571)270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SIDNEY VINCENT BOSTWICK/
Examiner, Art Unit 2124
/MIRANDA M HUANG/
Supervisory Patent Examiner, Art Unit 2124
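The decompression scheme at the heart of the rejection (vector quantization: a codebook of code vectors plus per-block indices, optionally combined with a block sparse mask) can be illustrated in a few lines. This is a hypothetical sketch of the general technique, not code from any cited patent; all names, shapes, and sizes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Codebook: 256 code vectors of 8 kernel coefficients each (a look-up table)
codebook = rng.standard_normal((256, 8)).astype(np.float32)

# Compressed kernel: one codebook index per 8-coefficient block
indices = rng.integers(0, 256, size=32)

# "Assembling the uncompressed kernel": look up each index's code vector
kernel = codebook[indices].reshape(-1)  # 32 blocks * 8 coefficients = 256 weights

# Block sparse mask: blocks marked False are filled with zero on decompression
mask = rng.integers(0, 2, size=32).astype(bool)
kernel_blocks = codebook[indices] * mask[:, None]
```

The compressed representation here is 32 one-byte indices plus the codebook, versus 256 float32 weights uncompressed, which is the memory/bandwidth saving the references describe.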

Prosecution Timeline

Jan 06, 2023
Application Filed
Nov 08, 2025
Non-Final Rejection — §103
Feb 09, 2026
Examiner Interview Summary
Feb 09, 2026
Applicant Interview (Telephonic)
Mar 03, 2026
Response Filed
Mar 28, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561604: SYSTEM AND METHOD FOR ITERATIVE DATA CLUSTERING USING MACHINE LEARNING (Granted Feb 24, 2026; 2y 5m to grant)
Patent 12547878: Highly Efficient Convolutional Neural Networks (Granted Feb 10, 2026; 2y 5m to grant)
Patent 12536426: Smooth Continuous Piecewise Constructed Activation Functions (Granted Jan 27, 2026; 2y 5m to grant)
Patent 12518143: FEEDFORWARD GENERATIVE NEURAL NETWORKS (Granted Jan 06, 2026; 2y 5m to grant)
Patent 12505340: STASH BALANCING IN MODEL PARALLELISM (Granted Dec 23, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52% (90% with interview, +38.2%)
Median Time to Grant: 4y 7m
PTA Risk: Moderate
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
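The with-interview projection appears to combine the base allow rate with the examiner's observed interview lift. A toy version of that adjustment, assuming the dashboard simply adds the lift to the base rate and caps at 100% (the vendor's actual model is not disclosed):

```python
def projected_grant_probability(base: float, lift: float,
                                with_interview: bool) -> float:
    """Career allow rate, optionally adjusted by the examiner's observed
    interview lift (in probability points), capped at 100%."""
    p = base + (lift if with_interview else 0.0)
    return min(p, 1.0)

# ~51.8% base rate without an interview; +38.2 point lift with one
print(round(projected_grant_probability(0.518, 0.382, with_interview=False), 3))  # 0.518
print(round(projected_grant_probability(0.518, 0.382, with_interview=True), 3))   # 0.9
```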
