Prosecution Insights
Last updated: April 19, 2026
Application No. 18/716,519

APPARATUS AND METHOD FOR ADVERSARIAL FEATURE SELECTION CONSIDERING ATTACK FUNCTION OF VEHICLE CAN

Non-Final OA (§103, §112)
Filed: Nov 05, 2024
Examiner: BHANDARI, SHREYAJ RAM
Art Unit: 2434
Tech Center: 2400 — Computer Networks
Assignee: Foundation Of Soongsil University-Industry Cooperation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 1m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -58.0% vs TC avg)
Interview Lift: +0.0% (minimal lift in resolved cases with interview)
Typical Timeline: 3y 1m average prosecution
Career History: 3 total applications across all art units, 3 currently pending

Statute-Specific Performance

§103: 85.7% (+45.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Based on career data from 0 resolved cases; the Tech Center average shown is an estimate.

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-10 have been examined.

Drawings

The drawings filed on November 5, 2024 are acceptable for examination proceedings.

Specification

The specification filed on November 5, 2024 is acceptable for examination proceedings.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 10-2021-0183721, filed on December 21, 2021.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "a data generation module configured to collect...and generate…"; "a preprocessing module configured to insert..."; and "an adversarial attack generation module configured to receive...and generate…" in claim 1; “data generation module extracts…extracts…adds…and aggregates…” in claim 2; “data generation module generates…and includes…” in claim 4; “preprocessing module inserts…” in claim 6; “preprocessing module determines…and determines…” in claim 7; “preprocessing module inserts…inserts…inserts…and inserts…” in claim 8; “a generator trained to receive………..and generate…”; “an intrusion detection system (IDS) configured to receive…”; and “a discriminator trained to receive……and classify…” in claim 9.

Paragraph [35] provides sufficient structure for claims 1, 2, and 6-9. Paragraphs [35] and [51] provide sufficient structure for claim 4. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 9 recites the limitation "the result" in limitation 3. There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, the claim is being interpreted as reciting “a result”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 
Claim(s) 1-6, 9, and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dowan Kim and Daeseon Choi ("Avoiding the Intrusion Detection System GAN-based adversarial for physical attack CAN frame creation method", hereinafter referred to as Kim) in view of Zilong Lin, Yong Shi, and Zhi Xue ("IDSGAN: Generative Adversarial Networks for Attack Generation against Intrusion Detection", hereinafter referred to as Lin).

Regarding claim 1, Kim discloses: An adversarial attack apparatus (Kim: Examiner's note: Fig. 3 has the same features as Fig. 8 which shows "a configuration of an adversarial attack generation module."), comprising: a data generation module configured to collect a plurality of controller area network (CAN) messages and [generate] a CAN message packet dataset based on the plurality of CAN messages (Kim: Fig. 1, Fig. 2, section 3.1. Section 3.1 states, "The dataset used in this paper uses preliminary data provided in the car hacking attack/defense section of the 2020 Cyber Security Challenge [22]. The form of the data is shown in Fig. Same as 2. Timestamp indicates the time when CAN messages are logged. Arbitration ID is CAN message." Examiner's note: "Fig. Same as 2" is interpreted as "Fig. 2" and Fig. 2 in this article is the same drawing as Fig. 3 which illustrates a "CAN message packet dataset."), but fails to explicitly disclose: generate a CAN message packet dataset based on the plurality of CAN messages.

However, in the same field of endeavor, Lin discloses: generate a CAN message packet dataset based on the plurality of CAN messages (Lin: Section 2.3 states that "this generation should retain the attack function of the malicious traffic so that the malicious traffic based on the adversarial records from IDSGAN can be reproduced and launch network attacks in reality."). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kim and include the above limitation with the teaching of Lin in order "to prevent the non-convergence and instability of GAN, IDSGAN is proposed based on the structure of Wasserstein GAN."

Kim further discloses: a preprocessing module configured to insert noise into some CAN message packets in the CAN message packet dataset (Kim: The Introduction and Section 3.4. The introduction states that "we extract and modulate some of the pre-processed packet features". Section 3.4 states, "In the features of each attack, noise between 0 and 1 is added to the features that are judged to be unrelated to the attack function and input into the generator." Examiner's note: Fig. 2 is the same as Fig. 3 in the claimed invention and it is described as "a CAN message packet dataset."); and an adversarial attack generation module configured to receive the CAN message packet into which the noise is inserted and generate an adversarial CAN message capable of avoiding an intrusion detection system (IDS) of a vehicle (Kim: The summary states that the authors propose "a hostile CAN frame generation method to avoid IDS and attack real vehicles by adding noise to features to avoid IDS and selecting and packetizing features for physical attacks on actual vehicles.").

Regarding claim 2, Kim discloses: The adversarial attack apparatus of claim 1, wherein the data generation module extracts an ID in an arbitration field from the plurality of collected CAN messages, extracts a data length code (DLC) in a control field, extracts data in a data field, adds a timestamp of each CAN message and type information about each CAN message to the extracted information to generate a CAN message packet, and aggregates the generated CAN message packets to constitute the CAN message packet dataset (Kim: Sections 2.1 and 3.1. 
Section 2.1 states that "ID in the Arbitration Field, DLC in the Control Field, and Data in the Data Field are used." Section 3.1 states, "Timestamp indicates the time when CAN messages are logged. Arbitration ID is CAN message. It is the identification ID of the message, and DLC is the size of data (Bytes means number). Data is the data of the CAN message. It is a loaded field, and Class and Subclass respectively correspond to Whether the CAN message is normal or an attack and whether it corresponds to an attack If so, it indicates what kind of attack it corresponds to." Examiner's note: The specification of the current invention states that "type information may include class information indicating whether the CAN message packet is a normal packet or an attack packet and subclass information indicating whether the CAN message packet is any type of attack among a flooding attack, a fuzzing attack, a relay attack, and a spoofing attack, when the CAN message packet is the attack packet." Kim Fig. 2 represents the generated CAN message packet dataset which shows the timestamp of each message in the dataset.).

Regarding claim 3, Kim discloses: The adversarial attack apparatus of claim 2, wherein the type information includes class information indicating whether the CAN message packet is a normal packet or an attack packet and subclass information indicating whether the CAN message packet is any type of attack among a flooding attack, a fuzzing attack, a relay attack, and a spoofing attack, when the CAN message packet is the attack packet (Kim: Sections 3.1 and 2.3. Section 3.1 states, "Timestamp indicates the time when CAN messages are logged. Arbitration ID is CAN message. It is the identification ID of the message, and DLC is the size of data (Bytes means number). Data is the data of the CAN message. 
It is a loaded field, and Class and Subclass respectively correspond to Whether the CAN message is normal or an attack and whether it corresponds to an attack If so, it indicates what kind of attack it corresponds to." Section 2.3 states, "Injection attack scenario for CAN is Flooding Attack, Fuzzing Attack, Replay Attack, Spoofing Ball It consists of four cases [18]. Flooding attacks come first Bulk distribution of highest-ranking CAN packet messages Even if other CAN packet messages do not work, It is an attack that locks. Fuzzing attacks are randomly selected Attack that injects random data into Arbitration ID corresponds to Replay attacks occur on normal CAN packet meshes. extract for a certain period of time and then re-extract the extracted messages. It is an injection attack. Spoofing attacks are when attackers specifically The desired attack may occur in the regular arbitration ID. This is an attack that selectively injects data." Examiner's note: "replay" and "relay" attacks are described to do exactly the same thing.).

Regarding claim 4, Kim discloses: The adversarial attack apparatus of claim 2, wherein the data generation module generates statistical information about each CAN message packet based on the CAN message packet dataset and includes the generated statistical information in each CAN message packet (Kim: Section 3.1 states, "CAN Yes Preprocessing of network packets involves one-hot encoding and This is mainly done using Min-Max Scaler. Min-Max Scaler is one of the data scaling All features were readjusted to exist between 0 and 1. Among the data, Arbitration ID is composed of hexadecimal numbers. The data consists of 8 bytes, and the ID is Processed with one-hot encoding, data is 1 bar Bits were converted to bits. Previous Arbitration ID also One-hot encoding was used the same as the current ID. 
And, time difference from the previous same ID, previous same data Time difference between packets, number of packets with same ID among 1000 packets Number, the same number of data out of 1000 packets, these 4 Features range from 0 to 1 by applying Min-max scaler.").

Regarding claim 5, Kim discloses: The adversarial attack apparatus of claim 4, wherein the statistical information includes one or more of a time difference between a corresponding packet and a previous packet with the same arbitration ID as the corresponding packet in the CAN message packet dataset, a time difference between the corresponding packet and a previous packet with the same data as the corresponding packet, the number of packets with the same arbitration ID as the corresponding packet in the CAN message packet dataset, and the number of packets with the same data as the corresponding packet in the CAN message packet dataset (Kim: Section 3.1 states, "Among the data, Arbitration ID is composed of hexadecimal numbers. The data consists of 8 bytes, and the ID is Processed with one-hot encoding, data is 1 bar Bits were converted to bits. Previous Arbitration ID also One-hot encoding was used the same as the current ID. And, time difference from the previous same ID, previous same data Time difference between packets, number of packets with same ID among 1000 packets Number, the same number of data out of 1000 packets, these 4 Features range from 0 to 1 by applying Min-max scaler.").

Regarding claim 6, Kim discloses: The adversarial attack apparatus of claim 5, wherein the preprocessing module inserts noise based on type information of each CAN message packet in the CAN message packet dataset (Kim: Section 3.4 states, "Feature importance is a function that extracts the importance of features that the random forest model determines to be most important. 
In the features of each attack, noise between 0 and 1 is added to the features that are judged to be unrelated to the attack function and input into the generator.").

Regarding claim 9, Kim discloses: The adversarial attack apparatus of claim 6, wherein the adversarial attack generation module (Kim: Examiner's note: Fig. 3 has the same features and depicted steps as Fig. 8 of the claimed invention which shows "a configuration of an adversarial attack generation module.") includes: a generator trained to receive the CAN message packet into which the noise is inserted and generate the adversarial CAN message (Kim: Section 3.3 states, "Feature-based adversarial CAN packet generation is preprocessed Jobs between 0 and 1 for all CAN features that have gone through Input data to the generator by adding noise…As learning progresses, the generator ultimately creates a CAN feature that can avoid black-box IDS."); an intrusion detection system (IDS) configured to receive the adversarial CAN message output by the generator and a normal CAN message packet in the CAN message packet dataset and label the result of classifying the adversarial CAN message and the normal CAN message packet (Kim: Section 3.3 states, "Feature-based adversarial CAN packet generation is preprocessed Jobs between 0 and 1 for all CAN features that have gone through Input data to the generator by adding noise Let it go into. Generator has 164 features and provide it as input data to Black-box IDS. It becomes. Black-box IDS checks whether the packet is normal. Classifies whether it is an attack and outputs the classified label value."), but fails to explicitly disclose: and a discriminator trained to receive the adversarial CAN message output by the generator and the normal CAN message packet in the CAN message packet dataset and classify the adversarial CAN message and the normal CAN message packet as attack or normal based on the classified result labeled by the IDS. 
However, Lin further discloses: and a discriminator trained to receive the adversarial CAN message output by the generator and the normal CAN message packet in the CAN message packet dataset and classify the adversarial CAN message and the normal CAN message packet as attack or normal based on the classified result labeled by the IDS (Lin: Examiner's note: Section 2.3 states, "In the training to the discriminator, the normal traffic records and the adversarial malicious traffic records are first classified by the black-box IDS. Then, for the discriminator's learning to the black-box IDS, the same dataset is shared with the discriminator as the training set, of which the target labels are the prediction from the black-box IDS."). The same motivation to modify with Lin, as in claim 1, applies.

Claim 10 recites features similar to those in claim 1; therefore, it is rejected in a similar manner.

Claim(s) 7 and 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dowan Kim and Daeseon Choi ("Avoiding the Intrusion Detection System GAN-based adversarial for physical attack CAN frame creation method", hereinafter referred to as Kim) in view of Zilong Lin, Yong Shi, and Zhi Xue ("IDSGAN: Generative Adversarial Networks for Attack Generation against Intrusion Detection", hereinafter referred to as Lin) and in further view of Alvaro Diaz and Pablo Sanchez ("Simulation of Attacks for Security in Wireless Sensor Network", hereinafter referred to as Diaz).

Regarding claim 7, Kim discloses: The adversarial attack apparatus of claim 6, wherein the preprocessing module determines whether to insert the noise into the CAN message packet based on class information in the type information (Kim: Sections 3.1 and 3.4. Section 3.1 states that "Class and Subclass respectively correspond to Whether the CAN message is normal or an attack and whether it corresponds to an attack If so, it indicates what kind of attack it corresponds to." 
Section 3.4 states, "Because flooding attacks are attacks that transmit high-priority IDs in large quantities, Random Forest determines that the current Arbitration ID and statistical features are important. To prevent flooding attacks from losing their meaning, configure the current data and statistics, excluding the current ID, to be altered. Fuzzing attacks are attacks that inject random data into randomly selected IDs, so statistics and current data features are considered important. Therefore, set to falsify the current data and statistics except for the selected ID part. Replay attacks are attacks that extract normal CAN packets that were flowing for a certain period of time and re-inject them, so statistics are considered the most important. Replay attacks have the limitation that if the ID or data is altered and changed, the meaning of the replay attack may be lost, so even if only statistics are altered, Set the lock. A spoofing attack is an attack on an ID randomly selected by an attacker. By manipulating data to enable the desired attack to occur Because it is an injection attack, there are various statistics, data, and IDs. It's spread out. Spoofing attacks are similar to fuzzing attacks. Likewise, even if data and statistics are falsified except ID, Set the lock. However, if the entire data is altered, the attacker The desired attack function may not occur. follow Therefore, spoofing attack is performed on 8 bytes of data. Select the rest of the part excluding the part that has an attack function. Select and additionally set for modulation."), but fails to explicitly disclose: and determines whether to insert the noise into any portion of the CAN message packet based on an attack type according to subclass information in the type information, when determining to insert the noise into the CAN message packet. 
However, in the same field of endeavor, Diaz discloses: and determines whether to insert the noise into any portion of the CAN message packet based on an attack type according to subclass information in the type information, when determining to insert the noise into the CAN message packet (Diaz: Section 5.2.2 states, "The attack will only be active for specific packet types.").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kim as modified by Lin and include the above limitation with the teaching of Diaz since it "enables the simulation of selective attacks" (Diaz: Section 5.2.2.) and “is able to model, simulate and estimate the impact of attacks over different kinds of networks. It evaluates the node’s software behavior under attack and enables the development of attack-aware embedded software” (Diaz, p.3, 4th paragraph).

Regarding claim 8, Kim as modified by Lin and Diaz discloses: The adversarial attack apparatus of claim 7, wherein the preprocessing module inserts the noise into data and statistical information, except for an arbitration ID in the CAN message packet, when the attack type is a flooding attack, inserts the noise into the data and the statistical information, except for the arbitration ID in the CAN message packet, when the attack type is a fuzzing attack, inserts the noise into only the statistical information in the CAN message packet, when the attack type is a relay attack, and inserts the noise into the data and the statistical information, except for the arbitration ID in the CAN message packet, the noise being inserted into only a portion of the data, when the attack type is a spoofing attack (Kim: Section 3.4 states, "Because flooding attacks are attacks that transmit high-priority IDs in large quantities, Random Forest determines that the current Arbitration ID and statistical features are important. 
To prevent flooding attacks from losing their meaning, configure the current data and statistics, excluding the current ID, to be altered. Fuzzing attacks are attacks that inject random data into randomly selected IDs, so statistics and current data features are considered important. Therefore, set to falsify the current data and statistics except for the selected ID part. Replay attacks are attacks that extract normal CAN packets that were flowing for a certain period of time and re-inject them, so statistics are considered the most important. Replay attacks have the limitation that if the ID or data is altered and changed, the meaning of the replay attack may be lost, so even if only statistics are altered, Set the lock. A spoofing attack is an attack on an ID randomly selected by an attacker. By manipulating data to enable the desired attack to occur Because it is an injection attack, there are various statistics, data, and IDs. It's spread out. Spoofing attacks are similar to fuzzing attacks. Likewise, even if data and statistics are falsified except ID, Set the lock. However, if the entire data is altered, the he desired attack function may not occur. Therefore, spoofing attack is performed on 8 bytes of data. Select the rest of the part excluding the part that has an attack function. Select and additionally set for modulation." Examiner's note: the claim recites a "relay" attack and the reference recites a "replay" attack but the specification and the article describe them to have the same function and are depicted to have the same feature importance using the same image in Fig. 6 of both.). 
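For readers less familiar with the technology, the per-attack-type noise rules recited in claims 7 and 8 (as the Office Action maps them onto Kim Section 3.4) can be sketched as follows. This is an illustrative toy, not code from the application or the references; the packet layout, field names, and noise source are assumptions made here for clarity:

```python
# Sketch of the claimed per-attack-type noise insertion:
#   flooding / fuzzing: noise into data and statistics (arbitration ID untouched)
#   relay (replay):     noise into statistics only
#   spoofing:           statistics, plus only a portion of the data bytes
import random

NOISE_RULES = {
    "flooding": {"data": "all",     "statistics": True},
    "fuzzing":  {"data": "all",     "statistics": True},
    "relay":    {"data": None,      "statistics": True},
    "spoofing": {"data": "partial", "statistics": True},
}

def insert_noise(packet, subclass, rng=random.random):
    """Return a copy of `packet` with noise in [0, 1] added per attack subclass."""
    rule = NOISE_RULES[subclass]
    out = dict(packet)
    if rule["statistics"]:
        out["statistics"] = [min(1.0, s + rng()) for s in packet["statistics"]]
    if rule["data"] == "all":
        out["data"] = [min(1.0, d + rng()) for d in packet["data"]]
    elif rule["data"] == "partial":
        # Leave the (assumed) attack-function bytes intact; modulate the rest.
        keep = packet.get("attack_bytes", set())
        out["data"] = [d if i in keep else min(1.0, d + rng())
                       for i, d in enumerate(packet["data"])]
    # The arbitration ID is never modulated under any of the four rules.
    return out
```

Under this reading, a relay-attack packet keeps its ID and data untouched so the replayed frame retains its attack function, and only its timing statistics are perturbed.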
Pertinent Prior Art

The prior art made of record and not relied upon that is considered pertinent to applicant’s disclosure includes: Bhonsle (US 20210218757 A1), which provides “embodiments for transferring knowledge of intrusion signatures derived from a number of software-defined data centers (SDDCs), each of which has an intrusion detection system (IDS) with a convolutional neural network (CNN) to a centralized neural network.”

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHREYAJ RAM BHANDARI whose telephone number is (571)272-0727. The examiner can normally be reached 7:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ali Shayanfar, can be reached at (571)270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHREYAJ RAM BHANDARI/
Examiner, Art Unit 2434

/NOURA ZOUBAIR/
Primary Examiner, Art Unit 2434
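For orientation on the technology underlying the claim 9 and claim 10 rejections, the generator / black-box IDS / discriminator data flow the examiner cites (Kim Section 3.3; Lin Section 2.3) can be sketched as a small Python toy. The threshold "IDS" and feature vectors below are invented placeholders, not anything from the record; the point illustrated is that the discriminator trains on the IDS's predicted labels rather than ground truth:

```python
# Toy sketch of the cited data flow: the black-box IDS labels both the
# generator's adversarial output and normal packets, and that labeled pool
# becomes the discriminator's training set (per Lin, Section 2.3).

def black_box_ids(packet):
    """Stand-in black-box IDS: flags a packet whose mean feature exceeds 0.5."""
    return "attack" if sum(packet) / len(packet) > 0.5 else "normal"

def build_discriminator_set(adversarial, normal):
    """Pool generator output with normal packets; label with IDS predictions."""
    return [(pkt, black_box_ids(pkt)) for pkt in adversarial + normal]

adversarial = [[0.2, 0.3, 0.1], [0.9, 0.8, 0.7]]  # generator output (toy)
normal = [[0.1, 0.2, 0.2]]                         # normal CAN features (toy)
training_set = build_discriminator_set(adversarial, normal)
# The first adversarial packet is labeled "normal" by the IDS, i.e. it
# evades detection, which is what the generator is trained toward.
```

In the schemes described by Kim and Lin, the generator's loss rewards adversarial packets the IDS labels "normal", while the discriminator learns to imitate the black-box IDS from these predicted labels.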

Prosecution Timeline

Nov 05, 2024
Application Filed
Feb 18, 2026
Non-Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
