Prosecution Insights
Last updated: April 18, 2026
Application No. 18/215,028

SLICING LAYERS OF MACHINE LEARNING MODELS ACROSS DISTRIBUTED SYSTEMS

Final Rejection — §101, §103
Filed
Jun 27, 2023
Examiner
WHITAKER, ANDREW B
Art Unit
3629
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
International Business Machines Corporation
OA Round
2 (Final)
Grant Probability: 19% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 9m
Grant Probability with Interview: 38%

Examiner Intelligence

Career Allow Rate: 19% — grants only 19% of cases (103 granted / 553 resolved; -33.4% vs TC avg)
Interview Lift: +19.2% across resolved cases with an interview
Typical Timeline: 4y 9m average prosecution; 57 applications currently pending
Career History: 610 total applications across all art units
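
The headline rate is a simple ratio of the page's own figures. A minimal Python sketch (values hardcoded from this page; rounding behavior assumed to match the dashboard's) reproduces it:

```python
# Career figures for this examiner, as reported above.
granted, resolved = 103, 553

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 18.6%, displayed as 19%

# The page reports -33.4% vs the Tech Center average, which implies a
# TC-wide baseline of roughly 52%.
tc_avg = allow_rate + 0.334
print(f"Implied TC average: {tc_avg:.1%}")      # ~52.0%
```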

Statute-Specific Performance

§101: 34.1% (-5.9% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 553 resolved cases.
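
Editorial aside: every per-statute delta above implies the same Tech Center baseline. A short sketch (page values hardcoded; interpreting "X% / -Y% vs TC avg" as rate and rate-minus-baseline is an assumption) makes the arithmetic explicit:

```python
# Statute-specific rates and deltas vs Tech Center average, from this page.
stats = {
    "§101": (0.341, -0.059),
    "§103": (0.385, -0.015),
    "§102": (0.111, -0.289),
    "§112": (0.105, -0.295),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # every row implies the same ~40% TC baseline
    print(f"{statute}: {rate:.1%} vs implied TC average of {tc_avg:.1%}")
```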

Office Action

§101, §103
DETAILED ACTION

Status of the Claims

The following is a non-final Office Action in response to claims filed 27 June 2023. Claims 1-20 are pending. Claims 1-20 have been examined. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 27 June 2023 has been considered by the Examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to a process (an act, or series of acts or steps), a machine (a concrete thing, consisting of parts, or of certain devices and combination of devices), and a manufacture (an article produced from raw or prepared materials by giving these materials new forms, qualities, properties, or combinations, whether by hand labor or by machinery). Thus, each of the claims falls within one of the four statutory categories (Step 1).

The claims recite a method (process), a computer program product, and a system; however, the claims recite processing and satisfying a user request using a machine learning model, which is an abstract idea of a mental process, as well as the abstract idea of performing computations in accordance with a mathematical formula on that data. The limitations of "processing a user request using a machine learning model having a plurality of layers, by: using information received at a central compute location to determine a first subset of the layers in the machine learning model, and a second subset of the layers in the machine learning model; causing data corresponding to the user request to be processed using the first subset of layers at an edge compute location; in response to receiving a result from the first subset of layers at the edge compute location, causing the result to be processed using the second subset of layers at the central compute location; and satisfying the user request by outputting a result of the processing by the second subset of layers," as drafted, is a process that, under its broadest reasonable interpretation, covers a mental process—concepts performed in the human mind (including an observation, evaluation, judgment, opinion)—or mathematical concepts—mathematical relationships, mathematical formulas or equations, mathematical calculations—but for the recitation of generic computer components (Step 2A Prong 1). That is, other than reciting "A computer-implemented method, comprising... using a machine learning model" (or "A computer program product, comprising a computer readable storage medium having program instructions embodied therewith, the program instructions readable by a processor, executable by the processor, or readable and executable by the processor, to cause the processor to:" in claim 11, or "A system, comprising: a processor; and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor, the logic being configured to:" in claim 19), nothing in the claim element precludes the step from practically being performed in the mind or from falling within the mathematical concept grouping.
For example, but for the "A computer-implemented method, comprising... using a machine learning model" language, "processing," "using," "causing," "causing," and "satisfying" in the context of this claim encompass the user manually processing user requests by balancing queues or loads, which is a mental process, or a mathematical concept of using mathematical concepts to assist in processing user requests. However, if possible, the Examiner should consider the limitations together as a single abstract idea rather than as a plurality of separate abstract ideas to be analyzed individually. "For example, in a claim that includes a series of steps that recite mental steps as well as a mathematical calculation, an examiner should identify the claim as reciting both a mental process and a mathematical concept for Step 2A, Prong One to make the analysis clear on the record." MPEP 2106.04, subsection II.B. Under such circumstances, however, the Supreme Court has treated such claims in the same manner as claims reciting a single judicial exception. Id. (discussing Bilski v. Kappos, 561 U.S. 593 (2010)). Here, the limitations are considered together as a single abstract idea for further analysis. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitations as a mathematical concept, while some of the limitations may be performed in the mind after certain limitations are performed, but for the recitation of generic computer components, then it falls within the grouping of abstract ideas (Step 2A, Prong One: YES). Accordingly, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application (Step 2A Prong Two). In particular, the claim only recites one additional element: using a machine learning model and a processor (claims 11 and 19) to perform the steps. The machine learning model and processor in the steps are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of processing data requests) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Specifically, the claims amount to nothing more than an instruction to apply the abstract idea using a generic computer, or invoke computers as tools by adding the words "apply it" (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea; see MPEP 2106.04(d)(I) discussing MPEP 2106.05(f). The recitation of "machine learning model" in the limitations also merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element "machine learning model" limits the identified judicial exceptions, this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). Accordingly, the combination of these additional elements does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea, even when considered as a whole (Step 2A Prong Two: NO). The claim does not include a combination of additional elements that are sufficient to amount to significantly more than the judicial exception (Step 2B).
As discussed above with respect to integration of the abstract idea into a practical application (Step 2A Prong 2), the combination of additional elements of using a machine learning model and a processor to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claim. As such, the claims are not patent eligible, even when considered as a whole (Step 2B: NO).

Claims 2-10, 12-18, and 20 recite additional limitations that further limit the abstract idea previously identified, are still directed towards that abstract idea, and do not add an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1, 11, and 19, these are simply limitations which are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Claims 1-20 are therefore not eligible subject matter, even when considered as a whole.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bernat et al. (US PG Pub. 2021/0397999) and further in view of Klein et al. (US PG Pub. 2022/0239758).
As per claims 1, 11, and 19, Bernat discloses a computer-implemented method; a computer program product, comprising a computer readable storage medium having program instructions embodied therewith, the program instructions readable by a processor, executable by the processor, or readable and executable by the processor, to cause the processor to perform the method; and a system, comprising: a processor; and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor, the logic being configured to perform the method (processor, memory, Bernat ¶46; edge computing, cloud, gateway, network, central office, ¶21 and ¶24; method, ¶102):

processing a user request using a machine learning model having a plurality of layers, by (Example approaches disclosed herein utilize information about the training and/or structure of the machine learning model, operational statistics about the node, and information about other nodes, to determine which components of the machine learning model should be executed at which location within the edge network. In this manner, example approaches disclosed herein utilize connectivity telemetry (such as bandwidth and/or latency) and compute/power available in the local fog or far edge to determine the best trade-off to which layers of a machine learning model should be executed on the local edge (e.g., the node 410) and which layers should be executed on the near edge (e.g., a remote node) for a particular network topology with known behavior (e.g., compute required per each layer and data bandwidth required between each pair of layers), Bernat ¶41; requests, ¶35):

using information received at a central compute location to determine a first subset of the layers in the machine learning model, and a second subset of the layers in the machine learning model (layers deep within the machine learning model involve more resource intensive tasks than layers earlier in the machine learning model. To that end, it may be more efficient to perform such resource intensive tasks at the remote node than at an edge node. Separating execution of the machine learning model in such a manner may additionally be advantageous as compared to causing execution of the entire machine learning model at the remote node, as an amount of data passed between intermediate (e.g., inner) layers of the machine learning model (e.g., between the third layer 515 and the fourth layer 520) may be smaller in comparison to the input data to an earlier layer in the machine learning model (e.g., an input to the first layer 505). For example, in an image classification scenario where an input image is analyzed to determine if a vehicle is present, an input image may be ten megabytes and data passed between intermediate layers of the machine learning model may be expected to be five megabytes.
Executing a first portion of the machine learning model locally and then transmitting the intermediate data (e.g., five megabytes) effectively reduces the required bandwidth for execution of the machine learning model by the remote node (e.g., as compared to simply requesting execution of the entire machine learning model by the remote node), Bernat ¶42; determine how much time and energy are required for the layers, ¶54; local and remote, ¶82-¶83);

causing data corresponding to the user request to be processed using the first subset of layers at an edge compute location (Example approaches disclosed herein utilize information about the training and/or structure of the machine learning model, operational statistics about the node, and information about other nodes, to determine which components of the machine learning model should be executed at which location within the edge network. In this manner, example approaches disclosed herein utilize connectivity telemetry (such as bandwidth and/or latency) and compute/power available in the local fog or far edge to determine the best trade-off to which layers of a machine learning model should be executed on the local edge (e.g., the node 410) and which layers should be executed on the near edge (e.g., a remote node) for a particular network topology with known behavior (e.g., compute required per each layer and data bandwidth required between each pair of layers), Bernat ¶41; layers at which execution of the machine learning model is to begin, ¶44);

While Bernat discloses the ability to select layers to execute machine learning models for requests (Bernat ¶59) and different portions executed at different nodes, including going from local to remote (Bernat ¶48 and ¶82-¶83), Bernat does not expressly disclose: in response to receiving a result from the first subset of layers at the edge compute location, causing the result to be processed using the second subset of layers at the central compute location; and satisfying the user request by outputting a result of the processing by the second subset of layers.

However, Klein teaches: in response to receiving a result from the first subset of layers at the edge compute location, causing the result to be processed using the second subset of layers at the central compute location; and satisfying the user request by outputting a result of the processing by the second subset of layers (In another example, where heavy machine learning processing is required, for example, a first portion of a client request can be processed by a machine learning core at a CPE layer based on a CPE device's machine learning capabilities and then a second portion of the request can be processed at the cloud layer so that different network devices can share machine learning processing (e.g., in parallel), which also improves network latency and throughput, Klein ¶35).

Both the Klein and Bernat references are analogous in that both are directed towards distributed machine learning computing and modeling. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Klein's ability to process requests in different sequences through the layers in Bernat's system to improve the system and method, with a reasonable expectation that this would result in a layered machine learning model system that is able to offload processing to improve performance and reduce latency. The motivation being that existing technologies also do not adequately route and process client requests that require machine learning processing. Consequently, throughput and network latency are negatively affected. However, because multiple network devices include machine learning cores or modules, the capability information (e.g., machine learning model type) of which is shared across the network to route client requests, throughput and latency are improved. This is because machine learning processing need not happen at one designated device or layer, such as the cloud layer. Further, computer resource characteristics (e.g., CPU, memory) can be determined for each network device to ensure that requests do not get routed to those network devices over some computer resource consumption threshold. For example, an entire client request can be processed at the edge based on a corresponding edge network device's machine learning capabilities being able to completely service the request (Klein ¶35).
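
Editorial note: the claim flow the Examiner maps above — a first subset of layers run at an edge location, the intermediate result forwarded, and a second subset run at a central location — can be made concrete with a short sketch. This is a minimal illustration assuming a PyTorch-style sequential model, not the applicant's or the references' implementation; the model, split point, and tensor sizes are all hypothetical.

```python
# Minimal sketch of layer-sliced ("split") inference. Per the claims,
# the split index would be chosen from information received at the
# central compute location (e.g., bandwidth/latency telemetry);
# here it is simply hardcoded.
import torch
import torch.nn as nn

model = nn.Sequential(               # stand-in for a layered ML model
    nn.Conv2d(3, 16, 3), nn.ReLU(),
    nn.Conv2d(16, 32, 3), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(10),
)

split = 4                            # first subset: layers [0, 4)
edge_layers = model[:split]          # would run at the edge compute location
central_layers = model[split:]       # would run at the central compute location

request = torch.randn(1, 3, 64, 64)  # data corresponding to the user request
intermediate = edge_layers(request)  # "result from the first subset"
# In a real deployment the intermediate tensor would be serialized and sent
# over the network; per Bernat's example it can be much smaller than the raw
# input (e.g., 5 MB vs a 10 MB image), which is the bandwidth argument above.
result = central_layers(intermediate)  # processing by the second subset
print(result.shape)                    # output satisfying the user request
```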
As per claims 2 and 12, Bernat and Klein disclose as shown above with respect to claims 1 and 11. While Bernat discloses the machine learning model as capable of including a convolutional layer (Bernat ¶40), Bernat does not expressly disclose wherein the machine learning model is a neural network. Klein further teaches wherein the machine learning model is a neural network (neural network, Klein ¶48; CNN, ¶97). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Klein's ability to have different types of machine learning processing capabilities in Bernat's system to improve the system and method, with a reasonable expectation that this would result in a layered machine learning model system that is able to offload processing to improve performance and reduce latency. The motivation being that existing technologies also do not adequately route and process client requests that require machine learning processing. Consequently, throughput and network latency are negatively affected. However, because multiple network devices include machine learning cores or modules, the capability information (e.g., machine learning model type) of which is shared across the network to route client requests, throughput and latency are improved. This is because machine learning processing need not happen at one designated device or layer, such as the cloud layer. Further, computer resource characteristics (e.g., CPU, memory) can be determined for each network device to ensure that requests do not get routed to those network devices over some computer resource consumption threshold. For example, an entire client request can be processed at the edge based on a corresponding edge network device's machine learning capabilities being able to completely service the request (Klein ¶35).

As per claims 3, 13, and 20, Bernat and Klein disclose as shown above with respect to claims 1, 11, and 19. Klein further teaches training the neural network using labeled training data; and copying at least some of the layers in the trained neural network to the edge compute location, wherein the layers in the trained neural network copied to the edge compute location include at least the first subset of layers (training, for downstream purposes, Klein ¶80; testing and deployed, ¶120) (Examiner interprets deployment of a testing model as the copying of a model to another location).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Klein's ability to process requests in different sequences through the layers in Bernat's system to improve the system and method, with a reasonable expectation that this would result in a layered machine learning model system that is able to offload processing to improve performance and reduce latency. The motivation being that existing technologies also do not adequately route and process client requests that require machine learning processing. Consequently, throughput and network latency are negatively affected. However, because multiple network devices include machine learning cores or modules, the capability information (e.g., machine learning model type) of which is shared across the network to route client requests, throughput and latency are improved. This is because machine learning processing need not happen at one designated device or layer, such as the cloud layer. Further, computer resource characteristics (e.g., CPU, memory) can be determined for each network device to ensure that requests do not get routed to those network devices over some computer resource consumption threshold. For example, an entire client request can be processed at the edge based on a corresponding edge network device's machine learning capabilities being able to completely service the request (Klein ¶35).

As per claims 4 and 14, Bernat and Klein disclose as shown above with respect to claims 3 and 12. Bernat further discloses comprising: adjusting a number of the trained neural network layers that have been copied to the edge compute location based at least in part on a previously processed user request (While in the illustrated example of FIG. 5 eleven layers are shown using three different types of layers, any number(s) and/or type(s) of layers may additionally or alternatively be used. In general, models trained based on different training data will tend to have different arrangements, types, and/or numbers of layers. Moreover, different types of layers in a machine learning model can have different resource requirements, Bernat ¶40) (Examiner notes the ability to have different numbers of layers based upon resource requirements as the ability to adjust the number of trained layers).

As per claims 5 and 15, Bernat and Klein disclose as shown above with respect to claims 1 and 11. Klein further teaches wherein processing the result using the second subset of layers at the central compute location includes: receiving one or more vectors from a final layer of the first subset of layers at the edge compute location; and inputting the received one or more vectors in an initial layer of the second subset of layers (feature vectors for determining downstream, Klein ¶106-¶108; for use in training, ¶119).

As per claims 6 and 16, Bernat and Klein disclose as shown above with respect to claims 1 and 11. Bernat further discloses wherein the information is used to determine the first and second subsets of layers in real-time as the information is received at the central compute location (real time, Bernat ¶16 and ¶24).

As per claims 7-8 and 17-18, Bernat and Klein disclose as shown above with respect to claims 1 and 11.
Bernat further discloses wherein the edge compute location is connected to the central compute location by a network, wherein the received information includes performance characteristics of the network; and wherein the received information includes throughput characteristics of the edge compute location (edge computing, cloud, gateway, network, central office, Bernat ¶21 and ¶24; The various use cases 205 may access resources under usage pressure from incoming streams, due to multiple services utilizing the Edge cloud. To achieve results with low latency, the services executed within the Edge cloud 110 balance varying requirements in terms of: (a) Priority (throughput or latency) and Quality of Service (QoS) (e.g., traffic for an autonomous car may have higher priority than a temperature sensor in terms of response time requirement; or, a performance sensitivity/bottleneck may exist at a compute/accelerator, memory, storage, or network resource, depending on the application); (b) Reliability and Resiliency (e.g., some input streams need to be acted upon and the traffic routed with mission-critical reliability, whereas some other input streams may tolerate an occasional failure, depending on the application); and (c) Physical constraints (e.g., power, cooling and form-factor). The end-to-end service view for these use cases involves the concept of a service-flow and is associated with a transaction. The transaction details the overall service requirement for the entity consuming the service, as well as the associated services for the resources, workloads, workflows, and business functional and business level requirements. The services executed with the "terms" described may be managed at each layer in a way to assure real time, and runtime contractual compliance for the transaction during the lifecycle of the service. When a component in the transaction is missing its agreed-to SLA, the system as a whole (components in the transaction) may provide the ability to (1) understand the impact of the SLA violation, (2) augment other components in the system to resume the overall transaction SLA, and (3) implement steps to remediate, ¶27-¶28).

As per claim 9, Bernat and Klein disclose as shown above with respect to claim 1. Bernat further discloses wherein the user request is received from a computer in communication with the edge compute location (user equipment, endpoint, Bernat ¶21-¶22).

As per claims 10 and 18, Bernat and Klein disclose as shown above with respect to claims 1 and 11. Klein further teaches wherein processing the user request using the machine learning model includes: using the received information to determine a third subset of the layers in the machine learning model; in response to the result from the first subset of layers being received at a secondary edge compute location, causing the result from the first subset of layers to be processed using the third subset of layers at the secondary edge compute location; and in response to a result from the third subset of layers being received at the central compute location, causing the result from the third subset of layers to be processed using the second subset of layers at the central compute location (route to a third, hierarchical downstream, Klein ¶173).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Klein's ability to process requests in different sequences through the layers in Bernat's system to improve the system and method, with a reasonable expectation that this would result in a layered machine learning model system that is able to offload processing to improve performance and reduce latency. The motivation being that existing technologies also do not adequately route and process client requests that require machine learning processing. Consequently, throughput and network latency are negatively affected. However, because multiple network devices include machine learning cores or modules, the capability information (e.g., machine learning model type) of which is shared across the network to route client requests, throughput and latency are improved. This is because machine learning processing need not happen at one designated device or layer, such as the cloud layer. Further, computer resource characteristics (e.g., CPU, memory) can be determined for each network device to ensure that requests do not get routed to those network devices over some computer resource consumption threshold. For example, an entire client request can be processed at the edge based on a corresponding edge network device's machine learning capabilities being able to completely service the request (Klein ¶35).

Furthermore, one of ordinary skill, before the effective filing date of the claimed invention, would have found it obvious to repeat the processes in claims 1 and 11 for a third or additional subset of the layers because duplication is obvious, MPEP 2144.04.VI.B. The duplication of parts (or steps) has no patentable significance unless a new and unexpected result is produced. The Examiner finds no evidence that performing the processes in claims 1 and 11 for a third or additional subset of the layers would produce new and unexpected results as compared to performing the processes in claims 1 and 11 for only a first and second subset of the layers.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (additional art can be located on the PTO-892):
Nimmagadda et al. (US PG Pub. 2022/0222584) Heterogeneous compute-based artificial intelligence model partitioning.
Ishizaki (US PG Pub. 2022/0114442) Machine learning apparatus and method for machine learning.
Ravi (US PG Pub. 2023/0267372) Hyper-efficient, privacy-preserving artificial intelligence system.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to ANDREW B WHITAKER, whose telephone number is (571) 270-7563. The examiner can normally be reached M-F, 8am-5pm, EST. If attempts to reach the examiner by telephone are unsuccessful, the Examiner's supervisor, Lynda Jasmin, can be reached at (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form

/ANDREW B WHITAKER/
Primary Examiner, Art Unit 3629
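
Editorial note: claims 10 and 18, rejected above over Klein ¶173, extend the two-location flow with a third subset of layers at a secondary edge compute location. A self-contained sketch under the same hypothetical assumptions as the earlier one (model architecture and split points invented for illustration, not taken from the record):

```python
# Three-subset flow of claims 10 and 18: first subset at the edge,
# third subset at a secondary edge, second subset at the central location.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),   # first subset  -> edge location
    nn.Linear(64, 32), nn.ReLU(),    # third subset  -> secondary edge
    nn.Linear(32, 10),               # second subset -> central location
)

edge_part, secondary_part, central_part = model[:2], model[2:4], model[4:]

x = torch.randn(1, 128)        # data corresponding to the user request
r1 = edge_part(x)              # result from the first subset (edge)
r3 = secondary_part(r1)        # processed by the third subset (secondary edge)
out = central_part(r3)         # processed by the second subset (central)
print(out.shape)               # result satisfying the user request
```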

Prosecution Timeline

Jun 27, 2023
Application Filed
Feb 05, 2026
Non-Final Rejection — §101, §103
Mar 13, 2026
Examiner Interview Summary
Mar 13, 2026
Applicant Interview (Telephonic)
Mar 16, 2026
Response Filed
Apr 09, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600221
REAL ESTATE NAVIGATION SYSTEM FOR REAL ESTATE TRANSACTIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12530700
SYSTEM AND METHOD FOR DETERMINING BLOCKCHAIN-BASED CRYPTOCURRENCY CORRESPONDING TO SCAM COIN
2y 5m to grant Granted Jan 20, 2026
Patent 12443963
License Compliance Failure Risk Management
2y 5m to grant Granted Oct 14, 2025
Patent 12299696
METHODS AND SYSTEMS FOR PROCESSING SMART GAS REGULATORY INFORMATION BASED ON REGULATORY INTERNET OF THINGS
2y 5m to grant Granted May 13, 2025
Patent 12282962
DISTRIBUTED LEDGER FOR RETIREMENT PLAN INTRA-PLAN PARTICIPANT TRANSACTIONS
2y 5m to grant Granted Apr 22, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 19%
With Interview: 38% (+19.2%)
Median Time to Grant: 4y 9m
PTA Risk: Moderate
Based on 553 resolved cases by this examiner. Grant probability derived from career allow rate.
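
Editorial aside: the with-interview figure appears to be the base grant probability plus the interview lift reported above. A one-line sketch (the additive combination is an assumption about how the dashboard derives it) checks the arithmetic:

```python
# Projection figures from this page; additive combination is assumed.
base_grant_prob = 0.19   # derived from the career allow rate
interview_lift = 0.192   # +19.2 points for resolved cases with an interview

print(f"With interview: {base_grant_prob + interview_lift:.0%}")  # 38%
```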
