Prosecution Insights
Last updated: April 19, 2026
Application No. 18/225,420

DYNAMIC TUNING OF PRE-INITIALIZATION ENVIRONMENT PROVISIONING AND MANAGEMENT

Status: Non-Final Office Action (§103)
Filed: Jul 24, 2023
Examiner: ESPANA, CARLOS ALBERTO
Art Unit: 2199
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 74%, above average (17 granted / 23 resolved; +18.9% vs TC avg)
Interview Lift: strong, +17.5% among resolved cases with interview
Typical Timeline: 3y 6m average prosecution; 29 applications currently pending
Career History: 52 total applications across all art units

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 23 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) were submitted on 07/24/2023 and 12/13/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 9-14, 16-17, 19-21 and 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Doshi (US 20180025289 A1) in view of Subramanian (US 11507430 B2) and Zhu (US 20230376800 A1).

Regarding claim 1, Doshi teaches: A computer-implemented method comprising (claim 1: "A method for resource provisioning using work classification, comprising:"); generating the performance-based index table for the workload, wherein the performance-based index is based on a memory efficiency ([0034] The various aspects may monitor or observe various system metrics as software application work items execute in order to properly classify work items into one or more work groups. The computing device may monitor computing device metrics including one or more of graphical processing unit (GPU) frequency range, central processing unit (CPU) frequency for a cluster of little CPUs, CPU frequency for a cluster of big CPUs, CPU utilization of the cluster of little CPUs, CPU utilization of the cluster of big CPUs, and/or advanced RISC machine (ARM) instructions. These features are for illustration purposes and are not intended to be limiting. Additional features may be monitored according to various aspects. In most SoCs, there are many more processing blocks apart from the CPU and GPU. For example, SoCs have video processing blocks, one or more modems, a Wi-Fi block, a Bluetooth block, etc. To make the performance provisioning model more accurate, various aspects may expose and add features in addition to the examples listed above. One way to add more features is to apply similar performance provisioning to processing blocks that also have discrete performance steps and are provisioned using utilization-based metrics. Even for the main subsystems, like the CPU and GPU, there are additional metrics that may be monitored, like the number of inputs/outputs (IOs) initiated, cache utilization, cache hit/miss rates, Dial on Demand Routing (DDR) traffic, number of instances of certain types of load/store instructions, time-consuming multiplication/division instructions, etc., which may improve accuracy of the model.
[0036] For servers, which receive power at all times (as compared to battery-powered devices), performance-first provisioning enables an incoming request to be processed as fast as possible, which is most important for providing service to client devices. However, in mobile devices that are battery powered, consideration of battery power usage is more important than fast-as-possible processing. Thus, the various aspects adjust performance provisioning for requests to meet acceptable processing rate targets that, though slower than performance-first provisioning, do not interfere with normal functioning of the mobile device or result in a user-perceptible [reduction] in performance.)

Doshi also teaches updating input to the model in response to monitoring a traffic of requests and collecting runtime data of the workload, wherein updating the model comprises adjusting the model for provisioning the pre-initialization environment ([0088] FIGS. 5A-5B illustrate process flow diagrams of methods 500, 550 for updating or retraining a work classification model for use in performance provisioning of work processing in any application in accordance with various aspects. The methods 500, 550 may be implemented on a computing device (e.g., 100) and carried out by a processor (e.g., 110) in communication with the communications subsystem (e.g., 130), and the memory (e.g., 125). See also [0090]-[0099].)

Doshi does not appear to teach: accepting a request from a group of applications to generate a performance-based index table for a workload based at least in part on a feature of the applications. However, Subramanian teaches (col. 2, lines 9-20): Workload requests requested to be performed by an edge platform or cloud resources can be sent directly to the pod resource manager. The pod resource manager can provide the request to an accelerator to run the workload request through an Artificial Intelligence (AI) model so that the AI model suggests a resource configuration for the same. Use of an AI model can allow for smarter and faster resource management and allocation decisions. The AI model does not have to be trained as it will be configured to continuously learn on-the-go using, for example, reinforcement learning that develops based on rewards or penalties from resources it has suggested for use. (Col. 2, lines 44-65:) Various embodiments provide for an accelerated pod resource manager to receive workload requests from one or more client devices. For example, workload requests can be from applications or other software run by one or more client devices made to an edge computing system to perform computations or use or access memory or storage. The edge computing system can be located physically proximate to at least some of the client devices. In addition, or alternatively, the edge computing system can be located physically proximate to the local or wide area network access point(s) or device(s) that provide wireless or wired communications between the client devices and the edge computing system. In some examples, the pod resource manager can leverage AI or machine learning (ML) to determine a computing resource allocation to allocate for a workload request. A reinforcement learning scheme can be used to generate resource allocation suggestions based on positive or negative rewards from prior resource allocation suggestions. When a resource allocation suggestion is applied, its associated KPI is compared to the previous workload run's performance (e.g., KPI) and a reward is calculated and accumulated in a workload table.

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Doshi and Subramanian before them, to include Subramanian's request handling and workload performance table in Doshi's workload classification using machine learning.
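Editor's note: the reward-table mechanism the Subramanian passage describes (each run's KPI compared to the previous run, with the reward accumulated per workload) can be sketched in a few lines. This is an illustrative stand-in, not Subramanian's implementation; the class and method names are invented.

```python
class WorkloadTable:
    """Accumulates a per-workload reward from run-to-run KPI changes.

    Hypothetical sketch: names and the reward formula (reward = KPI
    improvement over the previous run) are assumptions, not Subramanian's.
    """

    def __init__(self):
        self.entries = {}  # workload id -> {"last_kpi": ..., "reward": ...}

    def record_run(self, workload_id, kpi):
        # The first run establishes the baseline; later runs earn a reward
        # equal to the KPI improvement (negative when the KPI regresses).
        entry = self.entries.setdefault(workload_id,
                                        {"last_kpi": None, "reward": 0.0})
        if entry["last_kpi"] is not None:
            entry["reward"] += kpi - entry["last_kpi"]
        entry["last_kpi"] = kpi
        return entry["reward"]

table = WorkloadTable()
table.record_run("job-A", 0.80)          # baseline run, no reward yet
print(table.record_run("job-A", 0.90))   # KPI improved, reward grows
```

A resource manager of the kind quoted above would consult the accumulated reward when deciding whether to keep or revise an allocation suggestion.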
One would have been motivated to make such a combination to more efficiently generate and manage performance indices for multiple workloads and applications.

Doshi does not appear to teach: building a label feature by analyzing a static program feature of the application in the group of applications and the performance-based index table. However, Zhu teaches ([0082]): Clusterer 130 may cluster runtime distributions from historic job runs to create a set (e.g., classes) of runtime distributions for which new/proposed jobs may be classified. Clustering may support estimation of probabilities of outliers without predicting individual job runtimes directly. Prediction may associate a proposed job with a runtime distribution class that it most likely belongs to. Runtime distributions may be single mode or multimode. A set of metrics may be selected (e.g., determined, identified, defined) to depict each type of distribution (e.g., whether single mode or multimode) and to quantify the variation in numeric terms, which may be understood by user(s) 102 and operator admin (e.g., visualized in a GUI). ([0095]) Predictor(s) 134 is/are configured to predict the runtime distribution shape for a proposed job based on information that is available at compile time. Predictor(s) 134 may map each proposed job (e.g., a job instance) to a particular clustered runtime distribution shape class (e.g., runtime distribution shape classes labeled 0R-7R and/or 0D-7D as shown by example in FIGS. 2A and 2B).

Zhu also teaches constructing, using clustering algorithms, a model for provisioning a pre-initialization environment using a label feature ([0083] Clusterer 130 may be configured to perform a clustering analysis. Clusterer 130 may receive, as inputs to the clustering analysis, the PMF probabilities of each bin of each histogram representing a runtime distribution for a job group, for example, rather than the job features (e.g., input size, etc.). A clustering analysis may generate a representative (e.g., reference or "typical") distribution shape representing multiple histograms for multiple recurring jobs (e.g., using Table 1, dataset D1). Histograms for jobs with a specified number of runtime instances (e.g., more than 20 occurrences) may be included in a clustering analysis. A greater number of instances may provide a more accurate estimation of runtime distribution. Clusterer 130 may use a machine learning (ML) algorithm (e.g., an unsupervised ML algorithm) to cluster the distributions of normalized runtimes across job groups. [0084] Clusterer 130 may implement runtime distribution clustering based on the histogram bin size and range, a clustering algorithm, a number of clusters, and smoothing histograms.)

Zhu also teaches loading, using the label feature, the applications in the group of applications into the pre-initialization environment ([0055] Historical job info 122 may indicate sources of runtime variation that may be useful to predict sources of runtime variation in proposed jobs. Runtimes of job instances within each job group may vary, for example, due to one or more of the following: intrinsic characteristics, resource allocation, physical cluster environment, etc. [0059] Runtimes may vary based on a physical cluster environment, which may include the availability of spare tokens and/or the load on the individual machines. There may be significant differences in CPU utilization of machines with different SKUs in a cluster of compute nodes among runtime server(s) 108. For example, CPU utilization by SKU may vary from 2% to 33% with an average of 17% for a first SKU while varying from 10% to 100% with an average of 68% for a second SKU. Higher utilization (e.g., load) may cause more contention for shared resources. A larger range of loads may increase runtime variation. See also [0076].)

Zhu also teaches introducing a selection policy for a switch in a pre-initialization environment in an application to balance usage of at least one resource ([0025] Job runtime distribution prediction methods (e.g., using ML models) may predict runtime distributions for proposed jobs and (e.g., also) prospective (e.g., what-if) scenarios, for example, by analyzing the impact of resource allocation, scheduling, and physical cluster provisioning decisions on a job's runtime consistency and predictability. Operators and/or users may receive predicted runtime distributions, explanation of sources of runtime variance and/or proposed edits to decrease runtime variance.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Doshi and Zhu before them, to include Zhu's clustering classification methods in Doshi's workload classification using machine learning. One would have been motivated to make such a combination to improve the accuracy of Doshi's classification groups. This combination applies a known clustering technique to an existing classification system.

Regarding claim 2, Doshi teaches: The computer-implemented method of claim 1, wherein the resource of the pre-initialization environment comprises space, memory, and speed ([0034] To make the performance provisioning model more accurate, various aspects may expose and add features in addition to the examples listed above. One way to add more features is to apply similar performance provisioning to processing blocks that also have discrete performance steps and are provisioned using utilization-based metrics.
Even for the main subsystems, like the CPU and GPU, there are additional metrics that may be monitored, like the number of inputs/outputs (IOs) initiated, cache utilization, cache hit/miss rates, Dial on Demand Routing (DDR) traffic, number of instances of certain types of load/store instructions, time-consuming multiplication/division instructions, etc., which may improve accuracy of the model. [0041] Some aspect methods may include creating a machine learning model using a combination of orthogonal system metrics (i.e., computing device metrics). For example, the work group classification models may be built using machine learning techniques as applied to multiple system metrics of a computing device. The metrics may include graphical processing unit (GPU) frequency range, central processing unit (CPU) frequency for a cluster of little CPUs, CPU frequency for a cluster of big CPUs, CPU utilization of the cluster of little CPUs, CPU utilization of the cluster of big CPUs, and advanced RISC machine (ARM) instructions. Many more features or classes may be used in various aspects. Each of the possible classes may be further correlated (or compared) to GPU usage and ARM instruction calls. These metrics may be evaluated to obtain numerical values, which are then subjected to a polynomial function. The resulting polynomial expressions (e.g., system metric expressions) may be mapped to an N-dimensional graph in which N is defined by the number of orthogonal system metrics, and as such, define borders between classification groups. The classification groups may be spatial regions within an N-dimensional space in which the boundaries are defined by "N" equations.)

Regarding claim 3, Doshi teaches: The computer-implemented method of claim 1, further comprising providing a manager to support scaling of the pre-initialization environment based on collection of runtime data of the workload ([0075] FIG. 3 illustrates operations of the resource management system 160 according to various embodiments. In general, the resource management system 160 first profiles and clusters 302 a dynamic workload 304 during a learning phase, resulting in a clustering 306 of the workload 304. The resource management system 160 then performs a tuning process 308 wherein the workload clusters are mapped to virtualized resources, and the resulting resource allocation map 310 is stored by the resource management system 160. During runtime, either periodically or on-demand, the resource management system 160 profiles and classifies the workload 312, and then reuses previous resource allocation decisions 314 to allow the service to quickly adapt to workload changes. Details of these various operations are further discussed herein.)

Regarding claim 9, Doshi teaches: The computer-implemented method of claim 1, wherein a static program feature of the application comprises sorting applications using an intensity of input/output operations of the application, a memory efficiency of the application and an actual response time of the application ([0189] In block 1304, a processor (e.g., 110) of the computing device (e.g., 100) may identify KPI. These indicators may have been previously identified during method 700 of FIG. 7, or may be previously unknown, as in new work groups. The KPI may be behaviors that provide an indication of the quality of performance for an executing work item. Each work group may have different KPI. For example, game applications may have visual lag and input response time KPI. In block 1306, a processor (e.g., 110) of the computing device (e.g., 100) may monitor KPI of the executing work item. See also [0034].)

Regarding claim 10, Doshi teaches: The computer-implemented method of claim 1, further comprising predicting, using an artificial intelligence algorithm, a usage of the pre-initialization environments by applying a program feature and a resource across the program features ([0123] The various aspects may implement supervised machine learning techniques to generate a set of classification model equations that may be used to categorize work items into classes based on their performance provisioning needs. In a supervised machine learning scheme, the work classification model may be trained on a given set of known inputs and their corresponding outputs, such as the sample workloads and identified acceptable performance ranges. Examples of machine learning algorithms suitable for use with the various aspects include multinomial logistical regression, recursive neural networks, support vector machines, etc.)

Regarding claim 11, the claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons using similar teachings and rationale.

Regarding claim 12, Doshi teaches: The computer program product of claim 11, wherein the stored program instructions are stored in a computer readable storage device in a data processing system, and wherein the stored program instructions are transferred over a network from a remote data processing system ([0011] Further aspects include a computing device having one or more processors configured with processor-executable instructions to perform operations of the methods summarized above. Further aspects include a computing device having means for performing functions of the methods summarized above. Further aspects include a non-transitory processor-readable storage medium on which is stored processor-executable instructions configured to cause a processor of a computing device to perform operations of the methods summarized above.)

Regarding claim 13, the claim recites similar limitations as corresponding claim 2 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 14, the claim recites similar limitations as corresponding claim 3 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 16, the claim recites similar limitations as corresponding claim 10 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 17, the claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 19, the claim recites similar limitations as corresponding claim 10 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 20, the claim recites similar limitations as corresponding claim 3 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 21, the claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 23, the claim recites similar limitations as corresponding claim 10 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 24, the claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 25, the claim recites similar limitations as corresponding claim 10 and is rejected for similar reasons using similar teachings and rationale.

Claims 4-8, 15, 18 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Doshi (US 20180025289 A1) in view of Subramanian (US 11507430 B2) and Zhu (US 20230376800 A1), and further in view of Vasic (US 20130185729 A1).
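Editor's note: the work-group classification the Doshi citations above describe (a supervised model over orthogonal system metrics, [0041] and [0123]) can be made concrete with a minimal nearest-centroid sketch. This is a hedged stand-in: Doshi names logistic regression, SVMs, and neural networks, and the metric names, classes, and training data below are invented for illustration.

```python
# Minimal nearest-centroid classifier over system-metric vectors, standing in
# for the supervised work-item classifiers described above. Feature layout
# ([cpu_util, gpu_freq_ghz]) and work-group labels are assumptions.
import math

def train_centroids(samples):
    """samples: list of (metric_vector, work_group) -> centroid per group."""
    sums, counts = {}, {}
    for vec, group in samples:
        acc = sums.setdefault(group, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[group] = counts.get(group, 0) + 1
    return {g: [v / counts[g] for v in acc] for g, acc in sums.items()}

def classify(centroids, vec):
    """Assign a work item's metric vector to the nearest work-group centroid."""
    return min(centroids, key=lambda g: math.dist(centroids[g], vec))

# Toy training set: [cpu_util, gpu_freq_ghz] per observed work item.
training = [([0.9, 1.2], "game"), ([0.8, 1.1], "game"),
            ([0.2, 0.1], "background"), ([0.1, 0.2], "background")]
centroids = train_centroids(training)
print(classify(centroids, [0.85, 1.0]))  # high CPU/GPU usage -> "game"
```

The centroid boundaries here play the role of the "borders between classification groups" in Doshi's N-dimensional metric space, with N fixed by the number of metrics monitored.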
Regarding claim 4, Doshi does not appear to teach: The computer-implemented method of claim 3, wherein scaling comprises at least one of inserting, updating, and deleting the pre-initialization environment. However, Vasic teaches ([0164]): We further compared the resource management system with an existing autoscaling platform called RightScale, reproduced based on publicly available information. The RightScale algorithm reacts to workload changes by running an agreement protocol among the virtual instances. If the majority of VMs report utilization that is higher than the predefined threshold, the scale-up action is taken by increasing the number of instances (by two at a time, by default). In contrast, if the instances agree that the overall utilization is below the specified threshold, the scaling down is performed (decrease the number of instances by one, by default). To ensure that the comparison is fair, we ran the Cassandra benchmark, which is CPU and memory intensive, as assumed by the RightScale default configuration.

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Vasic and Doshi before them, to include Vasic's autoscaling techniques in Doshi's workload classification using machine learning. One would have been motivated to make such a combination to more efficiently increase and decrease the number of execution environments by inserting, updating or deleting based on runtime data.

Regarding claim 5, Vasic teaches: The computer-implemented method of claim 1, wherein adjusting the model for provisioning the pre-initialization environment comprises increasing a size of the pre-initialization environment ([0164], quoted above regarding claim 4). The same motivation as for claim 4 applies.

Regarding claim 6, Vasic teaches: The computer-implemented method of claim 1, wherein adjusting the model for provisioning the pre-initialization environment comprises decreasing a size of the pre-initialization environment ([0164], quoted above regarding claim 4). The same motivation as for claim 4 applies.

Regarding claim 7, Vasic teaches: The computer-implemented method of claim 1, wherein adjusting the model for provisioning the pre-initialization environment comprises deleting the pre-initialization environment ([0150] We evaluated the resource management system 160 by running widely-used benchmarks on Amazon's EC2 cloud platform. We ran all our experiments within an EC2 cluster of 20 virtual machines (both clients and servers were running on EC2). To demonstrate the resource management system's ability to scale out, we varied the number of active instances from 2 to 10 as the workload intensity changes, but resorted only to EC2's large instance type. In contrast, we demonstrated its ability to scale up by varying the instance type from large to extra-large, while keeping the number of active instances constant. [0151] To focus on the resource management system 160 rather than on the idiosyncrasies of EC2, our scale out experiments assume that the VM instances to be added to a service have been pre-created and stopped. In our scale up experiments, we also pre-create VM instances of both types (large and extra large). Pre-created VMs are ready for instant use, except for a short warm-up time. In all cases, state management across VM instances, if needed, is the responsibility of the service itself, not the resource management system 160.) The same motivation as for claim 4 applies.

Regarding claim 8, Vasic teaches: The computer-implemented method of claim 1, wherein adjusting the model for provisioning the pre-initialization environment comprises creating the pre-initialization environment ([0164], quoted above regarding claim 4). The same motivation as for claim 4 applies.

Regarding claim 15, the claim recites similar limitations as corresponding claim 4 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 18, the claim recites similar limitations as corresponding claim 4 and is rejected for similar reasons using similar teachings and rationale.
Regarding claim 22, the claim recites similar limitations as corresponding claim 4 and is rejected for similar reasons using similar teachings and rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARLOS A ESPANA, whose telephone number is (703) 756-1069. The examiner can normally be reached Monday-Friday, 8 a.m.-5 p.m. EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, LEWIS BULLOCK JR, can be reached at (571) 272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.A.E./
Examiner, Art Unit 2199
/LEWIS A BULLOCK JR/
Supervisory Patent Examiner, Art Unit 2199
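Editor's note: the RightScale-style agreement protocol quoted from Vasic [0164] in the claim 4-8 rejections reduces to a simple voting rule, sketched below. The scale-up step of two and scale-down step of one are the defaults named in the quote; the utilization thresholds and other parameter values are assumptions for illustration.

```python
# Sketch of the agreement protocol described in Vasic's [0164] quote:
# if a majority of VM instances report utilization above the high threshold,
# add two instances; if all agree utilization is below the low threshold,
# remove one. Threshold values (high=0.8, low=0.3) are assumed defaults.

def autoscale_step(utilizations, n_instances, high=0.8, low=0.3,
                   scale_up_by=2, scale_down_by=1, min_instances=1):
    """Return the new instance count after one agreement round."""
    votes_high = sum(u > high for u in utilizations)
    votes_low = sum(u < low for u in utilizations)
    if votes_high > len(utilizations) / 2:      # majority over threshold
        return n_instances + scale_up_by
    if votes_low == len(utilizations):          # all agree the fleet is idle
        return max(min_instances, n_instances - scale_down_by)
    return n_instances

print(autoscale_step([0.9, 0.85, 0.7], 4))   # majority hot -> 6
print(autoscale_step([0.1, 0.2, 0.15], 4))   # all idle -> 3
```

In the examiner's combination, a rule of this shape would drive the inserting, updating, and deleting of pre-initialization environments that claims 4-8 recite.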

Prosecution Timeline

Jul 24, 2023: Application Filed
Jan 27, 2026: Non-Final Rejection (§103)
Apr 02, 2026: Applicant Interview (Telephonic)
Apr 03, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554553: DYNAMIC SCALING FOR WORKLOAD EXECUTION (granted Feb 17, 2026; 2y 5m to grant)
Patent 12541404: ADMISSION CONTROL BASED ON UNIVERSAL REFERENCES FOR HARDWARE AND/OR SOFTWARE CONFIGURATIONS (granted Feb 03, 2026; 2y 5m to grant)
Patent 12511126: DATA PROCESSING SYSTEM, DATA PROCESSING METHOD, AND DATA PROCESSING PROGRAM (granted Dec 30, 2025; 2y 5m to grant)
Patent 12474952: TRAFFIC MANAGEMENT ON AN INTERNAL FABRIC OF A STORAGE SYSTEM (granted Nov 18, 2025; 2y 5m to grant)
Patent 12436790: SCALABLE ASYNCHRONOUS COMMUNICATION FOR ENCRYPTED VIRTUAL MACHINES (granted Oct 07, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74% (91% with interview, a +17.5% lift)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 23 resolved cases by this examiner. Grant probability derived from career allow rate.
