Prosecution Insights
Last updated: April 19, 2026
Application No. 18/350,710

NODE DETERMINATION METHOD FOR DISTRIBUTED TASK AND COMMUNICATION DEVICE

Status: Non-Final Office Action (§102)
Filed: Jul 11, 2023
Examiner: CHU JOY, JORGE A
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Guangdong OPPO Mobile Telecommunications Corp., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (above average; 314 granted of 408 resolved; +22.0% vs Tech Center average)
Interview Lift: strong, +37.3% across resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 41 applications currently pending
Career History: 449 total applications across all art units
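The headline figures above are simple ratios. A minimal sketch of the arithmetic, under one assumption the page does not state: that the interview lift is a percentage-point gap between the allow rate with an interview (99%) and without one.

```python
# Headline figures from the examiner's career data shown above.
granted, resolved = 314, 408

# Career allow rate: share of resolved applications that granted.
allow_rate = granted / resolved            # ~0.770, displayed as "77%"

# Interview lift, read here as a percentage-point difference
# (assumption: the page does not define the metric).
with_interview = 0.99
lift = 0.373
without_interview = with_interview - lift  # ~0.617

print(f"career allow rate: {allow_rate:.1%}")
print(f"without interview: {without_interview:.1%}")
```

Under that reading, the blended 77% career rate sits between the ~62% no-interview and 99% with-interview rates, which is at least internally consistent with the numbers displayed.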

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 3.2% (-36.8% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)
"TC avg" is the Tech Center average estimate. Based on career data from 408 resolved cases.
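The "vs TC avg" deltas are just the examiner's per-statute rate minus the Tech Center average, so the implied TC baseline can be recovered from the figures shown (a quick consistency check on the displayed numbers, not additional data):

```python
# Per-statute figures as shown above: examiner's rate and its delta
# vs the Tech Center (TC) average, in percent.
stats = {
    "§101": (11.0, -29.0),
    "§103": (55.3, +15.3),
    "§102": (3.2, -36.8),
    "§112": (19.6, -20.4),
}

# delta = examiner_rate - tc_avg, so the TC baseline is recoverable:
implied_tc_avg = {s: rate - delta for s, (rate, delta) in stats.items()}

for statute, tc in implied_tc_avg.items():
    print(f"{statute}: implied TC average = {tc:.1f}%")
```

All four statutes imply the same ~40.0% baseline, which suggests the chart compares against a single TC-wide estimate rather than per-statute averages.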

Office Action

§102
DETAILED ACTION

Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 07/11/2023 and 05/03/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Prakash (US 2019/0220703 A1). Prakash was cited in an IDS.

Regarding claim 1, Prakash teaches the invention as claimed including a node determination method for a distributed task ([0020] methods, apparatuses and computer-readable medium (CRM) that are related to distributed iterative computing (such as distributed machine learning (ML)) in distributed heterogeneous computing environments, where computational resources of multiple edge compute nodes are utilized (e.g., for collaborative learning for an underlying ML model).
For the disclosed embodiments, distributed heterogeneous computing environments are computing environments that may include compute (processing) and storage resources available at multiple edge compute nodes, with varying capabilities and operational constraints (including communication constraints).; [0044] Different load balancing policies or configurations may be used by the MEC system 200 to select offloading targets and/or partition mechanisms based on the operational parameters. The policies/configurations may emphasize or prioritize different operational parameters and/or for different ML training applications.), performed in a sub-node of a mobile communication system ([0001] MEC (“Multi-access Edge Computing” or “Mobile Edge Computing”); [0087]; [0089] at operation 215, the master node 2112 provides computational tasks (compute partial gradients) to the respective edge compute nodes 2101 for calculating output data, such as partial gradients when the underlying ML algorithm is a GD algorithm. At operation 218, each edge compute node 2101 computes a partial gradient, and at operation 221, the edge compute nodes 2101 individually provide their respective partial gradients to the master node 2112 once they complete their local calculations.), comprising: transmitting, by the sub-node, available resource information to a master node of the mobile communication system (Fig. 2, edge computing node 2101, Master node 2112; [0088] Procedure 200 begins at operation 203 where edge compute nodes 2101 provide operational parameters to the master node 2112, which includes indications of compute node capabilities and operational constraints as discussed previously.
The edge compute nodes 2101 may identify their operational parameters using suitable APIs and/or application binary interfaces (ABIs), middleware, drivers, configuration files, trusted application(s), RF measurement mechanisms, and/or other like mechanisms to obtain or identify their respective operational parameters), the available resource information being configured to determine a target sub-node participating in the distributed task ([0044] In some embodiments, a selection of edge compute nodes 101, 201 may be compiled into a shortlist of target nodes based on a first set of operational parameters, and a subset of the target nodes may be selected from the shortlist based on a second set of operational parameters. For example, a shortlist of candidate edge compute nodes 101, 201 having a threshold link quality measurement could be compiled, and a set of the candidate edge compute nodes 101, 201 having a best computational performance among the candidates may be selected from the shortlist as the optimum offloading candidate edge compute nodes 101, 201. In some embodiments, a suitable weighting algorithm may be used to emphasize some operational parameters over other operational parameters. Other weighting, ranking, prioritization, and selection mechanisms or methods may be used in various embodiments.; [0089] At operation 209, the master node 2112 determines load partitions based on the operational parameters and a load balancing policy to ensure the same epoch time or nearly the same epoch times for individual edge compute nodes 2101 to accomplish their individual partial gradient calculation. At operation 212, the master node 2112 provides the partitioned training datasets to respective edge compute nodes 2101, and at operation 215, the master node 2112 provides computational tasks (compute partial gradients) to the respective edge compute nodes 2101 for calculating output data, such as partial gradients when the underlying ML algorithm is a GD algorithm.). 
Regarding claim 2, Prakash teaches wherein the available resource information comprises at least one of: computing capability information, storage capability information, transmission capability information, and energy capability information ([0046] The operational parameters of the edge compute nodes 101, 201 includes compute node capabilities and operational constraints or contexts. The compute node capabilities may include, for example, configuration information (e.g., a hardware platform make and model, hardware component types and arrangement within the hardware platform, associated peripheral and/or attached devices/systems, processor architecture, currently running operating systems and/or applications and/or their requirements, subscription data (e.g., data plan and permissions for network access), security levels or permissions (e.g., possible authentication and/or authorization required to access the edge compute node 101, 201), etc.); computational capacity (e.g., a total processor speed of one or more processors, a total number of VMs capable of being operated by the edge compute node 101, 201, a memory or storage size, an average computation time per workload, a reuse degree of computational resources, etc.); current or predicted computational load and/or computational resources (e.g., processor utilization or occupied processor resources, memory or storage utilization, etc.); current or predicted unoccupied computational resources (e.g., available or unused memory and/or processor resources, available VMs, etc.); network capabilities (e.g., link adaptation capabilities, configured and/or maximum transmit power, achievable data rate per channel usage, antenna configurations, supported radio technologies or functionalities of a device (e.g., whether a UE 101 supports Bluetooth/BLE; whether an (R)AN node 111 supports LTE-WLAN aggregation (LWA) and/or LTE/WLAN Radio Level Integration with IPsec Tunnel (LWIP), etc.), subscription information of particular 
UEs 101, etc.); energy budget (e.g., battery power budget); and/or other like capabilities.).

Regarding claim 3, Prakash teaches wherein the computing capability information comprises at least one of: floating-point computing capability per unit of time, the number of graphics processing units (GPUs), a cache capacity of the GPUs, the number of neural network processing units (NPUs), a cache capacity of the NPUs, and the number of central processing units (CPUs) ([0046] The operational parameters of the edge compute nodes 101, 201 includes compute node capabilities and operational constraints or contexts. The compute node capabilities may include, for example, configuration information (e.g., a hardware platform make and model, hardware component types and arrangement within the hardware platform, associated peripheral and/or attached devices/systems, processor architecture, currently running operating systems and/or applications and/or their requirements, subscription data (e.g., data plan and permissions for network access), security levels or permissions (e.g., possible authentication and/or authorization required to access the edge compute node 101, 201), etc.); computational capacity (e.g., a total processor speed of one or more processors, a total number of VMs capable of being operated by the edge compute node 101, 201, a memory or storage size, an average computation time per workload, a reuse degree of computational resources, etc.); current or predicted computational load and/or computational resources (e.g., processor utilization or occupied processor resources, memory or storage utilization, etc.); current or predicted unoccupied computational resources (e.g., available or unused memory and/or processor resources, available VMs, etc.); network capabilities (e.g., link adaptation capabilities, configured and/or maximum transmit power, achievable data rate per channel usage, antenna configurations, supported radio technologies or functionalities of a
device (e.g., whether a UE 101 supports Bluetooth/BLE; whether an (R)AN node 111 supports LTE-WLAN aggregation (LWA) and/or LTE/WLAN Radio Level Integration with IPsec Tunnel (LWIP), etc.), subscription information of particular UEs 101, etc.); energy budget (e.g., battery power budget); and/or other like capabilities.; [0184] The processor(s) of processor circuitry 1002 may include, for example, one or more processor cores (CPUs), one or more application processors, one or more graphics processing units (GPUs)).

Regarding claim 4, Prakash teaches wherein the storage capability information comprises at least one of: an available memory capacity, an available cache capacity, and an available storage capacity ([0046] The operational parameters of the edge compute nodes 101, 201 includes compute node capabilities and operational constraints or contexts. The compute node capabilities may include, for example, configuration information (e.g., a hardware platform make and model, hardware component types and arrangement within the hardware platform, associated peripheral and/or attached devices/systems, processor architecture, currently running operating systems and/or applications and/or their requirements, subscription data (e.g., data plan and permissions for network access), security levels or permissions (e.g., possible authentication and/or authorization required to access the edge compute node 101, 201), etc.); computational capacity (e.g., a total processor speed of one or more processors, a total number of VMs capable of being operated by the edge compute node 101, 201, a memory or storage size, an average computation time per workload, a reuse degree of computational resources, etc.); current or predicted computational load and/or computational resources (e.g., processor utilization or occupied processor resources, memory or storage utilization, etc.); current or predicted unoccupied computational resources (e.g., available or unused memory and/or processor
resources, available VMs, etc.); network capabilities (e.g., link adaptation capabilities, configured and/or maximum transmit power, achievable data rate per channel usage, antenna configurations, supported radio technologies or functionalities of a device (e.g., whether a UE 101 supports Bluetooth/BLE; whether an (R)AN node 111 supports LTE-WLAN aggregation (LWA) and/or LTE/WLAN Radio Level Integration with IPsec Tunnel (LWIP), etc.), subscription information of particular UEs 101, etc.); energy budget (e.g., battery power budget); and/or other like capabilities.).

Regarding claim 5, Prakash teaches wherein the transmission capability information comprises at least one of: a transmission rate, a transmission delay, a communication signal strength, channel quality state information, a transmission bit error rate, a transmission information block error rate, and spectrum efficiency information ([0046] The operational parameters of the edge compute nodes 101, 201 includes compute node capabilities and operational constraints or contexts.
The compute node capabilities may include, for example, configuration information (e.g., a hardware platform make and model, hardware component types and arrangement within the hardware platform, associated peripheral and/or attached devices/systems, processor architecture, currently running operating systems and/or applications and/or their requirements, subscription data (e.g., data plan and permissions for network access), security levels or permissions (e.g., possible authentication and/or authorization required to access the edge compute node 101, 201), etc.); computational capacity (e.g., a total processor speed of one or more processors, a total number of VMs capable of being operated by the edge compute node 101, 201, a memory or storage size, an average computation time per workload, a reuse degree of computational resources, etc.); current or predicted computational load and/or computational resources (e.g., processor utilization or occupied processor resources, memory or storage utilization, etc.); current or predicted unoccupied computational resources (e.g., available or unused memory and/or processor resources, available VMs, etc.); network capabilities (e.g., link adaptation capabilities, configured and/or maximum transmit power, achievable data rate per channel usage, antenna configurations, supported radio technologies or functionalities of a device (e.g., whether a UE 101 supports Bluetooth/BLE; whether an (R)AN node 111 supports LTE-WLAN aggregation (LWA) and/or LTE/WLAN Radio Level Integration with IPsec Tunnel (LWIP), etc.), subscription information of particular UEs 101, etc.); energy budget (e.g., battery power budget); and/or other like capabilities.). 
Regarding claim 6, Prakash teaches wherein the energy capability information comprises at least one of: residual power, available power for the distributed task, and a predicted value of endurance ([0046] The operational parameters of the edge compute nodes 101, 201 includes compute node capabilities and operational constraints or contexts. The compute node capabilities may include, for example, configuration information (e.g., a hardware platform make and model, hardware component types and arrangement within the hardware platform, associated peripheral and/or attached devices/systems, processor architecture, currently running operating systems and/or applications and/or their requirements, subscription data (e.g., data plan and permissions for network access), security levels or permissions (e.g., possible authentication and/or authorization required to access the edge compute node 101, 201), etc.); computational capacity (e.g., a total processor speed of one or more processors, a total number of VMs capable of being operated by the edge compute node 101, 201, a memory or storage size, an average computation time per workload, a reuse degree of computational resources, etc.); current or predicted computational load and/or computational resources (e.g., processor utilization or occupied processor resources, memory or storage utilization, etc.); current or predicted unoccupied computational resources (e.g., available or unused memory and/or processor resources, available VMs, etc.); network capabilities (e.g., link adaptation capabilities, configured and/or maximum transmit power, achievable data rate per channel usage, antenna configurations, supported radio technologies or functionalities of a device (e.g., whether a UE 101 supports Bluetooth/BLE; whether an (R)AN node 111 supports LTE-WLAN aggregation (LWA) and/or LTE/WLAN Radio Level Integration with IPsec Tunnel (LWIP), etc.), subscription information of particular UEs 101, etc.); energy budget (e.g., battery 
power budget); and/or other like capabilities.).

Regarding claim 7, Prakash teaches wherein the transmitting, by the sub-nodes, available resource information to a master node, comprises: determining, by the sub-node, a capability level corresponding to the available resource information according to corresponding relationship information, the corresponding relationship information comprising a corresponding relationship between different available resource information and different capability levels ([0046] The operational parameters of the edge compute nodes 101, 201 includes compute node capabilities and operational constraints or contexts. The compute node capabilities may include, for example, configuration information (e.g., a hardware platform make and model, hardware component types and arrangement within the hardware platform, associated peripheral and/or attached devices/systems, processor architecture, currently running operating systems and/or applications and/or their requirements, subscription data (e.g., data plan and permissions for network access), security levels or permissions (e.g., possible authentication and/or authorization required to access the edge compute node 101, 201), etc.); computational capacity (e.g., a total processor speed of one or more processors, a total number of VMs capable of being operated by the edge compute node 101, 201, a memory or storage size, an average computation time per workload, a reuse degree of computational resources, etc.); current or predicted computational load and/or computational resources (e.g., processor utilization or occupied processor resources, memory or storage utilization, etc.); current or predicted unoccupied computational resources (e.g., available or unused memory and/or processor resources, available VMs, etc.); network capabilities (e.g., link adaptation capabilities, configured and/or maximum transmit power, achievable data rate per channel usage, antenna configurations, supported radio
technologies or functionalities of a device (e.g., whether a UE 101 supports Bluetooth/BLE; whether an (R)AN node 111 supports LTE-WLAN aggregation (LWA) and/or LTE/WLAN Radio Level Integration with IPsec Tunnel (LWIP), etc.), subscription information of particular UEs 101, etc.); energy budget (e.g., battery power budget); and/or other like capabilities.; [0088] Procedure 200 begins at operation 203 where edge compute nodes 2101 provide operational parameters to the master node 2112, which includes indications of compute node capabilities and operational constraints as discussed previously. The edge compute nodes 2101 may identify their operational parameters using suitable APIs and/or application binary interfaces (ABIs), middleware, drivers, configuration files, trusted application(s), RF measurement mechanisms, and/or other like mechanisms to obtain or identify their respective operational parameters.); transmitting, by the sub-node, the capability level corresponding to the available resource information to the master node ([0088] Procedure 200 begins at operation 203 where edge compute nodes 2101 provide operational parameters to the master node 2112, which includes indications of compute node capabilities and operational constraints as discussed previously.). 
Regarding claim 8, Prakash teaches wherein the determining, by the sub-node, a capability level corresponding to the available resource information according to corresponding relationship information, comprises: determining a computing capability level corresponding to the computing capability information according to first corresponding relationship information, in response to the available resource information comprising the computing capability information, the first corresponding relationship information comprising a corresponding relationship between different computing capability information and different computing capability levels ([0046] The operational parameters of the edge compute nodes 101, 201 includes compute node capabilities and operational constraints or contexts. The compute node capabilities may include, for example, configuration information (e.g., a hardware platform make and model, hardware component types and arrangement within the hardware platform, associated peripheral and/or attached devices/systems, processor architecture, currently running operating systems and/or applications and/or their requirements, subscription data (e.g., data plan and permissions for network access), security levels or permissions (e.g., possible authentication and/or authorization required to access the edge compute node 101, 201), etc.); computational capacity (e.g., a total processor speed of one or more processors, a total number of VMs capable of being operated by the edge compute node 101, 201, a memory or storage size, an average computation time per workload, a reuse degree of computational resources, etc.); current or predicted computational load and/or computational resources (e.g., processor utilization or occupied processor resources, memory or storage utilization, etc.); current or predicted unoccupied computational resources (e.g., available or unused memory and/or processor resources, available VMs, etc.).); determining a storage capability level 
corresponding to the storage capability information according to second corresponding relationship information, in response to the available resource information comprising the storage capability information, the second corresponding relationship information comprising a corresponding relationship between different storage capability information and different storage capability levels ([0046] a memory or storage size, an average computation time per workload, a reuse degree of computational resources, etc.); current or predicted computational load and/or computational resources (e.g., processor utilization or occupied processor resources, memory or storage utilization, etc.); current or predicted unoccupied computational resources (e.g., available or unused memory and/or processor resources, available VMs, etc.).; determining a transmission capability level corresponding to the transmission capability information according to third corresponding relationship information, in response to the available resource information comprising the transmission capability information, the third corresponding relationship information comprising a corresponding relationship between different transmission capability information and different transmission capability levels (network capabilities (e.g., link adaptation capabilities, configured and/or maximum transmit power, achievable data rate per channel usage, antenna configurations, supported radio technologies or functionalities of a device (e.g., whether a UE 101 supports Bluetooth/BLE; whether an (R)AN node 111 supports LTE-WLAN aggregation (LWA) and/or LTE/WLAN Radio Level Integration with IPsec Tunnel (LWIP), etc.), subscription information of particular UEs 101, etc.);); and determining an energy capability level corresponding to the energy capability information according to fourth corresponding relationship information, in response to the available resource information comprising the energy capability information, the 
fourth corresponding relationship information comprising a corresponding relationship between different energy capability information and different energy capability levels ([0046] energy budget (e.g., battery power budget); and/or other like capabilities).

Regarding claim 9, Prakash teaches wherein the available resource information is carried in at least one of: a radio resource control (RRC) message ([0173] radio resource control (RRC)), uplink control information (UCI), information carried in a physical uplink control channel (PUCCH), and information carried in a physical uplink shared channel (PUSCH); or system information, a system information block (SIB), a RRC message, a medium access control-control element (MAC-CE), downlink control information (DCI), information carried in a physical downlink control channel (PDCCH), and information carried in a physical downlink shared channel (PDSCH).

Regarding claim 10, Prakash teaches further comprising: receiving, by the sub-node, indication information transmitted by the master node, the indication information being configured to indicate that the sub-node is the target sub-node participating in the distributed task (Fig. 2, Master node 2112; [0089] At operation 212, the master node 2112 provides the partitioned training datasets to respective edge compute nodes 2101, and at operation 215, the master node 2112 provides computational tasks (compute partial gradients) to the respective edge compute nodes 2101 for calculating output data, such as partial gradients when the underlying ML algorithm is a GD algorithm.).

Regarding claim 11, it is a method type claim having similar limitations as claim 1 above but recited from the perspective of the master node. The citations provided for claim 1 cover both sides of the process. Therefore, it is rejected under the same rationale above.

Regarding claim 12, it is a method type claim having similar limitations as claim 2 above.
Therefore, it is rejected under the same rationale above.

Regarding claim 13, it is a method type claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale above.

Regarding claim 14, it is a method type claim having similar limitations as claim 4 above. Therefore, it is rejected under the same rationale above.

Regarding claim 15, it is a method type claim having similar limitations as claim 5 above. Therefore, it is rejected under the same rationale above.

Regarding claim 16, it is a method type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale above.

Regarding claim 17, it is a method type claim having similar limitations as claim 7 above. Therefore, it is rejected under the same rationale above.

Regarding claim 18, it is a method type claim having similar limitations as claim 8 above. Therefore, it is rejected under the same rationale above.

Regarding claim 19, it is a method type claim having similar limitations as claim 9 above. Therefore, it is rejected under the same rationale above.

Regarding claim 20, it is a system type claim having similar limitations as claim 11 above. Therefore, it is rejected under the same rationale above. Further, the additional limitations of a processor, a transceiver connected to the processor, and a memory, configured to load and execute the executable instructions to implement a node determination method are taught by Prakash in at least [0190] “The computational logic 1083 may be stored or loaded into memory circuitry 1004 as instructions 1082, or data to create the instructions 1082, for execution by the processor circuitry 1002 to provide the functions described herein.” And [0227] “Each of the IoT devices 1204 may include appropriate communications circuitry (e.g., transceiver(s), modem, antenna elements, etc.)
to communicate (e.g., transmit and receive) captured and stored/recorded data.”

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
(US 2012/0284410 A1) CLOUD WORKLOAD MANAGEMENT WITH AUTOMATED WORKLOAD BIDDING. See at least [0019].
(US 2006/0167984 A1) Estimating Future Grid Job Costs By Classifying Grid Jobs And Storing Results Of Processing Grid Job Microcosms. See at least Claim 3.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CHU JOY-DAVILA whose telephone number is (571)270-0692. The examiner can normally be reached Monday-Friday, 6:00am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee J Li, can be reached at (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JORGE A CHU JOY-DAVILA/
Primary Examiner, Art Unit 2195
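The two-stage selection mechanism the examiner cites from Prakash [0044] (shortlist edge nodes meeting a link-quality threshold, then keep the best computational performers from the shortlist) can be sketched as follows. Names, fields, and threshold values here are illustrative assumptions, not taken from the reference:

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    # Illustrative operational parameters; Prakash [0044] names link
    # quality and computational performance among the criteria.
    name: str
    link_quality: float   # e.g., a normalized signal-quality measurement
    compute_perf: float   # e.g., a benchmark score

def select_target_nodes(nodes, link_threshold, k):
    """Stage 1: shortlist nodes meeting the link-quality threshold.
    Stage 2: keep the k best computational performers."""
    shortlist = [n for n in nodes if n.link_quality >= link_threshold]
    shortlist.sort(key=lambda n: n.compute_perf, reverse=True)
    return shortlist[:k]

nodes = [
    EdgeNode("phone-a", link_quality=0.9, compute_perf=3.1),
    EdgeNode("phone-b", link_quality=0.4, compute_perf=9.9),  # fast, poor link
    EdgeNode("laptop",  link_quality=0.8, compute_perf=7.5),
]
targets = select_target_nodes(nodes, link_threshold=0.7, k=2)
print([n.name for n in targets])  # ['laptop', 'phone-a']
```

Note how the fastest node ("phone-b") is excluded at stage 1: this is the behavior the rejection maps to the claimed "available resource information being configured to determine a target sub-node."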

Prosecution Timeline

Jul 11, 2023
Application Filed
Dec 03, 2025
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602244: OFFLOADING PROCESSING TASKS TO DECOUPLED ACCELERATORS FOR INCREASING PERFORMANCE IN A SYSTEM ON A CHIP (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596565: USER ASSIGNED NETWORK INTERFACE QUEUES (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591821: DYNAMIC ADJUSTMENT OF WELL PLAN SCHEDULES ON DIFFERENT HIERARCHICAL LEVELS BASED ON SUBSYSTEMS ACHIEVING A DESIRED STATE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585490: MIGRATING VIRTUAL MACHINES WHILE PERFORMING MIDDLEBOX SERVICE OPERATIONS AT A PNIC (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579065: LIGHTWEIGHT KERNEL DRIVER FOR VIRTUALIZED STORAGE (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77% (99% with interview, a +37.3% lift)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 408 resolved cases by this examiner. Grant probability derived from career allow rate.
