Prosecution Insights
Last updated: April 19, 2026
Application No. 18/196,025

On-Demand Intercloud Data Storage Transfer

Final Rejection §103
Filed
May 11, 2023
Examiner
KAMRAN, MEHRAN
Art Unit
2196
Tech Center
2100 — Computer Architecture & Software
Assignee
Google LLC
OA Round
2 (Final)
90%
Grant Probability
Favorable
3-4
OA Rounds
2y 10m
To Grant
99%
With Interview

Examiner Intelligence

Grants 90% — above average
90%
Career Allow Rate
434 granted / 484 resolved
+34.7% vs TC avg
+14.3%
Interview Lift
Moderate (+14%) lift, comparing resolved cases with vs. without an interview
Typical timeline
2y 10m
Avg Prosecution
26 currently pending
Career history
510
Total Applications
across all art units
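The headline figures above can be reproduced from the raw counts the page reports (434 granted of 484 resolved; 510 total filings). This is a small sketch of that arithmetic; the rounding convention is our assumption, not something the tool documents.

```python
# Recompute the dashboard's examiner metrics from the raw counts shown above.
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

career_rate = allow_rate(434, 484)   # ~89.7%, displayed as 90%
pending = 510 - 484                  # applications not yet resolved

print(f"Career allow rate: {career_rate:.1f}%")
print(f"Currently pending: {pending}")
```

Note that the displayed "90%" is the rounded career rate, and the "26 currently pending" is simply total applications minus resolved cases.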

Statute-Specific Performance

§101
8.8%
-31.2% vs TC avg
§103
58.2%
+18.2% vs TC avg
§102
9.9%
-30.1% vs TC avg
§112
13.2%
-26.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 484 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is in response to the amendment filed 01/05/2026. Claims 1-20 are pending in this application. Claims 1 and 13 are independent claims. Claims 1, 6, 13, 16, and 17 are currently amended. This Office Action is made final.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 9, 11-14, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Anglin (US 2018/0101588) in view of Yap (US 2016/0321275 A1).

As per claim 1, Anglin teaches a data transfer system comprising: one or more processors; (Anglin [0026] The source replication manager 6a, target replication manager 6b, and deduplication manager 26 may comprise software programs in a memory executed by a processor. In an alternative embodiment, some portion or all of the programs 6a, 6b, and 26 may be implemented in a hardware component, such as a dedicated integrated circuit, e.g., Application Specific Integrated Circuit (ASIC), expansion card, etc.)
and memory having programmed thereon instructions for causing the one or more processors to: detect a transfer request transmitted from a location remote from the data transfer system, wherein the transfer request specifies one or more object-level data objects to be transferred, a source location of the one or more data objects, and a destination location to which the one or more data objects are to be transferred (Anglin [0035] FIG. 6 illustrates an embodiment of operations performed by the source 6a and target 6b replication managers to replicate objects at the source server 4a to the target server 4b. Control begins with the source replication manager 6a receiving (at block 100) a replication request to replicate objects based on one or more criteria, such as the client node owning the object, filespace within client node including the object, and a data type of the object. In response to the request, the source replication manager 6a validates (at block 102) the target server 4b configuration to determine if the target server 4b supports replication. If (at block 104) the target server 4b was not validated, then the replication operation fails (at block 106). Otherwise, if the target server 4b is replication compatible, then the servers 4a, 4b swap (at block 108) unique identifiers if this is the first time that replication has occurred between the servers 4a, 4b. Servers 4a, 4b may maintain the server unique identifiers of a server available for replication in the replication databases 16a, 16b. [0048] In certain embodiments, the replication target server 4b may provide a hot standby at a remote location with respect to the source server 4a. If the source server 4a fails, client operations such as backup and restore can be redirected to the target server 4b, which is already operational for replication. See also Fig. 3).

Anglin does not teach upon detection of the transfer request: create a transfer event for the one or more data objects and notify a data transfer service included in the data transfer system of the transfer event, wherein the data transfer service is configured to control one or more worker nodes for migration of data, whereby execution of the transfer event causes the one or more worker nodes to move the one or more data objects specified in the transfer event from the source location to the destination location.

However, Yap teaches upon detection of the transfer request (Yap Fig. 2, Block 228 (migration call with migration package)): create a transfer event for the one or more data objects (Yap Fig. 2, Block 226 (work item queue) and [0024] FIG. 2 is a block diagram with portions of content migration system 104 illustrated in more detail. FIG. 2, for instance, shows that migration queue system 166 includes a work item queue generator 220 that generates work items 222-224 in a work item queue 226, based upon migration calls that are sent from on-premise system 106 (or other systems calling for migration) that include migration packages 140. The migration call is indicated by block 228 in FIG. 2.); and notify a data transfer service included in the data transfer system of the transfer event, wherein the data transfer service is configured to control one or more worker nodes for migration of data, whereby execution of the transfer event causes the one or more worker nodes to move the one or more data objects specified in the transfer event from the source location to the destination location (Yap [0030] Work item queue generator 220 (FIG. 2) then queues a work item or job 222-224 corresponding to the requested migration. This is indicated by block 288. Bot/thread control component 230 then sets up the content destination sites 108 for the content to be migrated. In doing so, it can provide the site identifier 182, web identifier 190, and provision a destination store 196. Setting up the destination site is indicated by block 290. [0031] Bot/thread component 230 then selects a job 222-224 from the work item queue 226. This is indicated by block 292. [0032] Manifest accessing component 232 reads the manifest to identify a first (or next) content object to be imported. Import component 234 then imports that content object to the destination site 108. Component 230 then places a log in the corresponding manifest and updates the log as the import completes. [0025] FIG. 2 also shows that, in one example, resource scale system 170 includes a bot/thread control component 230 that, itself, includes a manifest accessing component 232, an object import component 234 and overwrite/ignore controller 236. Manifest accessing component 232 accesses the manifest in the migration package corresponding to a selected work item 222. It identifies the next object to be imported, and import component 234 begins the import process. Bot/thread control component 230 also controls a number of bots 238-240 [worker nodes] that are performing import processing. Each bot can be performing on multiple different threads, and component 230 controls the thread number and the number of bots 238-240 to scale (e.g., increase or decrease) based upon the workload (e.g., the number of work items 222-224 in work item queue 226). As will be described in greater detail below, overwrite/ignore controller 236 is used during the resumability processing.).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Yap with the system of Anglin to create and dispatch a transfer event. One of ordinary skill in the art would have been motivated to incorporate Yap into the system of Anglin for the purpose of migrating data in a cloud-based system (Yap, paragraph [0002]).
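For orientation, the claim 1 flow at issue (detect a transfer request, create a transfer event, queue it, and have a worker node move the named objects from source to destination) can be sketched as a toy in-process pipeline. Everything here is illustrative: the names (`TransferEvent`, `detect_request`, `run_worker`) and the dict-backed "object store" are our inventions, not drawn from Anglin or Yap.

```python
# Toy sketch of the claimed detect -> event -> queue -> worker flow.
from dataclasses import dataclass
from queue import Queue

@dataclass
class TransferEvent:
    objects: list       # object-level data objects to move
    source: str         # source location (e.g., a cloud platform)
    destination: str    # destination location

event_queue: Queue = Queue()

def detect_request(request: dict) -> None:
    """On detecting a transfer request, create a transfer event and
    notify the transfer service by enqueuing it."""
    event_queue.put(TransferEvent(request["objects"],
                                  request["source"],
                                  request["destination"]))

def run_worker(store: dict) -> list:
    """One worker node: execute queued events by moving each named
    object from its source key to its destination key."""
    moved = []
    while not event_queue.empty():
        ev = event_queue.get()
        for obj in ev.objects:
            store[(ev.destination, obj)] = store.pop((ev.source, obj))
            moved.append(obj)
    return moved
```

A request such as `detect_request({"objects": ["obj1"], "source": "cloud-a", "destination": "cloud-b"})` followed by `run_worker(store)` relocates `obj1` in the toy store, mirroring the event-driven handoff the rejection attributes to the Anglin/Yap combination.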
As per claim 2, Anglin teaches wherein each of the source location and the destination location are located in different cloud platforms (Anglin [0048] In certain embodiments, the replication target server 4b may provide a hot standby at a remote location with respect to the source server 4a. If the source server 4a fails, client operations such as backup and restore can be redirected to the target server 4b, which is already operational for replication. [0049] In further embodiments, multiple source servers 4a (e.g., at remote offices) can be replicated to a single target server 4b (e.g., at a central data center). [0079] Further, FIG. 10 shows a single cloud. However, certain cloud embodiments may provide a deployment model including a separate “Backup” or “Data Protection” cloud, in addition to the cloud having the customer/production data. Providing a separate and distinct additional cloud as the data protection cloud in order to separate whatever primary cloud model (provide, community, hybrid, etc) from the data protection cloud prevents a single point of failure and provides a greater degree of protection of the customer data in the separate backup cloud.).

As per claim 3, Yap teaches wherein the memory further comprises a queue configured to store the transfer request, and wherein each transfer request stored in the queue corresponds to a separate transfer event (Yap Fig. 2, Blocks 226 and 222).

As per claim 9, Yap teaches comprising the data transfer service, wherein the data transfer service is configured to: receive the transfer event from the memory (Yap Blocks 282 and 288); in response to the notification received from the one or more processors, select at least one of the one or more worker nodes for execution of the transfer event (Yap [0025] FIG. 2 also shows that, in one example, resource scale system 170 includes a bot/thread control component 230 that, itself, includes a manifest accessing component 232, an object import component 234 and overwrite/ignore controller 236. Manifest accessing component 232 accesses the manifest in the migration package corresponding to a selected work item 222. It identifies the next object to be imported, and import component 234 begins the import process. Bot/thread control component 230 also controls a number of bots 238-240 that are performing import processing. Each bot can be performing on multiple different threads, and component 230 controls the thread number and the number of bots 238-240 to scale (e.g., increase or decrease) based upon the workload (e.g., the number of work items 222-224 in work item queue 226). As will be described in greater detail below, overwrite/ignore controller 236 is used during the resumability processing. [0031] Bot/thread component 230 then selects a job 222-224 from the work item queue 226. This is indicated by block 292. [0103] a bot thread controller that scales the number of virtual machines based on a number of work items in the work item queue.); and forward the transfer event to the selected worker nodes for execution (Yap [0030] Bot/thread control component 230 then sets up the content destination sites 108 for the content to be migrated. In doing so, it can provide the site identifier 182, web identifier 190, and provision a destination store 196. Setting up the destination site is indicated by block 290).

As per claim 11, Yap teaches further comprising the one or more worker nodes (Yap Fig. 2, Blocks 226, 222 and 224 and [0024] FIG. 2 is a block diagram with portions of content migration system 104 illustrated in more detail. FIG. 2, for instance, shows that migration queue system 166 includes a work item queue generator 220 that generates work items 222-224 in a work item queue 226, based upon migration calls that are sent from on-premise system 106 (or other systems calling for migration) that include migration packages 140. The migration call is indicated by block 228 in FIG. 2.).

As per claim 12, Anglin teaches wherein the transfer request is a request to replicate the one or more data objects in a plurality of destination locations (Anglin [0018] Described embodiments replicate data objects from a source server to a target server in a manner that more optimally utilizes transmission bandwidth by avoiding the transmission of data that is already available in the target server. The source server further sends metadata on objects having data or chunks already available at the target server to cause the target server to add an entry to a replication database for objects already at the target server and to ensure consistency of data and metadata. The described embodiments allow the user to provide replication criteria to allow selection and filtering of objects to replicate at an object level. Further embodiments also employ deduplication to avoid sending over chunks of objects being replicated that are already stored on the target server. [0019] FIG. 1 illustrates an embodiment of a computing environment 2 having a source server 4a and target server 4b including a source replication manager 6a and target replication manager 6b, respectively, to replicate the data for objects at a source storage 8a to a target storage 8b.). Yap teaches wherein the data transfer service is configured to assign the transfer event to a plurality of worker nodes, wherein each worker node is assigned to copy the one or more data objects to a respective one of the plurality of destination locations (Yap [0025] FIG. 2 also shows that, in one example, resource scale system 170 includes a bot/thread control component 230 that, itself, includes a manifest accessing component 232, an object import component 234 and overwrite/ignore controller 236. Manifest accessing component 232 accesses the manifest in the migration package corresponding to a selected work item 222. It identifies the next object to be imported, and import component 234 begins the import process. Bot/thread control component 230 also controls a number of bots 238-240 that are performing import processing. Each bot can be performing on multiple different threads, and component 230 controls the thread number and the number of bots 238-240 to scale (e.g., increase or decrease) based upon the workload (e.g., the number of work items 222-224 in work item queue 226). As will be described in greater detail below, overwrite/ignore controller 236 is used during the resumability processing. [0032] Manifest accessing component 232 reads the manifest to identify a first (or next) content object to be imported. Import component 234 then imports that content object to the destination site 108. Component 230 then places a log in the corresponding manifest and updates the log as the import completes. This is indicated by block 294. As is described below with respect to FIG. 4, this can be done while resumability system 172 performs resumability processing so that, if the job fails, it can be resumed from approximately where it failed. Using resumability processing is indicated by block 296. [0034] Once the objects identified in the manifest have all been imported (or migrated) to their destination sites, bot/thread control component 230 determines whether there are any more jobs in work item queue 226. This is indicated by block 302. If so, processing reverts to block 292 where another job is selected and import begins again for that job.).

As to claim 13, it is rejected based on the same reason as claim 1.
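The scale-by-workload behavior Yap is repeatedly cited for (the bot/thread controller adjusting the number of bots based on how many work items are queued) reduces to a simple sizing rule. The sketch below is ours; the specific rule (one bot per ten queued items, clamped to a range) is an invented placeholder, not anything disclosed in Yap.

```python
# Toy version of workload-based bot scaling: more queued work items
# means more worker bots, within fixed bounds.
def scale_bots(queue_depth: int, items_per_bot: int = 10,
               min_bots: int = 1, max_bots: int = 8) -> int:
    """Return how many worker bots to run for the current queue depth."""
    wanted = -(-queue_depth // items_per_bot)   # ceiling division
    return max(min_bots, min(max_bots, wanted))
```

So an empty queue keeps the minimum one bot running, 25 queued items scale up to 3 bots, and very deep queues saturate at the cap, which is the increase/decrease behavior described in Yap [0025].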
As to claim 14, it is rejected based on the same reason as claim 2. As to claim 18, it is rejected based on the same reason as claim 9. As to claim 20, it is rejected based on the same reason as claim 12.

Claims 4, 5, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Anglin (US 2018/0101588) in view of Yap (US 2016/0321275 A1) in further view of Kalley (US 2023/0359508 A1).

As per claim 4, Anglin and Yap do not teach wherein the one or more processors are configured to detect the transfer request stored in the queue using a serverless listener. However, Kalley teaches this limitation (Kalley [0066] In step S31, the forwarder agent 307 polls a queue (e.g., a queue associated with the SQS service 311) to check for presence of a new message in the queue. In step S33, as no new messages have been posted on the queue, the forwarder agent 307 receives an indication notifying it that there is no new message on the queue. In step S35, a publisher 501 publishes a new message on the queue. In step S37, the forwarder agent 307 polls the queue to determine presence of a new message in the queue. In step S39, the forwarder agent 307 obtains the new message from the queue. In step S41, the forwarder agent proceeds to forward a notification to the listener agent 305. As stated previously with reference to FIG. 5A, the listener agent 305 may perform processing with respect to determining a function that is to be invoked based on the notification, as well as obtaining (from an identity management service of the execution cloud environment 231) a token that is to be used to invoke the function in the customer tenancy of the execution cloud environment. Further, by some embodiments, the listener agent 305 uses a serverless function service to invoke the particular function deployed in the customer tenancy of the execution cloud environment in step S43).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kalley with the system of Anglin and Yap to use a serverless listener. One of ordinary skill in the art would have been motivated to incorporate Kalley into the system of Anglin and Yap for the purpose of implementing a framework that provides execution of serverless functions in a cloud environment based on occurrence of events/notifications from services in an entirely different cloud environment (Kalley, paragraph [0002]).

As per claim 5, Kalley teaches wherein the queue is managed by a message queuing service, and wherein the serverless listener is configured to periodically check the message queuing service for new transfer requests on an order of minutes or faster (Kalley [0066], reproduced above with respect to claim 4).

As to claim 15, it is rejected based on the same reason as claim 5.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Anglin (US 2018/0101588) in view of Yap (US 2016/0321275 A1) in further view of Boshev (US 2022/0300531 A1).

As per claim 7, Anglin and Yap do not teach wherein the memory further comprises an application storage layer memory of the data transfer service configured to store the transfer event. However, Boshev teaches this limitation (Boshev [0013] In some instances, the in-memory data grid stores the metadata in a queue data structure, where data from the data grid is read in a first-in-first-out mode). The examiner believes this is consistent with what is disclosed in the specification ([0011] In some examples, the memory may further include an application storage layer memory of the data transfer service configured to store the transfer event).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Boshev with the system of Anglin and Yap to use a memory to store transfer events. One of ordinary skill in the art would have been motivated to incorporate Boshev into the system of Anglin and Yap for the purpose of implementing data processing in a cloud environment (Boshev, paragraph [0001]).

Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Anglin (US 2018/0101588) in view of Yap (US 2016/0321275 A1) in further view of Boshev (US 2022/0300531 A1) and Kalley (US 2023/0359508 A1).
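The polling pattern in the Kalley passage cited for claims 4, 5, and 15 (an agent repeatedly checks a queue and acts only when a new message appears) can be sketched as a minimal loop. The list-backed "queue service" and the function names here are illustrative assumptions; a real deployment would use a managed queue such as SQS and a serverless trigger, as Kalley describes.

```python
# Minimal sketch of a polling listener over a toy message queue.
import time

def poll_once(queue: list):
    """One poll: return the next message if one is present, else None."""
    return queue.pop(0) if queue else None

def listen(queue: list, handler, polls: int, interval: float = 0.0) -> int:
    """Poll `polls` times at `interval` seconds apart, invoking
    `handler` for each message found; return how many were handled."""
    handled = 0
    for _ in range(polls):
        msg = poll_once(queue)
        if msg is not None:
            handler(msg)
            handled += 1
        time.sleep(interval)
    return handled
```

With `interval` set to a minute, this loop corresponds to the claim 5 limitation of periodically checking the message queuing service "on an order of minutes or faster."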
As per claim 8, Anglin, Yap, and Boshev do not teach wherein the one or more processors are configured to direct the transfer request to the memory using a serverless listener, wherein the transfer request is directed to the memory in response to the transfer request being sent to an address of the serverless listener. However, Kalley teaches this limitation (Kalley [0066], reproduced above with respect to claim 4). The part about directing the event to a memory for storing (i.e., a queue) is taught with respect to claim 7 using the art of Boshev.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kalley with the system of Anglin, Yap, and Boshev to direct the transfer to a memory. One of ordinary skill in the art would have been motivated to incorporate Kalley into the system of Anglin, Yap, and Boshev for the purpose of implementing a framework that provides execution of serverless functions in a cloud environment based on occurrence of events/notifications from services in an entirely different cloud environment (Kalley, paragraph [0002]).

As to claim 17, it is rejected based on the same reasons as claims 7 and 8 (it is a combination of these two claims).

Claims 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Anglin (US 2018/0101588) in view of Yap (US 2016/0321275 A1) in further view of Kramer (US 2017/0046199 A1).

As per claim 10, Anglin and Yap do not teach determine whether the transfer event is an object-level transfer event; in response to the transfer event being an object-level transfer event, forward the transfer event to the selected worker nodes for execution; and in response to the transfer event not being an object-level transfer event, holding the transfer event for batch migration. However, Kramer teaches these limitations (Kramer [0041] As mentioned above, the method (300) includes loading (302) a number of object migration jobs into a network-accessible work queue, the object migration jobs representing tasks that define how the objects migrate. As mentioned above, the network-accessible work queue may be a SQL queue, an optimized queue, or a queue service. With millions of objects and dozens of workers, the network-accessible work queue performance may become a potential bottleneck. SQL based queues are easy to deploy and can scale to multi-million object migration workloads, but table read lock tuning is an issue. Further, object migration jobs may be dequeued in large batches to reduce congestion in the network-accessible work queue). The examiner believes this is consistent with what is disclosed in the specification ([0050] For instance, the transfer event may be treated on a per-event basis in response to the transfer event being an object-level event, whereas larger events that are at scales larger than object-level may be held in the memory for batch migration. Filtering the non-object-level transfer events from the object-level events may ensure that the data transfer service can operate efficiently on small requests on an on-demand basis by avoiding consuming bandwidth over larger events at the same time. Instead, the larger events are held for batch processing).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kramer with the system of Anglin and Yap to batch larger transfers. One of ordinary skill in the art would have been motivated to incorporate Kramer into the system of Anglin and Yap for the purpose of transferring the customer's data to another storage provider (Kramer, paragraph [0012]).

As to claim 19, it is rejected based on the same reason as claim 10.

Allowable Subject Matter

Claims 6 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments filed on 01/05/2026 have been fully considered but they are not persuasive.
Applicant's arguments with respect to claims 1 and 13 have been considered but are moot because they do not apply to the new ground(s) of rejection based on the newly introduced Anglin reference.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAN KAMRAN, whose telephone number is (571) 272-3401. The examiner can normally be reached 9-5. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MEHRAN KAMRAN/
Primary Examiner, Art Unit 2196
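One technical thread worth isolating from the action above is the claim 10 mapping to Kramer: object-level transfer events are forwarded immediately for on-demand execution, while larger events are held back for batch migration. A toy router makes the distinction concrete; the `scope` field and the two sink lists are our illustrative assumptions, not structures from Kramer or the specification.

```python
# Toy router for the claim-10 distinction: on-demand vs. batched events.
def route_event(event: dict, forward_now: list, batch_hold: list) -> str:
    """Forward object-level events for immediate execution; hold
    anything at a larger scale for batch migration."""
    if event.get("scope") == "object":
        forward_now.append(event)
        return "forwarded"
    batch_hold.append(event)
    return "held"
```

Filtering this way keeps small on-demand requests from competing for bandwidth with large migrations, which is the rationale the specification passage quoted in the action gives for the limitation.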

Prosecution Timeline

May 11, 2023
Application Filed
Oct 09, 2025
Non-Final Rejection — §103
Jan 05, 2026
Response Filed
Mar 08, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591444
Hardware Virtual Machine for Controlling Access to Physical Memory Space
2y 5m to grant • Granted Mar 31, 2026
Patent 12585486
SYSTEMS AND METHODS FOR DEPLOYING A CONTAINERIZED NETWORK FUNCTION (CNF) BASED ON INFORMATION REGARDING THE CNF
2y 5m to grant • Granted Mar 24, 2026
Patent 12585497
AMBIENT COOPERATIVE CANCELLATION WITH GREEN THREADS
2y 5m to grant • Granted Mar 24, 2026
Patent 12572394
METHODS, SYSTEMS AND APPARATUS TO DYNAMICALLY FACILITATE BOUNDARYLESS, HIGH AVAILABILITY SYSTEM MANAGEMENT
2y 5m to grant • Granted Mar 10, 2026
Patent 12561158
DEPLOYMENT OF A VIRTUALIZED SERVICE ON A CLOUD INFRASTRUCTURE BASED ON INTEROPERABILITY REQUIREMENTS BETWEEN SERVICE FUNCTIONS
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
90%
Grant Probability
99%
With Interview (+14.3%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 484 resolved cases by this examiner. Grant probability derived from career allow rate.
