Prosecution Insights
Last updated: April 19, 2026
Application No. 18/268,049

METHOD AND SYSTEM FOR DETERMINING A THREE DIMENSIONAL POSITION

Non-Final OA: §101, §102, §112
Filed: Jun 16, 2023
Examiner: MILLER, RONDE LEE
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Augmented Robotics GmbH
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (above average; 16 granted / 22 resolved; +10.7% vs TC avg)
Interview Lift: +37.5% (strong), comparing resolved cases with an interview vs. without
Typical Timeline: 2y 11m average prosecution; 26 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 22 resolved cases

Office Action

Rejections: §101, §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-12 were preliminarily amended in the amendment filed on 06/16/2023. Claims 1-12, all of the claims pending in this application, have been rejected.

Specification

The abstract of the disclosure is objected to because it exceeds the 150-word maximum limit. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Such claim limitations are: "processing means of the smart device are configured to" in claim 7, and "data processing means configured to" in claim 10.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 1 recites the term "preferably" in the limitation "preferably on a virtual plane". According to MPEP 2173.05(d), the term "preferably" is in a category of words that are considered exemplary claim language. The Examiner notes that the intended scope of the claim is unclear because it cannot be determined whether that claim language is needed or not. Claim 5 also recites this same terminology. Claims 2-12 are likewise rejected.

Claim 9 recites the limitation "the computer vision algorithm" in "wherein the computer vision algorithm is one of: YOLO, RetinaNet, SSD, and their derivatives". There is insufficient antecedent basis for this limitation in the claim. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 11 does not fall within at least one of the four categories of patent eligible subject matter because the claim is directed towards a "computer program" that includes "code instructions", which broadly encompasses a computer program per se. Such computer programs, per se, are not, in and of themselves, methods or machines, nor are they physical products of manufacture or compositions of matter. Therefore, such programs do not fall into any of the categories of eligible subject matter defined in 35 U.S.C. § 101 and are not, by themselves, eligible for patent protection. Such programs can be eligible for patent protection if claimed as embodied on or in a computer readable storage device or medium, but only if the claim clearly and unambiguously excludes transitory, propagating signals from the full scope of the claimed subject matter, as such signals are also not eligible under 35 U.S.C. § 101. It is suggested that amending the claim language to define the computer program product as having the code instructions embodied on a "non-transitory computer-readable medium" would satisfy these requirements and would limit the claimed invention to eligible subject matter.

Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Based upon consideration of all of the relevant factors with respect to the claim as a whole, claim 12 is held to claim a computer program product encoded on a computer readable medium that does not preclude signals or carrier waves from serving as said medium, and is therefore rejected as ineligible subject matter. The broadest reasonable interpretation of the claim covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. 101, Aug. 24, 2009, p. 2. Claims directed toward a non-transitory computer readable medium may qualify as a manufacture and make the claim patent-eligible subject matter. MPEP 2106.03(I). Therefore, the Examiner recommends amending the claims to recite a "non-transitory computer-readable storage medium" in order to resolve this issue.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 3-12 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US Publication No. 2019/0147600 A1 to Karasev et al. (hereinafter Karasev).
Claim 1

Regarding claim 1, an independent method claim, Karasev teaches a method for determining a three dimensional, 3D, position of an object (Figure 1, #'s 134, 136, and 138; "At operation 134, the process can include determining a three-dimensional (3-D) bounding box and information (e.g., position, velocity, width, length, etc.) associated with the object based at least in part on the object contact point(s) and the distance. In an example 136, a three-dimensional (3-D) bounding box 138 is illustrated as being associated with the vehicle 108.", Paragraph [0039]).

Karasev further teaches wherein the object is located in a field of view of an optical sensor (Figure 1, #'s 126 and 128); wherein the object is located on or above a plane or preferably on a virtual plane (Figure 1, #'s 108 and 110), where the vehicle (object) being viewed by the sensor is on the ground; the method comprising the steps of:

a) acquiring an image from the optical sensor comprising the object (Figure 1, #102; "At operation 102, the process can include receiving image data including a representation of an object, such as a vehicle.", Paragraph [0030]);

b) obtaining the 3D position of the optical sensor in a 3D map comprising at least a portion of the field of view of the optical sensor (Figure 1, #'s 126 and 128; "A location of the autonomous vehicle capturing the image data can be determined with respect to a three-dimensional surface mesh or map of the environment. For example, the autonomous vehicle can utilize one or more light detection and ranging (LIDAR) sensors, RADAR sensors, GPS sensors, inertial measurement units (IMUs), etc., to localize the autonomous vehicle with respect to the three-dimensional surface or map. Further, since the location of the image sensor relative to the autonomous vehicle is known (or can be determined), a ray can be determined originating from an endpoint associated with the image sensor or autonomous vehicle and passing through an individual object contact point.", Paragraph [0017]);

c) recognizing the object in the image ("In some instances, the operation 112 can include determining a two-dimensional bounding box associated with a particular object to identify the vehicles 108 and 110 in the image data.", Paragraph [0031]);

d) determining a 2D position of the object in the image ("In an example 114, a two-dimensional (2-D) bounding box 116 is shown identifying boundaries of the vehicle 108 in the image data 106.", Paragraph [0031]);

e) determining the plane in the 3D map ("In some instances, the three-dimensional surface mesh component 210 can unproject the rays onto a flat surface or an approximation of the environment. That is, in some instances, the three-dimensional surface mesh component can utilize a simplified model of a surface depending on an amount of information available, a level of accuracy required or desired, and the like. In some instances, the three-dimensional surface mesh component 210 can utilize depth data such as LIDAR data and/or RADAR data to confirm a depth estimate or to verify an accuracy of the model, for example.", Paragraph [0056]);

f) determining, in the 3D map, at least one line of sight vector from the 3D position of the optical sensor to the object based on the projection of the 2D position of the object in the 3D map (Figure 1, #'s 126, 128, 130, and 132, Figures 3A and 3B; "In general, the operation 126 can include determining a ray of the rays 132 and unprojecting the ray onto a three-dimensional surface or map. As noted above, an unprojection can refer to a transformation from a two-dimensional frame of reference into a three-dimensional frame of reference, while a projection can refer to a transformation from a three-dimensional frame of reference into a two-dimensional frame of reference. In some instances, the operation 126 can include determining a location of the image sensor relative to the three-dimensional surface and unprojecting the ray onto the three-dimensional surface based at least in part on the geometry of the ray, intrinsic and extrinsic information associated with the image sensor 130 (e.g., focal length, center, lens parameters, height, direction, tilt, etc.), and the known location of the image sensor 130. In some instances, the ray can be unprojected onto the three-dimensional surface, and the distances between the image capture device and the various object contact points unprojected onto the three-dimensional surface can be determined.", Paragraph [0036]); and

g) determining the 3D position of the object based on the intersection of the at least one line of sight vector with the plane (Figures 3A and 3B; "At operation 134, the process can include determining a three-dimensional (3-D) bounding box and information (e.g., position, velocity, width, length, etc.) associated with the object based at least in part on the object contact point(s) and the distance.", Paragraph [0039]).

Claim 3

Regarding claim 3, dependent on claim 1, Karasev teaches the invention as claimed in claim 1. Karasev further teaches wherein at least one of the steps c) and d) is based on a computer vision algorithm ("For example, the object contact point component can include a machine learning algorithm trained to detect contact points between wheels of a vehicle and the ground. For an individual vehicle contact point (e.g., a left-front wheel or tire of the vehicle), a ray can be determined that originates from an endpoint (e.g., an origin) associated with the image sensor and passes through the object contact point", Paragraph [0013]; "In some instances, the operation 112 can be performed by a machine learning algorithm that has been trained to detect vehicle contact points in image data. For example, the operation 112 can be performed, at least in part, by a neural network trained to receive image data (with or without the two-dimensional bounding box 116 identifying the vehicle 108) and return the vehicle contact points 118, 120, 122, and 124.", Paragraph [0034]), and/or wherein at least one of the steps b) and e) to g) is based on an augmented reality, AR, algorithm (Due to the and/or, a rejection with relevant prior art is not needed for this limitation since the previous limitation has been satisfied).
Claim 4

Regarding claim 4, dependent on claim 1, Karasev teaches the invention as claimed in claim 1. Karasev further teaches wherein the 3D map is generated based on an AR algorithm and based on the image, and/or wherein the 3D map is pre-calculated and downloaded from a database based on the image ("The localization component 208 can include functionality to receive data from the sensor component 202 to determine a position of the computer systems 226 implemented as an autonomous vehicle. For example, the localization component 208 can include a three-dimensional map of an environment and can continuously determine a location of the autonomous vehicle within the map. In some instances, the localization component 208 can utilize SLAM (simultaneous localization and mapping) or CLAMS (calibration, localization and mapping, simultaneously) to receive image data, LIDAR data, RADAR data, IMU data, GPS data, and the like to accurately determine a location of the autonomous vehicle.", Paragraph [0052]).

Claim 5

Regarding claim 5, dependent on claim 1, Karasev teaches the invention as claimed in claim 1. Karasev further teaches wherein the plane corresponds to a plane of the real world in the image, preferably represented by a surface in the 3D map (Rejected as applied to claim 1), where the ground is used as the real world plane in the image; and/or wherein the virtual plane is determined by at least one further positional information about the object, preferably a height information acquired by a height sensor of the object (Due to the and/or, a rejection with relevant prior art is not needed for this limitation since the previous limitation has been satisfied).

Claim 6

Regarding claim 6, dependent on claim 1, Karasev teaches the invention as claimed in claim 1. Karasev further teaches wherein the steps a) to g) are repeated in cycles ("At operation 134, the process can include determining a three-dimensional (3-D) bounding box and information (e.g., position, velocity, width, length, etc.) associated with the object based at least in part on the object contact point(s) and the distance. In an example 136, a three-dimensional (3-D) bounding box 138 is illustrated as being associated with the vehicle 108. In some instances, aspects of the operation 102, 112, and 126 can be repeated or performed continuously to determine updated object contact point(s) over time. Further the operation 134 can include aggregating the object contact point(s) over time or performing processing on a sliding window of N frames to determine a velocity of the vehicle 108 over time. For example, the operation 134 can include determining a velocity of the vehicle 108 by determining a distance that the vehicle 108 has moved from a position at a first time to a position at a second time. In some instances, the operation 134 can include determining a velocity of the vehicle 108 based at least in part on a physics-based model. For example, the physics-based model can include, but is not limited to, a rigid body dynamics model, a vehicle model based on actual vehicle characteristics (e.g., friction, acceleration, length/width, etc.), and/or a simplified model whereby a vehicle is represented as a "bicycle" (e.g., a vehicle with four wheels is simplified as a motorcycle or bicycle).", Paragraph [0039]);

wherein a 2D tracking of the object is performed in the images after the object has been recognized for the first time ("In some instances, the detection component can receive image data captured by an image sensor to detect an object represented in the image data. In some instances, the detection component can include a two-dimensional bounding box component, which can receive the image data and determine a two-dimensional bounding box that identifies the object. In some instances, the two-dimensional bounding box component can perform segmentation and/or classification to identify the object and/or to determine the two-dimensional bounding box.", Paragraph [0015]; "In some instances, the operation 126 can include determining a distance between the image capture device and the object based at least in part on data from one or more LIDAR sensors, RADAR sensors, GPS sensors, stereoscopic cameras, one or more depth cameras (e.g., time of flight sensors), and the like. Further, in some instances, the operation 126 can include determining a ray of the rays 132 and projecting the ray onto a two-dimensional surface that provides a simplified representation of the surface of the ground.", Paragraph [0037]);

wherein the steps c) and d) are skipped as long as the object is tracked in the images (Rejected as applied directly above), wherein the previous limitation has been satisfied; and

wherein the 2D position is obtained from the 2D tracking (Rejected as applied directly above).

Claim 7

Regarding claim 7, dependent on claim 1, Karasev teaches the invention as claimed in claim 1. Karasev further teaches wherein the optical sensor is an optical sensor of a smart-device ("However, the systems and methods described herein may not be limited to these platforms. Instead, the systems and methods described herein may be implemented on any appropriate computer system running any appropriate operating system. Other components of the systems and methods described herein, such as, but not limited to, a computing device, a communications device, mobile phone, a smartphone, a telephony device, a personal computer (PC), a handheld PC, client workstations, thin clients, thick clients, proxy servers, network communication servers, remote access devices, client computers, server computers, routers, web servers, data, media, audio, video, telephony or streaming technology servers, etc., may also be implemented using a computing device.", Paragraph [0105]); and wherein the steps of the method are performed on the processing means of the smart device (Rejected as applied directly above); and/or wherein the processing means of the smart device are configured to communicate with a cloud environment to perform the steps of the method (As best understood, "The systems, modules, and methods described herein can be implemented using one or more virtual machines operating alone or in combination with one other. Any applicable virtualization solution can be used for encapsulating a physical computing machine platform into a virtual machine that is executed under the control of virtualization software running on a hardware computing platform or host. The virtual machine can have both virtual system hardware and guest operating system software.", Paragraph [0128]).

Claim 8

Regarding claim 8, dependent on claim 7, Karasev teaches the invention as claimed in claim 7. Karasev further teaches wherein the position and orientation of the optical sensor is determined based on a positioning unit of the smart-device ("Further, since the location of the image sensor relative to the autonomous vehicle is known (or can be determined), a ray can be determined originating from an endpoint associated with the image sensor or autonomous vehicle and passing through an individual object contact point", Paragraph [0017]; "In some instances, the operation 126 can include determining a location of the image sensor relative to the three-dimensional surface and unprojecting the ray onto the three-dimensional surface based at least in part on the geometry of the ray, intrinsic and extrinsic information associated with the image sensor 130 (e.g., focal length, center, lens parameters, height, direction, tilt, etc.), and the known location of the image sensor 130.", Paragraph [0036]).

Claim 9

Regarding claim 9, dependent on claim 1, Karasev teaches the invention as claimed in claim 1. Karasev further teaches wherein the computer vision algorithm is one of: YOLO, RetinaNet, SSD, and their derivatives (Due to the and/or, a rejection with relevant prior art is not needed for this limitation since the succeeding limitation has been satisfied); and/or wherein the augmented reality algorithm is simultaneous localization and mapping, SLAM (Rejected as applied to claim 4).

Claim 10

Claim 10 is rejected for the reasons outlined above with respect to claim 1 and claim 7.

Claim 11

Regarding claim 11, an independent claim, Karasev teaches instructions which, when the program is executed by a computing device connected to an optical sensor, cause the computing device to carry out the method according to claim 1 (Rejected as applied to claim 1).

Claim 12

Claim 12, an independent claim, is rejected for the same reasons as applied to claim 1.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday, 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RONDE LEE MILLER/
Examiner, Art Unit 2663
/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698
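For readers mapping the rejection back to the claimed method, the following is a minimal illustrative sketch, in Python with NumPy, of the geometry behind steps f) and g) of claim 1 as characterized above: unproject a 2D image position into a line-of-sight ray from the sensor's 3D pose, then intersect that ray with a plane in the 3D map. The function names, the pinhole-camera model, and all numeric values are assumptions made for illustration only; they are not taken from the application or from Karasev.

import numpy as np

def line_of_sight(pixel_uv, K, R, t):
    # Step f): line-of-sight vector from the sensor's 3D position through the
    # object's 2D image position, assuming a simple pinhole camera model.
    # K: 3x3 intrinsics; R: 3x3 map-from-camera rotation; t: camera center in the 3D map.
    uv1 = np.array([pixel_uv[0], pixel_uv[1], 1.0])
    d_cam = np.linalg.inv(K) @ uv1            # ray direction in the camera frame
    d_map = R @ d_cam                         # ray direction in the 3D map frame
    return t, d_map / np.linalg.norm(d_map)

def intersect_plane(origin, direction, plane_normal, plane_point):
    # Step g): the object's 3D position is the intersection of the line of
    # sight with the plane (None if the ray is parallel or points away from it).
    denom = float(plane_normal @ direction)
    if abs(denom) < 1e-9:
        return None
    s = float(plane_normal @ (plane_point - origin)) / denom
    return origin + s * direction if s > 0 else None

# Hypothetical numbers: the map frame is taken to coincide with the camera
# frame, the image y-axis points down, and the ground plane lies 1.5 m below
# the camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
origin, direction = line_of_sight((350.0, 300.0), K, np.eye(3), np.zeros(3))
p = intersect_plane(origin, direction, np.array([0.0, -1.0, 0.0]), np.array([0.0, 1.5, 0.0]))
# p is roughly [0.75, 1.5, 20.0]: a point on the ground about 20 m in front of the sensor.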
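The rejection also maps a detect-then-track loop onto claims 3 and 6: recognize the object with a computer-vision detector on the first frame, then rely on cheaper 2D tracking and skip re-detection while the track holds. A minimal control-flow sketch under that reading, where detect_2d, track_2d, and unproject_to_plane are hypothetical callables supplied by the caller (a YOLO/RetinaNet/SSD-style detector is only one of the options claim 9 lists), not components named in the application or in Karasev:

def localize_per_frame(frames, detect_2d, track_2d, unproject_to_plane):
    # Illustrative control flow only; returns one 3D position per tracked frame.
    box = None
    positions = []
    for frame in frames:                           # steps a) to g) repeated in cycles (claim 6)
        if box is None:
            box = detect_2d(frame)                 # steps c)-d): detector finds the 2D position
        else:
            box = track_2d(frame, box)             # 2D tracking; re-detection is skipped
        if box is None:
            continue                               # track lost: detect again on the next frame
        positions.append(unproject_to_plane(box))  # steps e)-g): ray-plane intersection
    return positions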

Prosecution Timeline

Jun 16, 2023: Application Filed
Oct 06, 2025: Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573215
LEARNING APPARATUS, LEARNING METHOD, OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, LEARNING SUPPORT SYSTEM AND LEARNING SUPPORT METHOD
2y 5m to grant · Granted Mar 10, 2026
Patent 12548114
METHOD FOR CODE-LEVEL SUPER RESOLUTION AND METHOD FOR TRAINING SUPER RESOLUTION MODEL THEREFOR
2y 5m to grant · Granted Feb 10, 2026
Patent 12524833
X-RAY DIAGNOSIS APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM
2y 5m to grant · Granted Jan 13, 2026
Patent 12502905
SECURE DOCUMENT AUTHENTICATION
2y 5m to grant · Granted Dec 23, 2025
Patent 12505581
ONLINE TRAINING COMPUTER VISION TASK MODELS IN COMPRESSION DOMAIN
2y 5m to grant · Granted Dec 23, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 99% (+37.5%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 22 resolved cases by this examiner. Grant probability is derived from the career allow rate.
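The projection figures appear to follow directly from the examiner's career counts shown above; a minimal reconstruction under that assumption (the dashboard's exact formula is not disclosed, and treating the interview lift as a relative, capped multiplier is a guess):

grants, resolved = 16, 22
allow_rate = grants / resolved                                 # 0.727, shown as 73%
interview_lift = 0.375                                         # +37.5%, assumed relative to the baseline
with_interview = min(allow_rate * (1 + interview_lift), 0.99)  # capped, shown as 99%
print(round(allow_rate * 100), round(with_interview * 100))    # prints: 73 99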
