Prosecution Insights
Last updated: April 19, 2026
Application No. 18/274,357

ASSISTANCE SYSTEM, IMAGE PROCESSING DEVICE, ASSISTANCE METHOD AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Final Rejection — §101, §103, §112
Filed: Jul 26, 2023
Examiner: ZHANG, WAYNE
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Omron Corporation
OA Round: 2 (Final)

Grant Probability: 50% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 8 granted / 16 resolved; -12.0% vs TC avg)
Interview Lift: +43.6% (strong; grant rate of resolved cases with vs. without interview)
Avg Prosecution: 3y 3m (typical timeline; 22 currently pending)
Total Applications: 38 (career history, across all art units)

Statute-Specific Performance

§101: 19.2% (-20.8% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 25.1% (-14.9% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 16 resolved cases
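As a quick illustrative check (not part of the tool's output), the four per-statute deltas shown above are each consistent with a single Tech Center average estimate of 40%; that 40% figure is inferred from the displayed numbers, not stated anywhere on the page:

```python
# Recompute each "vs TC avg" delta from the displayed allow rates.
# TC_AVG = 40.0 is an inference: every displayed delta equals the
# statute's rate minus 40.0, suggesting one shared TC average estimate.
TC_AVG = 40.0  # assumed Tech Center average estimate (percent)

rates = {"§101": 19.2, "§103": 42.4, "§102": 11.0, "§112": 25.1}

for statute, rate in rates.items():
    delta = round(rate - TC_AVG, 1)
    print(f"{statute}: {rate}% ({delta:+}% vs TC avg)")
```

Running this reproduces the deltas on the page (-20.8, +2.4, -29.0, -14.9), which is why a single 40% baseline is a reasonable reading of the chart note.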

Office Action

Rejection grounds: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The interpretations under 35 U.S.C. 112(f) have been withdrawn in light of the amended claims. The rejection under 35 U.S.C. 112(b) has been withdrawn in light of the amended claims. However, in light of the amended claims, a new rejection under 35 U.S.C. 112(b) has been advanced.

The Applicant's arguments against the rejection under 35 U.S.C. 101 have been considered but are unpersuasive. A person can mentally measure a position/orientation of point cloud data and compare it with template data. They can also mentally calculate a correlation value between point cloud data and template/reference data and determine how close the point cloud data is to the reference data. Moving a camera around with a robot is specified at a high level of generality and is a well-understood extra-solution activity of data inputting. Generating point cloud data is also an additional element of data gathering, and displaying a map is an additional element of data outputting. Thus, the rejection under 35 U.S.C. 101 is maintained. The examiner recommends amending claim 1 so that it includes an improvement to known ideas, such as how the invention improves the success rate of gripping an object due to the generated map and calculations (as shown in specification paragraphs [0005]-[0007]).

Applicant's arguments with respect to claims 1-4 and 6-11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. § 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1 and 9-10 are rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

Claim 1 recites "a robot configured to sequentially move the camera to a plurality of measurement positions within the movement range". There is insufficient antecedent basis for "the movement range". For examination purposes, "the movement range" will be interpreted as "a movement range".

Claim 1 recites "measuring, for each of the plurality of measurement positions, position and orientation of the object by three-dimensional search collating a plurality of pieces of template data with the three-dimensional point cloud data". It is unclear what the Applicant is claiming, as the language appears to be ambiguous. For examination purposes, the examiner will interpret this limitation as "measuring, for each of the plurality of measurement positions, position and orientation of the object by comparing the three-dimensional point cloud data with template data".

Claim 1 recites "calculating a correlation value between the three-dimensional point cloud data and particular template data that is most similar to the three-dimensional point cloud data among the plurality of pieces of template data". It is unclear whether the Applicant is referring to the "three-dimensional point cloud data" as the point cloud data of the object or the point cloud data of the visual field area. For examination purposes, the examiner will interpret "three-dimensional point cloud data" as "three-dimensional point cloud data of the object", as supported by specification [0071].

Claims 2-4 and 6-8 are rejected for their dependencies on claim 1.
Claims 9-10 correspond to claim 1 and thus are rejected for the same reasons as claim 1. Claim 11 is also rejected for its dependency on claim 10.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4 and 6-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites:

"measuring, for each of the plurality of measurement positions, position and orientation of the object by three-dimensional search collating a plurality of pieces of template data with the three-dimensional point cloud data", which can be reasonably interpreted as a human observer mentally measuring the position of an object and comparing it with template data.

"calculating a correlation value between the three-dimensional point cloud data and particular template data that is most similar to the three-dimensional point cloud data among the plurality of pieces of template data", which can be reasonably interpreted as a human observer mentally comparing point cloud data with reference data and determining how similar they are to each other.

"generating three-dimensional point cloud data of a visual field area of the camera based on a captured image obtained by the camera, for each of the plurality of measurement positions" is a well-understood, routine, and conventional insignificant extra-solution activity of data gathering.

"a camera configured to perform image capturing of an object" is a well-understood, routine, and conventional insignificant extra-solution activity of data gathering.
"a robot configured to sequentially move the camera to a plurality of measurement positions within the movement range" simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception.

"displaying, on a display, a map representing a correspondence relationship between each of the plurality of measurement positions and the correlation value" is a well-understood, routine, and conventional insignificant extra-solution activity of data outputting.

Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to additional elements of receiving a measurement position and displaying the correlation value corresponding to that measurement position. Receiving a position is an extra-solution activity of data gathering, and displaying the correlation value is data outputting.

Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to additional elements of receiving a measurement position and displaying an image corresponding to that measurement position. Receiving a position is an extra-solution activity of data gathering, and displaying the image is data outputting.

Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of registering a position of a model of an object such that it aligns with the object, generating a virtual image of the model, and displaying it. A human observer can mentally determine a model's position and orientation aligning with the object's position and orientation. Generating a virtual image of the model is an extra-solution activity of data gathering, and displaying the image is data outputting.

Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of repeatedly measuring each position, calculating a value representing the accuracy of each measurement, and displaying the result.
A human observer can mentally measure a plurality of positions and determine how accurate they were at each position. Displaying the accuracy is an extra-solution activity of data outputting.

Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of calculating a difference between how much a virtual model moved from its reference position and how much the position of the object moved from its reference position, and displaying the difference. A person can mentally determine the difference between how much a virtual model moved and how much an object moved from its reference position. Displaying this value is an extra-solution activity of data outputting.

Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an additional element of representing a map on a polar coordinate system and having the measurement positions on the same plane. These are generically recited insignificant extra-solution activities of data outputting, and they do not provide any meaningful limitations on performing the abstract idea.

Claim 9 is analogous to claim 1, additionally reciting an image processing device. The image processing device is an additional element that adds the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Thus, claim 9 is rejected for the same reasons as claim 1.

Claim 10 is analogous to claim 1. Thus, claim 10 is rejected for the same reasons as claim 1.

Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an additional element of a non-transitory computer-readable storage medium. This adds the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea.
The claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Toda (US 20200311854 A1) in view of Du (US 20190286932 A1).

Regarding claim 1, Toda discloses an assistance system (Toda, paragraph [0002], "The present disclosure relates to an object detecting method, an object detecting device, and a robot system.") comprising:

a camera configured to perform image capturing of an object (Toda, paragraph [0025], "The camera 3 can image the target object 91 placed on the table 92"),

a robot configured to sequentially move the camera to a plurality of measurement positions within the movement range (Toda, paragraph [0099], Fig. 4, "When the estimation model is updated in this way, as shown in FIG. 4, the position x of the imaging position/posture moves to the position "a", the position "b", a position "c", and a position "d""),

a processor (Toda, paragraph [0043], "The object detecting device 4 shown in FIG. 2 includes a processor 4a, a storing section 4b, and an external interface 4c"), and

a memory storing computer-executable instructions that, when executed by the processor, cause the processor to perform operations (Toda, paragraph [0045], "Examples of the storing section 4b include a volatile memory such as a RAM (Random Access Memory), a nonvolatile memory such as a ROM (Read Only Memory), and a detachable external storage device"),

the operations including: generating three-dimensional point cloud data of a visual field area of the camera based on a captured image obtained by the camera, for each of the plurality of measurement positions (Toda, paragraph [0027], "The camera 3 is one or both of a device capable of acquiring two-dimensional images such as a color image, a monochrome image, and an infrared image of the target object 91 and the periphery of the target object 91, that is, a 2D camera and a device capable of acquiring a depth image (surface point group data) of the target object 91 and the periphery of the target object 91, that is, a 3D camera*."),

*3D cameras are capable of generating point cloud information, as supported by Wikipedia screenshots (omitted here).

measuring, for each of the plurality of measurement positions, position and orientation of the object by three-dimensional search collating a plurality of pieces of template data with the three-dimensional point cloud data (Toda, paragraph [0067], "FIG. 5 is an example of the first image obtained by imaging a state in which bolts are used as the target objects 91 and loaded in bulk. Lines surrounding the contours of the target objects 91 successfully recognized by the object-position/posture calculating section 42 are given to the target objects 91."),

calculating a correlation value between the three-dimensional point cloud data and particular template data that is most similar to the three-dimensional point cloud data among the plurality of pieces of template data (Toda, paragraph [0065], "Examples of one of specific methods of recognizing an object position/posture of the target object 91 include a method of matching the first image and design data of the target object 91. The design data of the target object 91 is, for example, data of three-dimensional CAD (Computer-Aided Design) that can be treated in three-dimensional design drawing software and data of three-dimensional CG (Computer Graphics) that is configured by constituent elements of a model such as dots, lines, and surfaces and can be treated by three-dimensional computer graphics software").

While Toda teaches displaying on a display (Toda, paragraph [0038], "The object detecting device 4 shown in FIG. 1 includes a display section 47 that displays at least one of the number of recognized object positions/postures output from the recognition evaluating section 43, the task evaluation value output from the task evaluating section 45, and the evaluation indicator including the number of recognized object positions/postures and the task evaluation value"), Toda does not teach "a map representing a correspondence relationship between each of the plurality of measurement positions and the correlation value".

However, Du teaches a map representing a correspondence relationship between each of the plurality of measurement positions and the correlation value (Du, paragraph [0049], "FIG. 3 illustrates the process by which the object detection system generates a heat map based on an input image and a target object keyword. For example, as mentioned above and as shown in FIG. 3, the object detection system can generate a heat map 310a by first transforming an embedding neural network 302 to a fully-convolutional dense tagging neural network 304.").*

*Du's heat map is a relationship between positions and a general value associated with each position. Using this concept, it can be incorporated into Toda's measurement positions and their correlation values.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to display a map that shows Toda's correlation value and its location on the images, as taught by Du. The suggestion/motivation for doing so would have been to provide a visualization of every image and its correlation value, resulting in more optimal imaging positions. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Toda in view of Du to obtain the invention as specified in claim 1.

Regarding claim 6, Toda in view of Du discloses the assistance system according to claim 1, wherein the measuring includes performing a plurality of times of measurement for each of the plurality of measurement positions (Toda, paragraph [0099], Fig. 4, "When the estimation model is updated in this way, as shown in FIG. 4, the position x of the imaging position/posture moves to the position "a", the position "b", a position "c", and a position "d""), and the operations further include: calculating a value representing a repetition accuracy of the plurality of times of measurement, and displaying the value representing the repetition accuracy on the display (Toda, paragraph [0070], "When the number of recognized object positions/postures is large, the recognition evaluating section 43 can evaluate that an imaging position/posture in which the first image is captured is an imaging position/posture in which the number of successful recognitions is large. The recognition evaluating section outputs the number of recognized object positions/postures to the imaging-position/posture determining section 46.").

Regarding claim 11, Toda in view of Du discloses a non-transitory computer-readable storage medium storing a program which, when executed by a computer, causes the computer to perform the assistance method according to claim 10 (Toda, paragraph [0045], "The storing section 4b stores various programs and the like executable by the processor 4a. Examples of the storing section 4b include a volatile memory such as a RAM (Random Access Memory), a nonvolatile memory such as a ROM (Read Only Memory), and a detachable external storage device. Besides the programs, data output from the sections explained above, setting values, and the like are also stored in the storing section 4b.").

Claim 9 corresponds to claim 1, additionally reciting an image processing device (Toda, paragraph [0044], "The processor 4a reads out and executes various programs and the like stored in the storing section 4b. Consequently, the processor 4a realizes various arithmetic operations, various kinds of processing, and the like in the object detecting device 4."). Thus, claim 9 is rejected for the same reasons of obviousness as claim 1.

Claim 10 corresponds to claim 1.
Thus, claim 10 is rejected for the same reasons of obviousness as claim 1.

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Toda (US 20200311854 A1) in view of Du (US 20190286932 A1) and in further view of Technology for Teachers and Students on YouTube ("Beginner's Guide to Google Maps").

Regarding claim 2, Toda in view of Du discloses the assistance system according to claim 1. Toda in view of Du does not teach "wherein the operations further include: receiving, on the map, designation of one measurement position of the plurality of measurement positions, and in response to the designation of the one measurement position, displaying the correlation value corresponding to the one measurement position on the display". However, Technology for Teachers and Students teaches wherein the operations further include: receiving, on the map, designation of one measurement position of the plurality of measurement positions, and in response to the designation of the one measurement position, displaying the correlation value corresponding to the one measurement position on the display (Technology for Teachers and Students, 0:34, screenshot omitted).*

*Google Maps takes the location of a position and displays arbitrary values associated with it. Using this concept, Toda (in view of Du) can display the correlation value associated with each measurement position.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to implement a display for the correlation values of Toda (in view of Du) and their respective positions, as taught by Technology for Teachers and Students. The suggestion/motivation for doing so would have been to provide a visualization of a position's correlation value.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Toda in view of Du and in further view of Technology for Teachers and Students to obtain the invention as specified in claim 2.

Regarding claim 3, Toda in view of Du discloses the assistance system according to claim 1. Toda in view of Du does not teach "wherein the operations further include: receiving, on the map, designation of one measurement position of the plurality of measurement positions, and in response to the designation of the one measurement position, displaying an image generated from the three-dimensional point cloud data corresponding to the one measurement position on the display". However, Technology for Teachers and Students teaches wherein the operations further include: receiving, on the map, designation of one measurement position of the plurality of measurement positions, and in response to the designation of the one measurement position, displaying an image generated from the three-dimensional point cloud data corresponding to the one measurement position on the display (Technology for Teachers and Students, 0:34, screenshot omitted).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to display Toda's (in view of Du) image when selecting a measurement position, as taught by Technology for Teachers and Students. The suggestion/motivation for doing so would have been to provide a visualization of the selected position.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Toda in view of Du and in further view of Technology for Teachers and Students to obtain the invention as specified in claim 3.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Toda (US 20200311854 A1) in view of Du (US 20190286932 A1) and in further view of Iino (US 5647019) and Sato (US 20100216575 A1).

Regarding claim 8, Toda in view of Du discloses the assistance system according to claim 1. Toda in view of Du does not teach "wherein the plurality of measurement positions are on a same spherical surface and the map is represented in a polar coordinate system". However, Iino teaches wherein the plurality of measurement positions are on a same spherical surface and the map is represented in a polar coordinate system (Iino, Col. 3, Lines 9-18, "First, the image of an object which is picked up by a television camera connected to a computer is utilized to map the position of the object in a three-dimensional space; in which, according to the invention, the coordinates of the vertexes of a camera image display range are employed as a two-dimensional coordinate space, coordinates locating the image of the object in the camera image display range, and camera parameters are used, so that the position of the object is mapped in a spherical polar coordinate system with the position of the camera as the origin").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to map Toda's (in view of Du) positions onto a spherical polar coordinate system, as taught by Iino.
The suggestion/motivation for doing so would have been that a polar coordinate system is better suited to analyzing rotational motion. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Toda in view of Du and Iino does not teach "with a center of the sphere as an origin". However, Sato teaches with a center of the sphere as an origin (Sato, paragraph [0067], "For example, by using polar coordinates having the center of the virtual sphere as the origin, the center position of the circular dimple is defined by a polar angle .theta. and an azimuth angle .phi..").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to have the center of Toda's (in view of Du and Iino) sphere as the origin, as taught by Sato. The suggestion/motivation for doing so would have been to keep the origin consistent and so that all points on the surface are equidistant from the center. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Toda in view of Du and in further view of Iino and Sato to obtain the invention as specified in claim 8.

Allowable Subject Matter

Claims 4 and 7 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and amended to overcome all rejections set forth in the office action.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WAYNE ZHANG, whose telephone number is (571) 272-0245. The examiner can normally be reached Monday-Friday, 10:00-6:00 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WAYNE ZHANG/
Examiner, Art Unit 2672

/GANDHI THIRUGNANAM/
Primary Examiner, Art Unit 2672

Prosecution Timeline

Jul 26, 2023
Application Filed
Jul 29, 2025
Non-Final Rejection — §101, §103, §112
Oct 31, 2025
Response Filed
Jan 26, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591990
METHOD AND APPARATUS FOR GENERATING SPATIAL GEOMETRIC INFORMATION ESTIMATION MODEL
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12591958
INFRA-RED CONTRAST ENHANCEMENT FILTER
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12561843
METHOD FOR MANAGING IMAGE DATA, AND VEHICLE LIGHTING SYSTEM
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12536629
Image Processing Method and Electronic Device
Granted Jan 27, 2026 • 2y 5m to grant
Patent 12536667
METHOD AND FACILITY FOR SEGMENTATION OF HIGH-CONTRAST OBJECTS IN X-RAY IMAGES
Granted Jan 27, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50%
With Interview: 94% (+43.6%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
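The headline projections fit together arithmetically. A minimal sketch of that relationship, assuming (per the note above) that the grant probability is the career allow rate, that the with-interview figure adds the interview lift to the base rate, and that capping at 100% is appropriate (the cap is an assumption, not stated on the page):

```python
# Illustrative reconstruction of the projection figures (assumptions:
# grant probability = career allow rate; with-interview probability =
# base probability + interview lift, capped at 100%).
granted, resolved = 8, 16        # examiner's career record
interview_lift = 43.6            # interview lift, percentage points

base = 100 * granted / resolved                      # 8/16 -> 50.0%
with_interview = min(base + interview_lift, 100.0)   # 93.6%, shown rounded as 94%

print(f"Grant probability: {base:.0f}%")
print(f"With interview:    {with_interview:.0f}% (+{interview_lift}%)")
```

This reproduces the 50% and 94% tiles, which suggests the dashboard's "with interview" figure is simply the base rate plus the lift rather than an independently modeled probability.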
