Prosecution Insights
Last updated: April 19, 2026
Application No. 17/568,302

INFORMATION PROCESSING SYSTEM, SENSOR SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Jan 04, 2022
Examiner: NGUYEN, RACHEL NICOLE
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Nuvoton Technology Corporation Japan
OA Round: 3 (Final)

Grant Probability: 21% (At Risk); 84% with interview
Expected OA Rounds: 4-5
Median Time to Grant: 4y 1m

Examiner Intelligence

Career Allow Rate: 21% (6 granted / 28 resolved); -30.6% vs TC avg. Grants only 21% of cases.
Interview Lift: +62.5%, a strong lift in allowance rate among resolved cases with an interview.
Typical Timeline: 4y 1m average prosecution; 49 applications currently pending.
Career History: 77 total applications across all art units.

Statute-Specific Performance

§101: 1.5% (-38.5% vs TC avg)
§102: 24.7% (-15.3% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Comparison baseline is the Tech Center average estimate. Based on career data from 28 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The following addresses applicant's remarks/amendments dated 29 December 2025. Claims 1, 4, 5, 8, 10-12, 14-17, and 20 were amended. Claims 3, 6, and 9 were cancelled. New claims 21-23 were added. Therefore, claims 1-2, 4-5, 7-8, and 10-23 are currently pending in the current application and are addressed below.

Response to Arguments

Applicant's arguments filed 29 December 2025 have been fully considered but they are not persuasive. On page 8 of the Remarks, Applicant argues that Toriu fails to disclose separate two-dimensional detection results and three-dimensional detection results, citing Fig. 8, which is a superimposed data set of RGB and distance values, as evidence. However, MPEP 2111 states that pending claims must be given their broadest reasonable interpretation consistent with the specification. Toriu teaches both the two-dimensional detection result for the object and the three-dimensional detection result for the object. Before the data are superimposed in Fig. 8, both the RGB data and the distance values with corresponding coordinates must be recorded. In step S11 of Fig. 5, Toriu records color image data, as shown in Fig. 6 (Paragraph [0043]). In step S12 of Fig. 5, Toriu records both infrared image data and distance data (Fig. 7 and Paragraph [0046]). Therefore, the two-dimensional detection result for the object and the three-dimensional detection result for the object are both composed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 7, 10, 12-13, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Toriu et al., JP 2013207415 A ("Toriu") in view of Hall et al., US 20100302528 A1 ("Hall").

Regarding claim 1, Toriu discloses an information processing system to be applied for an image sensor having a plurality of first pixels with sensitivity for visible light (Fig. 4, filter 22a has RGB filters for CCD image sensor 22b, Paragraphs [0034]-[0036]) and a plurality of second pixels with sensitivity for infrared light (Fig. 4, filter 22a has IR filter for CCD image sensor 22b, Paragraphs [0034]-[0036]), the information processing system comprising: a processor configured to perform operations comprising (Fig.
1, control unit 13, control unit 23, Paragraphs [0028], [0038]): acquiring first brightness information relating to pixel values of the plurality of first pixels from the plurality of first pixels (Figs. 1 and 5, step S11, control unit 23 processes the color image data shown in Fig. 6, Paragraph [0043]), wherein the first brightness information constitutes a brightness image that is a set of outputs of the plurality of first pixels (Figs. 5-6, step S11, color image data, Paragraph [0043]); acquiring second brightness information relating to pixel values of the plurality of second pixels from the plurality of second pixels (Figs. 1 and 5, step S12, control unit 23 processes infrared image data, Paragraphs [0044]-[0045]), wherein the second brightness information constitutes a brightness image that is a set of outputs of the plurality of second pixels (Fig. 5, step S12, infrared image data, Paragraphs [0044]-[0045]); acquiring distance information (Figs. 1 and 7, computer 30 and control unit 31 record distance data D(n), Paragraph [0046]); detecting, as a two-dimensional detection result for the object, the object based on reference brightness information selected from the group consisting of the first brightness information and the second brightness information (Fig. 6, RGB data, Paragraph [0043]); detecting, as a three-dimensional detection result for the object, the object based on the distance information (Fig. 7, distance data D(n), Paragraph [0046]); and composing: the two-dimensional detection result for the object (Fig. 6, RGB data, Paragraph [0043]); and the three-dimensional detection result for the object (Fig. 7, distance data D(n), Paragraph [0046]).

Toriu does not teach: acquiring distance information from at least one second pixel of the plurality of second pixels. However, Hall teaches a LIDAR device with IR and RGB pixels that detect an intensity signal; the IR pixel is also used for a distance measurement (Fig. 1, APD Detectors 30, 32, 34, 36, Paragraphs [0018], [0033]-[0034]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Toriu's imaging system by calculating both the intensity and the distance from the IR image sensor rather than using a separate distance sensor, as disclosed by Hall. One of ordinary skill in the art would have been motivated to make this modification in order to "[provide] the ability to capture a full color image using the LiDAR's own supplied light, thus eliminating the need for daylight or other artificial light sources[,] and [eliminate] the problems caused by uneven light conditions (such as shadows) at the time of capture", as suggested by Hall (Paragraph [0005]).

Regarding claim 2, Toriu, as modified in view of Hall, discloses the information processing system of claim 1, wherein the first brightness information includes light and darkness information representing intensity of light input to the first pixel (Toriu, Paragraph [0043]: color image data corresponds to amount of charge accumulated in light receiving elements).

Regarding claim 7, Toriu, as modified in view of Hall, discloses the information processing system of claim 1, wherein the distance information includes information obtained by a Time-of-Flight method (Toriu, Paragraphs [0009] and [0061]: distance measuring based on time or phase difference).
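To make the claim 1 mapping easier to follow, here is a minimal sketch of the claimed pipeline as the rejection reads it onto Toriu and Hall: brightness images from visible-light and IR pixels, distance derived from the same IR pixels by a time/phase (ToF) measurement, separate two-dimensional and three-dimensional detections, and a composition of the two results. All array shapes, threshold values, and helper names are illustrative assumptions, not code from any cited reference.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def distance_from_tof(phase_shift_rad: np.ndarray, mod_freq_hz: float) -> np.ndarray:
    """Indirect ToF: distance from the phase shift of modulated IR light
    (the kind of time/phase-based measurement Toriu [0009], [0061] describes)."""
    return (C * phase_shift_rad) / (4.0 * np.pi * mod_freq_hz)

def detect_2d(brightness: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Two-dimensional detection: a mask of object pixels taken from the
    reference brightness image (here, a simple intensity threshold)."""
    return brightness > thresh

def detect_3d(distance_m: np.ndarray, near: float = 0.2, far: float = 1.0) -> np.ndarray:
    """Three-dimensional detection: object pixels selected by distance range."""
    return (distance_m > near) & (distance_m < far)

# First/second brightness information from RGB and IR pixels (toy data).
rgb_brightness = np.random.rand(480, 640)   # set of outputs of the first pixels
ir_brightness  = np.random.rand(480, 640)   # set of outputs of the second pixels
phase          = np.random.rand(480, 640)   # measured by the same IR pixels (Hall)

distance = distance_from_tof(phase, mod_freq_hz=20e6)

# Compose the two-dimensional and three-dimensional detection results.
result_2d = detect_2d(rgb_brightness)
result_3d = detect_3d(distance)
composed  = result_2d & result_3d            # one way to "compose" the two results
```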
Regarding claim 10, Toriu, as modified in view of Hall, discloses the information processing system of claim 1, wherein the composing comprises composing the two-dimensional detection result and the three-dimensional detection result by making a correction of the three-dimensional detection result based on the two-dimensional detection result (Toriu, Figs. 1 and 8, superimposition unit 31 superimposes distance data on color image data, Paragraphs [0048]-[0049]).

Regarding claim 12, Toriu, as modified in view of Hall, discloses the information processing system of claim 1, wherein the operations further comprise outputting target information relating to the object based on the two-dimensional detection result and the three-dimensional detection result (Toriu, Fig. 5, steps S11, S12, S13 repeated for moving objects, Paragraph [0041]; Fig. 9, Paragraph [0052]).

Regarding claim 13, Toriu, as modified in view of Hall, discloses the information processing system of claim 12, wherein the target information includes one or more pieces of information selected from the group consisting of information about a position of the object, information about a moving direction of the object, information about a moving speed of the object and information about a type of the object (Toriu, Fig. 9, Paragraph [0052]: use depth data).

Regarding claim 17, Toriu, as modified in view of Hall, discloses the information processing system of claim 1, wherein the operations further comprise outputting an information processing result, obtained based on the first brightness information, the second brightness information and the distance information (Toriu, Figs. 1 and 8, computer 30 displays 3D image data on display unit 34, Paragraph [0050]), and the information processing result relates to a state of a monitoring area within an angle of view of the image sensor (Toriu, Fig. 1, target space S1, Paragraph [0023]).

Regarding claim 18, Toriu, as modified in view of Hall, discloses the information processing system of claim 17, wherein the information processing result includes one or more pieces of information selected from the group consisting of: information about whether or not the object is present in the monitoring area; information about a position in the monitoring area, of the object present in the monitoring area (Toriu, Figs. 1 and 8, computer 30 displays 3D image data on display unit 34, Paragraphs [0049]-[0050]); and information about an attribute of the object.

Regarding claim 19, Toriu, as modified in view of Hall, discloses a sensor system, comprising the information processing system of claim 1 and the image sensor (Toriu, Fig. 3, CCD image sensor 22b, Paragraph [0034]).
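As an illustration of the claim 12/13 limitations, the sketch below derives target information (position, moving direction, moving speed, type) from two composed detections of the same object taken a known interval apart. The data-class fields, positions, and object label are hypothetical, not drawn from Toriu.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TargetInfo:
    position_m: tuple        # (x, y, z) from the composed 2D+3D result
    direction: tuple         # unit vector of motion between frames
    speed_mps: float
    object_type: str         # e.g., from a 2D classifier on the brightness image

def target_info(prev_pos: np.ndarray, cur_pos: np.ndarray,
                dt_s: float, object_type: str) -> TargetInfo:
    """Derive the claim 13 pieces of target information from two composed
    detections of the same object observed dt_s seconds apart."""
    delta = cur_pos - prev_pos
    dist = float(np.linalg.norm(delta))
    direction = tuple(delta / dist) if dist > 0 else (0.0, 0.0, 0.0)
    return TargetInfo(tuple(cur_pos), direction, dist / dt_s, object_type)

info = target_info(np.array([1.0, 0.0, 4.0]),
                   np.array([1.2, 0.0, 3.6]),
                   dt_s=0.1, object_type="person")
print(info.speed_mps)   # ~4.47 m/s toward the sensor
```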
Claims 4-5, 8, 14-16, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Toriu, as modified in view of Hall, in further view of Bamji et al., US 20110285910 A1 ("Bamji").

Regarding claim 4, Toriu, as modified in view of Hall, discloses the information processing system of claim 1. Toriu, as modified in view of Hall, does not teach: wherein the detecting the object based on the distance information comprises detecting the object based on not only the distance information but also one or more pieces of information selected from the group consisting of the first brightness information and the second brightness information (Bamji, Fig. 6A, step 460, edge map based on RGB and depth image, Paragraph [0063]). However, Bamji teaches a time of flight (TOF) system that detects IR and visible light. A processor forms an RGB image from visible light, and a depth image and a confidence map, which contains brightness information, from IR-NIR light. The depth and RGB images are combined into a 3D estimate, and an edge map is created from the combined RGB and depth image. The 3D depth map is then further refined, and alpha-matting is performed to determine whether a pixel is in the foreground or background (Fig. 5, processor 160, Figs. 6A-6B, RGB image, depth image, confidence map, steps 400-480, Paragraphs [0008], [0052], [0055]-[0063], [0065]-[0066], [0071]-[0072]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method of creating a depth map, disclosed by Toriu, as modified in view of Hall, with Bamji's method of detecting a foreground image based on the depth and RGB images. One of ordinary skill in the art could have combined the two methods in order to employ inexpensive arrays of RGB and Z pixels while providing high quality video manipulation, as suggested by Bamji (Paragraph [0023]).

Regarding claim 5, Toriu, as modified in view of Hall, discloses the information processing system of claim 1. Toriu, as modified in view of Hall, does not teach: wherein the detecting the object based on the distance information comprises detecting the object based on not only the distance information but also the first brightness information corrected so as to match a timing of the second brightness information (Bamji, Fig. 6B, steps 470-480, combines coarse depth image, RGB image, and confidence map to create refined depth image, then performs alpha-matting, Paragraphs [0065]-[0066], [0071]-[0072]). However, Bamji teaches a time of flight (TOF) system that detects IR and visible light. A processor forms an RGB image from visible light, and a depth image and a confidence map, which contains brightness information, from IR-NIR light. The depth and RGB images are combined into a 3D estimate, and an edge map is created from the combined RGB and depth image. The 3D depth map is then further refined, and alpha-matting is performed to determine whether a pixel is in the foreground or background (Fig. 5, processor 160, Figs. 6A-6B, RGB image, depth image, confidence map, steps 400-480, Paragraphs [0008], [0052], [0055]-[0063], [0065]-[0066], [0071]-[0072]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method of creating a depth map, disclosed by Toriu, as modified in view of Hall, with Bamji's method of detecting a foreground image based on the combination of the depth and RGB images as well as the confidence map. One of ordinary skill in the art could have combined the two methods in order to employ inexpensive arrays of RGB and Z pixels while providing high quality video manipulation, as suggested by Bamji (Paragraph [0023]).
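A rough sketch of the Bamji-style flow relied on for claims 4 and 5: a low-resolution depth image is mapped onto the RGB grid as a coarse estimate, an edge map is built from the combined brightness and depth data, and a simple alpha matte splits foreground from background. The nearest-neighbor upsampling, gradient thresholds, and depth cutoff below are stand-ins for the reference's actual steps, assumed for illustration only.

```python
import numpy as np

def upsample_depth(depth_z: np.ndarray, rgb_shape: tuple) -> np.ndarray:
    """Map each low-resolution Z pixel onto the higher-resolution RGB grid
    by nearest-neighbor repetition (a coarse depth estimate)."""
    ry = rgb_shape[0] // depth_z.shape[0]
    rx = rgb_shape[1] // depth_z.shape[1]
    return np.repeat(np.repeat(depth_z, ry, axis=0), rx, axis=1)

def edge_map(gray: np.ndarray, depth: np.ndarray, t: float = 0.2) -> np.ndarray:
    """Mark edges wherever brightness or depth changes sharply
    (an edge map from the combined RGB and depth data)."""
    g = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    d = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    return (g > t) | (d > t)

def alpha_matte(depth: np.ndarray, fg_max_m: float = 1.5) -> np.ndarray:
    """Alpha-matting reduced to a hard foreground/background split by depth."""
    return (depth < fg_max_m).astype(float)

rgb_gray = np.random.rand(480, 640)       # luminance of the RGB image
z_coarse = np.random.rand(120, 160) * 4   # low-res ToF depth (meters)

depth_full = upsample_depth(z_coarse, rgb_gray.shape)
edges      = edge_map(rgb_gray, depth_full)
alpha      = alpha_matte(depth_full)      # 1.0 = foreground pixel
```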
Regarding claim 8, Toriu, as modified in view of Hall, discloses the information processing system of claim 1. Toriu, as modified in view of Hall, does not teach: wherein the operations further comprise correcting the distance information based on the distance information and one or more pieces of information selected from the group consisting of the first brightness information and the second brightness information. However, Bamji teaches separating the IR light into a depth image and a confidence map, where the confidence map contains information about the brightness. The confidence can be used to weight the Z-pixels in the depth maps. This routine is carried out by a processor (Fig. 5, processor 160, Fig. 6A, depth map, confidence map, step 430, Paragraphs [0008], [0056]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method of creating a depth map, disclosed by Toriu, as modified in view of Hall, with Bamji's method of using the IR brightness image to weight the confidence in depth data. One of ordinary skill in the art could have combined the two methods in order to employ inexpensive arrays of RGB and Z pixels while providing high quality video manipulation, as suggested by Bamji (Paragraph [0023]).

Regarding claim 14, Toriu, as modified in view of Hall, discloses the information processing system of claim 1. Toriu, as modified in view of Hall, does not teach: wherein the operations further comprise separating the object from a peripheral area located around the object. However, Bamji does teach a method of image processing where a processor forms a 3D depth image from an RGB image, a depth image, and a confidence map which contains IR brightness information. The foreground pixels are separated from the background pixels in an alpha-matting image step (Fig. 5, processor 160, Figs. 6A-6B, alpha-matting image, step 480, Paragraphs [0008], [0052], [0071]-[0072]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method of creating a depth map, disclosed by Toriu, as modified in view of Hall, with Bamji's method of detecting a foreground image. One of ordinary skill in the art could have combined the two methods in order to employ inexpensive arrays of RGB and Z pixels while providing high quality video manipulation, as suggested by Bamji (Paragraph [0023]).

Regarding claim 15, Toriu, as modified in view of Hall and Bamji, discloses the information processing system of claim 14, wherein the detecting the object based on the distance information comprises detecting the object based on information in which the peripheral area is removed from the distance information (Bamji, Fig. 6B, alpha-matting image, step 480, Paragraphs [0008], [0052], [0071]-[0072]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method of creating a depth map, disclosed by Toriu, as modified in view of Hall, with Bamji's method of detecting a foreground image. One of ordinary skill in the art could have combined the two methods in order to employ inexpensive arrays of RGB and Z pixels while providing high quality video manipulation, as suggested by Bamji (Paragraph [0023]).
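The confidence-weighting idea cited for claim 8 can be pictured as follows: each Z pixel is blended with a confidence-weighted local average, so low-confidence distance values are corrected using the IR brightness information. The kernel size and blending rule are assumptions for illustration, not Bamji's actual routine.

```python
import numpy as np

def correct_depth(depth: np.ndarray, confidence: np.ndarray,
                  kernel: int = 3) -> np.ndarray:
    """Correct distance information by confidence-weighting each Z pixel:
    low-confidence pixels are pulled toward a confidence-weighted average
    of their neighborhood (one reading of Bamji's weighting idea)."""
    pad = kernel // 2
    d = np.pad(depth, pad, mode="edge")
    c = np.pad(confidence, pad, mode="edge")
    out = np.empty_like(depth)
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            dw = d[y:y + kernel, x:x + kernel]      # depth neighborhood
            cw = c[y:y + kernel, x:x + kernel]      # confidence neighborhood
            local = (dw * cw).sum() / max(cw.sum(), 1e-9)
            a = confidence[y, x]                    # trust in the raw pixel
            out[y, x] = a * depth[y, x] + (1.0 - a) * local
    return out

depth      = np.random.rand(60, 80) * 4    # meters
confidence = np.random.rand(60, 80)        # from IR brightness, scaled to 0..1
refined    = correct_depth(depth, confidence)
```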
Regarding claim 16, Toriu, as modified in view of Hall, discloses the information processing system of claim 1. Toriu, as modified in view of Hall, does not teach: wherein the operations further comprise correcting a time difference between the first brightness information and the second brightness information. However, Bamji teaches synchronizing a depth image to an RGB image video frame. In a following step, a confidence map, which contains information corresponding to the brightness of the IR-NIR light used in generating the depth image, is created. Thus, the confidence map is also synchronized to the frame of the RGB image (Fig. 6A, steps 410-430, Paragraphs [0055]-[0056]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method of creating a depth map, disclosed by Toriu, as modified in view of Hall, with Bamji's method of synchronizing image times. One of ordinary skill in the art could have combined the two methods in order to employ inexpensive arrays of RGB and Z pixels while providing high quality video manipulation, as suggested by Bamji (Paragraph [0023]).

Regarding claim 21, Toriu, as modified in view of Hall, discloses the information processing system of claim 1. Toriu, as modified in view of Hall, does not teach: wherein the composing comprises correcting the three-dimensional detection result based on the two-dimensional detection result to complement a lost point in the three-dimensional detection result. However, Bamji teaches building a confidence map to weight Z depth measurement data. The confidence map effectively culls erroneous or missing Z data. When the Z depth data and RGB image data are combined, each Z pixel is mapped to an RGB pixel. To develop a coarse Z depth estimate image, a set of depth estimates is interpolated based on information from both the Z depth data and the RGB image data (Fig. 6A, steps 430-450, Paragraphs [0056]-[0058]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method of creating a depth map, disclosed by Toriu, as modified in view of Hall, with Bamji's method of creating a coarse depth image based on RGB and Z data. One of ordinary skill in the art could have combined the two methods in order to employ inexpensive arrays of RGB and Z pixels while providing high quality video manipulation, as suggested by Bamji (Paragraph [0023]).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Toriu, as modified in view of Hall, in further view of Oder et al., US 20180067966 A1 ("Oder").

Regarding claim 11, Toriu, as modified in view of Hall, discloses the information processing system of claim 1. Toriu, as modified in view of Hall, does not teach: wherein the operations further comprise outputting a feedback signal to a sensor system including the image sensor, and the image sensor is configured to output an electrical signal in which one or more parameters selected from the group consisting of an exposure time and a frame rate are changed in response to the feedback signal. However, Oder teaches a sensor fusion system that receives raw measurement data from multiple sensor systems, including a LIDAR device and an image capture device. The sensor fusion system can generate feedback signals to provide to the sensor system. The feedback signals can reposition sensors, expand the field of view of the sensors, change the exposure time, or alter a mode of operation (Fig. 1, sensor fusion system 300, feedback signals 116, Paragraph [0026]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the imaging system disclosed by Toriu, as modified in view of Hall, with the functionality to generate feedback signals that change the sensor's operation through a sensor fusion system, as disclosed by Oder. One of ordinary skill in the art could have combined these elements and yielded the predictable result of updating the sensor exposure time.
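The Oder-style feedback loop cited for claim 11 might look like the sketch below: the fusion stage inspects the incoming data and emits a feedback signal that changes the image sensor's exposure time or frame rate. The brightness thresholds and parameter values are invented for illustration, not taken from Oder.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackSignal:
    """Feedback from the fusion stage back to the sensor system."""
    exposure_time_ms: Optional[float] = None
    frame_rate_fps: Optional[float] = None

def fuse_and_assess(mean_ir_brightness: float) -> FeedbackSignal:
    """If the IR brightness image is under- or over-exposed, ask the image
    sensor to change its exposure time and frame rate (hypothetical policy)."""
    if mean_ir_brightness < 0.2:      # too dark: lengthen exposure
        return FeedbackSignal(exposure_time_ms=33.0, frame_rate_fps=15.0)
    if mean_ir_brightness > 0.8:      # near saturation: shorten exposure
        return FeedbackSignal(exposure_time_ms=4.0, frame_rate_fps=60.0)
    return FeedbackSignal()           # keep current parameters

class ImageSensor:
    exposure_time_ms = 16.0
    frame_rate_fps = 30.0

    def apply(self, fb: FeedbackSignal) -> None:
        """The sensor changes its output parameters in response to feedback."""
        if fb.exposure_time_ms is not None:
            self.exposure_time_ms = fb.exposure_time_ms
        if fb.frame_rate_fps is not None:
            self.frame_rate_fps = fb.frame_rate_fps

sensor = ImageSensor()
sensor.apply(fuse_and_assess(mean_ir_brightness=0.12))  # dark scene: longer exposure
```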
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Toriu, as modified in view of Hall, in further view of Xu et al., US 20190096086 A1 ("Xu").

Regarding claim 20, Toriu, as modified in view of Hall, discloses the information processing system of claim 1. Toriu, as modified in view of Hall, does not teach: wherein the detecting the object based on the reference brightness information comprises detecting the object based on the one or more pieces of information selected from the group consisting of the first brightness information and the second brightness information, using a Convolutional Neural Network. However, Xu teaches an image being captured by a camera and each pixel on the image being represented by a two-dimensional coordinate. Objects in the image may be identified with various algorithms including Fast R-CNN and Faster R-CNN (Fig. 1, image 110, bounding box 114, Paragraph [0019]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of identifying objects disclosed by Toriu, as modified in view of Hall, by using a CNN algorithm, as disclosed by Xu. One of ordinary skill in the art would have been motivated to make this modification in order to only identify certain object classes, as suggested by Xu (Paragraph [0019]).

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Toriu, as modified in view of Hall, in further view of Hardegger et al., US 20120105823 A1 ("Hardegger").

Regarding claim 22, Toriu, as modified in view of Hall, discloses the information processing system of claim 1. Toriu, as modified in view of Hall, does not teach: wherein the composing comprises correcting the two-dimensional detection result based on the three-dimensional detection result. However, Hardegger teaches a system that contains a color sensor and a TOF sensor. The system also contains a look-up table that provides a correlation between a distance measurement and a degree of color correction in order to calibrate the color sensor data with the distance data (Fig. 9, lookup table 920, Paragraph [0044]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of identifying objects disclosed by Toriu, as modified in view of Hall, by correcting the color data with distance data, as disclosed by Hardegger. One of ordinary skill in the art would have been motivated to make this modification in order to "negate the need for labor-intensive calibration procedures", as suggested by Hardegger (Paragraph [0027]).
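The Hardegger-style look-up table cited for claim 22 can be sketched as a distance-indexed gain table: the two-dimensional (color) data is corrected per pixel according to the three-dimensional (distance) data. The bin edges and gain values below are hypothetical stand-ins for the reference's table.

```python
import numpy as np

# Hypothetical look-up table: distance bin (meters) -> per-channel RGB gain.
# Hardegger's table correlates a distance measurement with a degree of color
# correction; these bin edges and gains are invented for illustration.
BIN_EDGES = np.array([0.0, 1.0, 2.0, 4.0, np.inf])
GAINS = np.array([
    [1.00, 1.00, 1.00],   # 0-1 m: no correction
    [1.05, 1.00, 0.95],   # 1-2 m
    [1.10, 1.00, 0.90],   # 2-4 m
    [1.15, 1.00, 0.85],   # beyond 4 m
])

def correct_color(rgb: np.ndarray, distance_m: np.ndarray) -> np.ndarray:
    """Correct the 2D (color) detection data using the 3D (distance) data:
    each pixel's RGB gain is looked up from its measured distance."""
    bins = np.digitize(distance_m, BIN_EDGES) - 1       # bin index per pixel
    return np.clip(rgb * GAINS[bins], 0.0, 1.0)

rgb       = np.random.rand(480, 640, 3)
distance  = np.random.rand(480, 640) * 6.0
corrected = correct_color(rgb, distance)
```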
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Toriu, as modified in view of Hall, in further view of Banerjee et al., EP 3438777 A1 ("Banerjee").

Regarding claim 23, Toriu, as modified in view of Hall, discloses the information processing system of claim 1. Toriu, as modified in view of Hall, does not teach: wherein the two-dimensional detection result includes a marker indicating a position of the object, and the composing comprises adjusting a position of the marker based on the three-dimensional detection result. However, Banerjee teaches a method for combining camera sensor data and LIDAR sensor data based on information related to one or more edges. The determining of a combined image may comprise reducing the mismatch of an overlay of the edges over the camera sensor data and point cloud data. A calibration may be performed in which the camera sensor data or the LIDAR sensor data is transformed to a common coordinate system (Fig. 1a, determining a combined image 160, Paragraphs [0034]-[0035]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of identifying objects disclosed by Toriu, as modified in view of Hall, by adjusting the coordinates of the object, as disclosed by Banerjee. One of ordinary skill in the art would have been motivated to make this modification in order to "enable a more precise determination of a location of these objects", as suggested by Banerjee (Paragraph [0003]).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL N NGUYEN whose telephone number is (571) 270-5405. The examiner can normally be reached Monday - Friday, 8 am - 5:30 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Yuqing Xiao, can be reached at (571) 270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RACHEL NGUYEN/
Examiner, Art Unit 3645

/YUQING XIAO/
Supervisory Patent Examiner, Art Unit 3645

Prosecution Timeline

Jan 04, 2022: Application Filed
May 15, 2025: Non-Final Rejection — §103
Aug 11, 2025: Response Filed
Sep 30, 2025: Non-Final Rejection — §103
Dec 29, 2025: Response Filed
Mar 17, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12442900: OPTICAL COMPONENTS FOR IMAGING
Granted Oct 14, 2025 (2y 5m to grant)

Patent 12372354: Surveying Instrument
Granted Jul 29, 2025 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 2 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 21% (84% with interview, a +62.5% lift)
Median Time to Grant: 4y 1m
PTA Risk: High

Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
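The headline projections appear to follow directly from the examiner's career numbers; the sketch below reproduces them under the assumption (not stated by the dashboard) that the "+62.5%" interview lift is measured in percentage points on top of the base allow rate.

```python
# How the dashboard's numbers appear to relate. Assumption: the "+62.5%"
# interview lift reads as percentage points added to the base rate,
# not a multiplier on it.
granted, resolved = 6, 28
career_allow_rate = granted / resolved            # 0.214... -> shown as 21%
interview_lift_pp = 0.625                         # +62.5 percentage points
with_interview = career_allow_rate + interview_lift_pp
print(f"{career_allow_rate:.0%} base, {with_interview:.0%} with interview")
# -> "21% base, 84% with interview"
```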
