Prosecution Insights
Last updated: April 19, 2026
Application No. 18/122,391

ACOUSTIC DIAGNOSTICS OF VEHICLES

Status: Non-Final OA (§103)
Filed: Mar 16, 2023
Examiner: JUNG, JAEWOOK
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: V2M Inc.
OA Round: 3 (Non-Final)
Grant Probability: 33% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 33% (1 granted / 3 resolved; -18.7% vs TC avg). This examiner grants only 33% of cases.
Interview Lift: +100.0% on resolved cases with an interview versus without.
Typical Timeline: 2y 8m average prosecution; 27 applications currently pending.
Career History: 30 total applications across all art units.

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 23.2% (-16.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 3 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 16, 2026 has been entered.

Response to Amendment

This Office action is in response to the amendments filed January 16, 2026. Claim 1 is amended to incorporate the subject matter of claim 12. Claims 10 and 12 have been cancelled. Claims 1-9, 11, and 13 are pending and addressed below.

Response to Arguments

Applicant's amendments to claim 1 have overcome the rejection of claim 13 under 35 USC 112. The rejection under 35 USC 112 is withdrawn.

Applicant's arguments regarding the rejection of the claims under 35 USC 103 have been fully considered, but they are not persuasive.

Applicant argues that examiner applies an improper "obvious to try" rationale regarding Ekkizogloy in view of Yu. Specifically, applicant states that the microphone array of Yu is specifically for human speech recognition and that the microphone array combines signals for beamforming. However, examiner notes that Yu discloses in the Background that single-microphone noise reduction does not perform well in environments where the noise is dynamic (Yu, [0002]) and implements two microphone sensors as a means to improve the signal-to-noise ratio (SNR) in an automotive environment; applicant likewise acknowledges that Yu uses two-microphone arrays to optimize the SNR (page 7, paragraph 1).

Applicant argues that examiner applies an improper "obvious to try" rationale regarding Ekkizogloy. Specifically, applicant states that Ekkizogloy fails to provide guidance regarding the relative orientation of multiple microphones within a single sensor. However, examiner notes that, per [0043] of Ekkizogloy, the microphones may be adjusted to focus audio sensing on reception of sound waves coming from a specified location to improve fidelity of the signal against noise or interference. Furthermore, [0044] identifies the use of a reference database for better characterizing and determining which audio signatures are within normal parameters. As the microphones may be directed toward identified sound sources while results are stored for further optimization, examiner notes that the variations in angles, pairings, and groupings form a finite, predictable set, as there are a finite number of sound sources to measure and a finite number of angle configurations to test.

Applicant argues that Kite is merely a tutorial on audio conversion basics and that no motivation exists to apply Kite. However, Kite specifically identifies that PDM offers a low-cost solution providing low noise and minimal interference (page 3), and that most PDM microphones support stereo operation, the only mention of the data output being its impedance. Furthermore, Kite discloses that the PDM receiver separates the combined bitstreams of the two microphones, allowing the sounds to remain separate on reception.
As the features identified by Kite are desirable and Ekkizogloy deals with conversion of analog signals to digital signals (see at least [0039]), examiner maintains that motivation exists to apply Kite in the context of acoustic vehicle diagnostics for at least the reasons above.

Applicant argues that Ekkizogloy teaches trilateration of time delays and not independent per-microphone processing of a random selection of four microphones. However, examiner directs applicant to [0043] of Ekkizogloy, where at least one microphone can be independently controlled for location-targeted audio reception to focus the audio sensing. Examiner further notes that embodiments of Ekkizogloy indicate that a single microphone may receive a signal that is then processed (see at least Fig. 9 of Ekkizogloy and [0061-0064]), supporting that independent per-microphone processing exists. A random selection of a plurality of microphones does not provide any notable advantage, as a random selection is equivalent to any combination of the plurality of microphones.

For at least these reasons, examiner maintains the rejection of the claims under 35 USC 103.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over US20180350167A1 (Ekkizogloy) in view of "Understanding PDM Digital Audio" (Kite) and US20140270241A1 (Yu).

Regarding claim 1, Ekkizogloy in view of Kite discloses a system for acoustic diagnostics of a vehicle, comprising:

"at least three acoustic sensors; and"
See Fig. 4 of Ekkizogloy. The system includes the use of two or more microphones (here, N1-N4) to detect vehicular sound ([0037-0038]).

"a control unit placed in the vehicle;"
See Fig. 7 of Ekkizogloy. Fig. 7 shows a system 700 placed in a vehicle, where the controller functions the same as in system 400 ([0058]) and is similarly designated "μP".

"the at least three acoustic sensors being connected to the control unit"
See Fig. 4 of Ekkizogloy. The microphones N1 to N4 are connected to processor 440.
"the at least three acoustic sensors being placed on a body of the vehicle;"
[0038] of Ekkizogloy: "Referring to FIG. 4, microphones N1-N4 are installed on the undercarriage of vehicle 410 (e.g., on the frame). However, microphones can be installed in any suitable location."

"the at least three acoustic sensors comprising: a first sensor on a front part of the body, a second sensor on a middle part of the body, and a third sensor on a rear part of the body;"
Examiner interprets the middle part of the body of the vehicle to be a location between the front and rear parts of the body. See Fig. 4 of Ekkizogloy. While Ekkizogloy discloses one placement of microphones in the figure, Ekkizogloy further discloses that the microphones may be installed in any suitable location ([0038]). One of ordinary skill in the art would find it obvious to try placing the sensors on the middle part of the body, as there are a finite number of placements on the body of the vehicle. Furthermore, the disclosed placements of the sensors would not affect the operability of the system, as the sensors would perform the same function regardless of location. See MPEP 2143(I)(E) and MPEP 2144.04(VI)(C).

"wherein each of the first sensor, the second sensor, and the third sensor has two microphones;"
While Ekkizogloy discloses that the system can be adjusted to achieve audio signal beamforming ([0043]), where beamforming is used to improve audio sensing based on directionality, Ekkizogloy does not explicitly disclose the use of two microphones per sensor. From a similar field of endeavor of acoustic signal processing, Yu discloses the use of a microphone array consisting of two microphones of various types (see Fig. 2), where the two-microphone array is utilized for identifying types of speech. Yu further discloses that single-microphone noise reduction algorithms have limitations in environments where the noise is dynamic ([0002]). One of ordinary skill in the art would have found it obvious, prior to the applicant's effective filing date, to integrate the microphone array of Yu into the system of Ekkizogloy, as the advantage described by Yu of increasing the SNR would help distinguish signals of interest from noisy interference.

"wherein the two microphones in each of the first sensor, the second sensor, and the third sensor are configured to receive acoustic signals from moving elements of the vehicle and"
[0037] of Ekkizogloy: "FIG. 4 shows a simplified block diagram of a system 400 for automatically detecting vehicular malfunctions using audio signals, according to certain embodiments. System 400 can include a vehicle 410, two or more microphones (N1-N4), analog-to-digital converter (A/D) 430, one or more microprocessors 440, logic 450 (i.e., software stored in memory), database 460, and display 470." The two or more microphones are used to receive acoustic signals of the vehicle.

"convert the acoustic signals into electrical signals, the two microphones in each of the first sensor, the second sensor and the third sensor being directed in opposite directions;"
Regarding converting acoustic signals to electrical signals, see [0037] of Ekkizogloy, while the rationale above regarding the limitation "wherein each of the first sensor, the second sensor, and the third sensor has two microphones;" discloses two microphones. While Ekkizogloy does not disclose having the microphones face in opposite directions, Ekkizogloy does disclose embodiments in which microphones N1-N4 are servo controlled to directionally alter their audio focus ([0038]). One of ordinary skill in the art would find it obvious to try a configuration where the microphones face in opposite directions, as there are a finite number of direction combinations to try.
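For readers tracking the beamforming thread: the adjustment [0043] of Ekkizogloy describes amounts, in its simplest form, to delay-and-sum steering, where each channel is advanced by its propagation delay from a chosen focus point so that sound from that point adds coherently. A minimal numpy sketch under that assumption; the function and variable names are illustrative, not drawn from the references:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air at 20 °C

def delay_and_sum(signals, mic_positions, focus_point, fs):
    """Steer a microphone array toward focus_point by delay-and-sum.

    signals:       (n_mics, n_samples) array of time-aligned recordings
    mic_positions: (n_mics, 2) microphone coordinates in meters
    focus_point:   (2,) coordinates of the location to focus on
    fs:            sampling rate in Hz
    """
    # Propagation distance from the focus point to each microphone
    dists = np.linalg.norm(mic_positions - focus_point, axis=1)
    delays = (dists - dists.min()) / SPEED_OF_SOUND  # relative delays, s

    out = np.zeros(signals.shape[1])
    for sig, delay in zip(signals, delays):
        shift = int(round(delay * fs))
        # Advance each channel so wavefronts from the focus point align
        out[: signals.shape[1] - shift] += sig[shift:]
    return out / len(signals)
```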
"wherein, each of the first sensor, the second sensor and the third sensor is configured to send the signals to the control unit in one common pulse density modulation (PDM) stream such that the electrical signals from the two microphones are latched on respective rising and falling edges of a PDM clock signal;"
In light of the rationale above regarding "convert the acoustic signals into electrical signals, ...", Ekkizogloy further discloses that "A/D usage and implementation would be understood by one of ordinary skill in the art" ([0039]). While signal modulation (i.e., PDM) would be understood by one of ordinary skill in the art, specific applications such as sending the signals to the control unit in one common PDM stream, with the electrical signals from the two microphones latched on respective rising and falling edges of a PDM clock signal, are not inherent to the citation to paragraph [0039]. From the related field of signal modulation, Kite discloses a guide on understanding PDM audio. In particular, Kite discloses that PDM microphones (or digital microphones) are capable of converting the analog signal to a single-bit digital signal at a high sampling rate (page 6 of Kite). Furthermore, the one-bit data can be asserted on the data line on either the rising or the falling edge of the clock signal, and the microphones can also support stereo operation in which two microphones assert data on separate rising and falling edges. One of ordinary skill in the art would have found it obvious, prior to the applicant's effective filing date, to apply the disclosure of Kite to the system of Ekkizogloy, as this would reduce the number of data lines to the PDM receiver.

"wherein the control unit is configured, in real time, to: process the electrical signals from each of the two microphones of each of the first sensor, the second sensor and the third sensor independently from each other;"
In light of the rationale above regarding "wherein, each of the first sensor, the second sensor, and the third sensor is configured to send the signals ...", one of ordinary skill in the art would find it obvious to try processing the microphone signals independently, as there are a finite number of ways of processing the electrical signals (separately or together).
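The stereo PDM arrangement Kite describes (two microphones sharing one data line, one driving it on the rising clock edge and one on the falling edge) can be illustrated with a receiver-side sketch. This is a hypothetical decoder assuming the capture buffer strictly alternates rising- and falling-edge samples; real designs use proper decimation filters rather than block averaging:

```python
import numpy as np

def split_stereo_pdm(interleaved_bits):
    """De-interleave a shared PDM data line into two per-mic bitstreams.

    Assumes the capture alternates strictly: even indices were latched
    on the rising clock edge (left mic), odd indices on the falling
    edge (right mic), per Kite's description of stereo PDM operation.
    """
    bits = np.asarray(interleaved_bits, dtype=np.int8)
    return bits[0::2], bits[1::2]  # (left, right) one-bit streams

def pdm_to_pcm(bits, decimation=64):
    """Crude PDM-to-PCM conversion: map bits to ±1, low-pass by block
    averaging, then decimate. Illustrative only; production designs
    use proper decimation filters (e.g., CIC followed by FIR)."""
    x = bits.astype(np.float64) * 2.0 - 1.0
    n = (len(x) // decimation) * decimation
    return x[:n].reshape(-1, decimation).mean(axis=1)
```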
"based on results of said processing, identify a vehicle malfunction,"
[0068] of Ekkizogloy: "At step 1030, method 1000 can include identifying an anomalous audio signature in the audio data that differs from the sounds generated by the vehicle under normal operating conditions. Alternatively or additionally, the audio database can include sounds generated by the vehicle operating under one or more fault conditions, and method 1000 can further include comparing the anomalous audio signature in the audio data with the sounds generated by the vehicle operating under one or more fault conditions in the database, and identifying a match of the anomalous audio signature with the one or more sounds generated by the vehicle operating under one or more fault conditions."

"calculate a location of a malfunction part and"
[0071] of Ekkizogloy: "At step 1050, method 1000 can include calculating a location of a source of the anomalous audio signature based on the calculated phase difference of the audio data corresponding to the anomalous audio signature received by the microphone and each of the one or more additional microphones."

"determine what is the malfunction part; and"
[0072] of Ekkizogloy: "At step 1060, method 1000 can include determining a cause of the anomalous audio signature based on at least one of corresponding audio characteristics of the anomalous audio signature, the match of the anomalous audio signature with the one or more sounds generated by the vehicle operating under one or more fault conditions, and the calculated location of the source of the anomalous audio signature."

"display the location of the malfunction part;"
While step 1070 of Fig. 10 of Ekkizogloy does not explicitly state that reporting to the driver comprises displaying the location of the malfunction part, Fig. 11 and [0074] of Ekkizogloy disclose the use of user output devices 1110, one of which is a video display to communicate a problem with the vehicle based on method 900. As method 900 and method 1000 are both methods performable by processor 440 and accomplish the same goal of automatically detecting vehicular malfunctions using audio signals ([0060], [0065]), one of ordinary skill in the art would find it obvious that the video display is capable of displaying the location of the malfunction part through a user output device 1110.

"wherein the control unit is configured to calculate the location of the malfunction part based on a known location of at least four microphones randomly selected from the microphones of the first sensor, the second sensor and the third sensor, an arrival time of the acoustic signals to each of the at least four microphones and a known location of the moving elements in the vehicle."
In light of the rationale of claim 1 regarding the limitation "wherein each of the first sensor, the second sensor, and the third sensor has two microphones;", Ekkizogloy further discloses in [0026]: "Processor-controlled machine learning algorithms can be used to determine what a 'normal' running state of the vehicle is over time. If the sensors detect a audio signature that is slightly abnormal or transient (i.e., anomalous), it can alert the driver (or processing logic) of the situation in advance and can cross-reference a database of possible issues to diagnose the most likely cause of the anomaly. Further, an array of microphones can be used to trilaterate the precise location of the audio signature in question. Thus, preemptive measures and servicing (repairs) may be performed to potentially avoid costly repairs or catastrophic failures that otherwise may have occurred." Fig. 3 of Ekkizogloy discloses the arrival times and time delays of successive microphones (in this case, three microphones). One of ordinary skill in the art would find it obvious to try increasing the number of microphones to multilaterate the location of a malfunction.
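The four-microphone calculation recited in claim 1 (and spelled out in claim 13 with microphone A at the origin) is a standard time-difference-of-arrival multilateration. A sketch of how the resulting system of range-difference equations could be solved numerically, assuming a 2-D layout and an illustrative speed of sound; nothing here is taken from the application itself:

```python
import numpy as np
from scipy.optimize import least_squares

V = 343.0  # assumed speed of sound, m/s

def locate_source(mics, arrival_times):
    """Estimate a 2-D source position from four microphones by TDOA.

    mics:          (4, 2) known microphone coordinates, mic A first
    arrival_times: (4,) reception times t_a, t_b, t_c, t_d
    Uses time differences relative to mic A (t_1 = t_b - t_a, etc.),
    matching the convention in claim 13.
    """
    mics = np.asarray(mics, float)
    t = np.asarray(arrival_times, float)
    tdoa = t[1:] - t[0]  # t_1, t_2, t_3

    def residuals(p):
        d = np.linalg.norm(mics - p, axis=1)  # distance to each mic
        # Range differences relative to mic A should equal v * t_i
        return (d[1:] - d[0]) - V * tdoa

    guess = mics.mean(axis=0)  # start from the array centroid
    return least_squares(residuals, guess).x

# Example: source at (1.0, 2.0) with mic A at the origin
mics = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 3.0], [2.0, 3.0]])
src = np.array([1.0, 2.0])
times = np.linalg.norm(mics - src, axis=1) / V  # noiseless arrivals
print(locate_source(mics, times))  # approximately [1.0, 2.0]
```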
Regarding claim 2, with all of the limitations of claim 1, the system further discloses:

"where the control unit is configured to process the electrical signals using a neural network."
While Ekkizogloy does not explicitly disclose the use of a neural network for processing the electrical signals with the control unit, Ekkizogloy teaches that a model trained by machine learning can be used to detect vehicular malfunctions, where the variations, modifications, and alternatives of machine learning models known by those of ordinary skill in the art of machine learning are relevant ([0063]). One of ordinary skill would find it obvious to try a neural network, as there are a finite number of machine learning architectures to try.

Regarding claim 11, with all of the limitations of claim 1, the system further discloses:

"wherein the two microphones in each of the first sensor, the second sensor, and the third sensor are omnidirectional microphones."
[0038] of Ekkizogloy: "Some embodiments may utilize omnidirectional microphones to detect sounds from any portion of vehicle 410."

Regarding claim 13, with all of the limitations of claim 1, the system further discloses:

"wherein the control unit is configured to calculate the location of the malfunction part as [equation rendered as an image in the original] where v is a speed of an acoustic wave (speed of sound); A(0,0), B(x_b, y_b), C(x_c, y_c), D(x_d, y_d) are coordinates of the at least four microphones A, B, C, D; t_a, t_b, t_c, t_d are reception times of the acoustic signals; t_1 = t_b - t_a; t_2 = t_c - t_a; t_3 = t_d - t_a;"
Examiner assumes this to be the mathematical expression describing the four-microphone calculation of claim 1. See the rationale of claim 1. One of ordinary skill in the art would find it obvious that the equation above is the result of trilateration, where one of the four microphones (A) acts as the origin point.

Claims 3-8 are rejected under 35 U.S.C. 103 as being unpatentable over US20180350167A1 (Ekkizogloy) in view of "Understanding PDM Digital Audio" (Kite) and US20140270241A1 (Yu), and in further view of "You Only Hear Once: A YOLO-like Algorithm for Audio Segmentation and Sound Event Detection" (Venkatesh).

Regarding claim 3, with all of the limitations of claim 2, the system further discloses:

"where the neural network is a You Only Hear Once (YOHO)."
In light of the rationale of claim 2, while Ekkizogloy does not explicitly teach the use of a You Only Hear Once (YOHO) architecture, Venkatesh discloses the YOHO architecture. One of ordinary skill would find it obvious to try YOHO, as it is a neural network specific to audio segmentation and sound event detection (Venkatesh, Conclusion).

Regarding claim 4, with all of the limitations of claim 3, the system further discloses:

"where the YOHO is purely a convolutional neural network (CNN)."
Page 4 of Venkatesh: "YOHO is a purely convolutional neural network (CNN)."

Regarding claim 5, with all of the limitations of claim 3, the system further discloses:

"where the YOHO is configured to use log-mel spectrograms as input features."
Page 4 of Venkatesh: "We use log-mel spectrograms as input features."
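For context on the claim 5 feature: a log-mel spectrogram is a mel-filtered short-time power spectrum mapped to a decibel scale. A minimal sketch using librosa; the parameter values are illustrative defaults, not the settings used by Venkatesh:

```python
import librosa
import numpy as np

def log_mel(path, sr=22050, n_mels=64):
    """Compute a log-mel spectrogram of the kind YOHO-style models
    take as input. sr and n_mels are illustrative, not the values
    from the Venkatesh paper."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)  # (n_mels, n_frames)
```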
Regarding claim 6, with all of the limitations of claim 3, the system further discloses:

"where the YOHO is configured to convert said processing into a regression problem, where: one neuron is configured to detect whether an acoustic class is present; one neuron is configured, if the acoustic class is present, to predict a start point of the acoustic class; and one neuron is configured to detect an end point of the acoustic class."
Page 3, Section 2.1 of Venkatesh: "However, in YOHO, each block of 0.307 s is processed through regression. One neuron detects the presence of an acoustic class. If the class is present, one neuron predicts the start point of the class and one neuron detects the end point of the class." The same passage maps to each of the three recited neurons.

Regarding claim 7, with all of the limitations of claim 6, the system further discloses:

"where the YOHO is trained based on a loss function, which shows a discrepancy between a true value of an estimated parameter and an estimated value provided by the YOHO, and"
Page 6, Section 2.3 of Venkatesh: "As we modeled the problem as a regression one, we used the sum squared error. Equation (1) shows the loss function for each acoustic class c." For each piecewise case, the differences between ground truth and predictions are taken and summed as squares.

"the loss function is minimized by an Adam optimizer, and"
Page 7, Section 2.5 of Venkatesh: "We trained the network with the Adam optimizer, a learning rate of 0.001, a batch size of 32, and early stopping."

"wherein the loss function provides an 'approval' of the YOHO,"
One of ordinary skill in the art would find it obvious that the loss function presented as equation (1) of Venkatesh provides the approval of the YOHO, as the loss function measures how well the model fits the data and task presented: a better-fitting model has a smaller loss value, and a theoretically perfect model would flawlessly predict the ground truth values.

"wherein the YOHO is configured to make a decision which of acoustic classes each of the acoustic signals belongs to, and"
See Fig. 2 and equation (1) of Venkatesh. In the following paragraph, Venkatesh discloses that the size of the output layer of 6 audio classes depends on the number of acoustic classes, where two acoustic classes, speech and music, are identified.

"the loss function serves as an estimate of a quality of the decision made."
See the rationale for the limitation "wherein the loss function provides an 'approval' of the YOHO" above. As the loss function includes a classification term (Page 6, Section 2.3), a classification mismatch (the ground truth and prediction differing) raises the loss, which reflects the performance of the model on the data.

Regarding claim 8, with all of the limitations of claim 7, the system further discloses:

"wherein the loss function is [equation rendered as an image in the original] where y and y_hat are ground truth and predictions respectively; y1 = 1 if the acoustic class is present and y1 = 0 if the acoustic class is absent; y2 and y3, which are the start point and end point for each of the acoustic classes and which are considered only if y1 = 1."
See equation (1) and Section 2.3 of Venkatesh.
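The sum-squared loss that the claim 7 and claim 8 discussion cites (equation (1) of Venkatesh, reproduced only as an image above) can be written out per acoustic class: the presence error always counts, and the start/end errors count only when the class is present. A sketch of that piecewise form, with the caveat that the published equation may differ in detail:

```python
import numpy as np

def yoho_class_loss(y, y_hat):
    """Sum-squared YOHO-style loss for one acoustic class.

    y, y_hat: length-3 arrays (presence, start, end) of ground truth
    and prediction. Start/end terms count only when the class is
    present (y[0] == 1), mirroring the piecewise definition the
    Office Action describes; this is a reconstruction, not a
    verbatim copy of Venkatesh's Equation (1).
    """
    loss = (y[0] - y_hat[0]) ** 2
    if y[0] == 1:  # start/end regression only for present classes
        loss += (y[1] - y_hat[1]) ** 2 + (y[2] - y_hat[2]) ** 2
    return loss

# Example: class present, true span [0.10, 0.80], predicted [0.15, 0.75]
print(yoho_class_loss(np.array([1.0, 0.10, 0.80]),
                      np.array([0.9, 0.15, 0.75])))
```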
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over US20180350167A1 (Ekkizogloy) in view of "Understanding PDM Digital Audio" (Kite) and US20140270241A1 (Yu), and in further view of US20160343180A1 (Talwar).

Regarding claim 9, with all of the limitations of claim 2, the system further discloses:

"wherein the neural network is a self-learning neural network."
Examiner assumes self-learning to mean self-supervised or semi-supervised learning able to identify and label data by itself. In light of the rationale of claim 2, Ekkizogloy does not explicitly disclose a self-learning neural network. From a similar field of endeavor, Talwar discloses an automobile diagnostic system that utilizes sensors and machine learning to detect and identify vehicle malfunctions. Specifically, Talwar discloses a system trainable to classify each labeled audio sample that can use its data recursively to correct classification errors on known audio samples; if the classification performance is satisfactory on a known set of data, the system may be used for classifying audio samples with unknown categories ([0028]). One of ordinary skill in the art would find it obvious to combine the system of Talwar with the system of Ekkizogloy, as a self-learning feature would allow automating the data collection and training of a model for functional improvement.

Conclusion

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAEWOOK JUNG, whose telephone number is (571) 272-5470. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wade Miles, can be reached at (571) 270-7777.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.J./ Examiner, Art Unit 3656
/WADE MILES/ Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Mar 16, 2023
Application Filed
Apr 28, 2025
Non-Final Rejection — §103
Jul 28, 2025
Response Filed
Oct 14, 2025
Final Rejection — §103
Jan 16, 2026
Request for Continued Examination
Feb 08, 2026
Response after Non-Final Action
Mar 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12514149: SYSTEMS AND METHODS FOR SPRAYING SEEDS DISPENSED FROM A HIGH-SPEED PLANTER
Granted Jan 06, 2026 (2y 5m to grant)

Patent 12480561: VEHICLE AND CONTROL METHOD THEREOF
Granted Nov 25, 2025 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 33% (99% with interview, +100.0% lift)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month