Prosecution Insights
Last updated: April 19, 2026
Application No. 18/669,943

SEAMLESS REVERBERATION TRANSITION IN VIRTUAL VENUES

Non-Final OA: §102, §103
Filed: May 21, 2024
Examiner: ZHU, QIN
Art Unit: 2691
Tech Center: 2600 — Communications
Assignee: Harman Becker Automotive Systems GmbH
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 88% (above average; 534 granted / 610 resolved; +25.5% vs TC avg)
Interview Lift: +2.6% (minimal; across resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor; 29 currently pending)
Total Applications: 639 across all art units (career history)

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§102: 20.9% (-19.1% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§112: 16.3% (-23.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 610 resolved cases.

Office Action

§102 §103
DETAILED ACTION

This action is in response to communications filed 5/21/2024. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 10-14, and 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Koppens et al. (EP4398607, hereinafter "Koppens").

Regarding claim 1, Koppens teaches a computer-implemented method for audio signal processing by an audio system comprising a processing unit (¶1, system and method of audio processing), the computer-implemented method comprising: receiving an input audio signal (abstract, receiving audio data corresponding to a user environment); obtaining a first set of reverberation parameters associated with a first acoustic environment (Figs. 8-9, ¶83, audio scene comprising a plurality of different acoustic environments, each with their specific acoustic properties, including reverberation properties); obtaining a second set of reverberation parameters associated with a second acoustic environment, the second set of reverberation parameters being different from the first set of reverberation parameters (Figs. 8-9, ¶83); processing the input audio signal based on the first set of reverberation parameters to generate a first early reflections signal, and based on the second set of reverberation parameters to generate a second early reflections signal, simultaneously, wherein each of the first and second early reflections signals comprises a respective synthesized audio signal representing initial sound reflections in the respective acoustic environment (¶98, 102, Figs. 1, 5, acoustic properties for each environment include direct, early reflections, and late reverberation signal components, each of which should be rendered to embody an accurate representation of real-world acoustics); combining the first and second early reflections signals through a fading process to generate a transitional early reflections signal, which comprises a transition from the first to the second early reflections signal in accordance with the fading process (¶194-196, applying crossfading techniques to generate a transition signal by fading from an input signal combined with one set of reverberation parameters to an input signal combined with another set of reverberation parameters); and providing an output audio signal comprising a combination of the input audio signal and the transitional early reflections signal (¶193, the output signal comprises the transition signal).
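A minimal sketch of the fading process the claim 1 rejection maps to Koppens' crossfading, assuming an equal-power crossfade between two pre-rendered early-reflections signals; the function name, gain curves, and mono-signal layout are illustrative assumptions, not taken from Koppens or the application:

```python
import numpy as np

def crossfade_early_reflections(er_first, er_second, fade_len):
    """Blend two early-reflections signals into a transitional signal.

    er_first / er_second: equal-length mono signals rendered from the first
    and second sets of reverberation parameters, respectively.
    fade_len: number of samples over which the transition occurs.
    """
    n = len(er_first)
    # Equal-power gain ramps: cos^2 + sin^2 = 1, so summed energy stays flat.
    t = np.linspace(0.0, 1.0, fade_len)
    gain_out = np.cos(t * np.pi / 2)   # first environment fades out
    gain_in = np.sin(t * np.pi / 2)    # second environment fades in

    out = np.empty(n)
    out[:fade_len] = er_first[:fade_len] * gain_out + er_second[:fade_len] * gain_in
    # After the fade, only the second environment's reflections remain.
    out[fade_len:] = er_second[fade_len:]
    return out
```

The equal-power curves are one common choice for avoiding an audible dip in loudness mid-transition; a linear crossfade would be an equally valid reading of the claim language.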
Regarding claim 2, Koppens teaches wherein the first and the second early reflections signals are generated by simulating only first-occurring sound reflections of the input audio signal within the respective acoustic environment (¶13, Fig. 1, the reflection signal can comprise an early reflection signal portion).

Regarding claim 3, Koppens teaches further comprising: continuously outputting the output audio signal to a user, while transitioning from the first early reflections signal to the second early reflections signal (¶193, the output signal is continually output while transitioning from one reverberation parameter to another).

Regarding claim 4, Koppens teaches further comprising: processing the input audio signal based on the first set of reverberation parameters to generate a first reverberation tail signal, and based on the second set of reverberation parameters to generate a second reverberation tail signal (¶104, one or more reverberators generate one or more reverberation signals that include late/diffuse reverberations, i.e., the reverberation tail); providing the output audio signal comprising a combination of the input audio signal, the transitional early reflections signal, and the first reverberation tail signal (¶13-14, 33, the output signal includes an early reflections component as well as a late/diffuse reverberation component); and during continuous playback of the output audio signal, switching from the first reverberation tail signal to the second reverberation tail signal to provide the output audio signal comprising the input audio signal, the transitional early reflections signal, and the second reverberation tail signal (¶193, the output signal is continually output while transitioning from one reverberation parameter to another).
Regarding claim 5, Koppens teaches wherein the first and second reverberation tail signals comprise a synthesized audio signal representing late sound reflections and decay characteristics occurring within the respective acoustic environment subsequent to the early reflections within the respective acoustic environment (¶42, reverberation parameters include late/diffuse characteristics for each of the audio environments).

Regarding claim 10, Koppens teaches wherein the transitional early reflections signal is a continuous and seamless audio signal representing a blend of initial sound reflections in the first and second acoustic environments (¶192, a gradual transition between first and second acoustic parameters is applied).

Regarding claim 11, Koppens teaches wherein: the first acoustic environment corresponds to a first virtual venue, and the second acoustic environment corresponds to a second virtual venue (¶46-48, 80, a plurality of virtual environments with varying acoustic parameters); or the second acoustic environment corresponds to a user-induced parameter change of at least one parameter of the first set of parameters.

Regarding claim 12, Koppens teaches wherein the first early reflections signal and the second early reflections signal are provided simultaneously (Fig. 1, ¶193-196, the transition signal includes components from both a first reverberation parameter and a second reverberation parameter).

Regarding claim 13, Koppens teaches further comprising: compensating for latency in the transitional early reflections signal to synchronize the input audio signal, early reflections signal, and/or reverberation tail signal based on a latency control signal (¶42, reverberation parameters include a delay parameter and a decay time parameter).

Regarding claim 14, Koppens teaches wherein the input audio signal is a multi-channel audio signal (¶78, 98, binaural rendering).

Regarding claim 18, it is rejected similarly to claim 1. The system can be found in Koppens (¶1, system).

Regarding claim 19, it is rejected similarly to claim 4. The system can be found in Koppens (¶1, system).

Regarding claim 20, it is rejected similarly to claim 1. The media can be found in Koppens (¶248, one or more storage-type mediums).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6-7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Koppens et al. (EP4398607, hereinafter "Koppens") in view of Smith et al. (US11878246, hereinafter "Smith").

Regarding claim 6, Koppens fails to explicitly teach further comprising: receiving a microphone signal, which comprises an audio signal from a user; and generating the first and second reverberation tail signals further based on the microphone signal. Smith teaches receiving a microphone signal, which comprises an audio signal from a user, and generating the first and second reverberation tail signals further based on the microphone signal (col 8, lines 12-25, a microphone can be used to capture audio signals from the user's position in the environment to determine environment characteristics).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the audio capture technique (as taught by Smith) to the audio reproduction apparatus (as taught by Koppens). The rationale to do so is to apply a known technique to a known device ready for improvement to yield the predictable result of further capturing audio signals during reproduction to improve the realistic audio experience of the user (Smith, col 1, lines 33-41).

Regarding claim 7, Koppens in view of Smith teaches wherein processing the microphone signal based on the first or second set of reverberation parameters further takes into account a position of a user in the respective acoustic environment (Koppens, ¶80, a user's position is taken into account to determine the audio output).

Regarding claim 15, Koppens in view of Smith teaches wherein the fading process is adaptive based on a user input controlling blending of the initial sound reflections in the first and second early reflections signals in the transitional early reflections signal (Smith, col 6, line 63 to col 7, line 37, a user interface allows a game designer to modify various components, such as nodes 306 and 308, to control a live reverb metric (Fig. 3A)).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Koppens in view of Ye et al. (US20240244388, hereinafter "Ye").

Regarding claim 8, Koppens fails to explicitly teach wherein the generating of the first and second reverberation tail signals is controlled using a reverberation tail control signal based on a lookup table containing reverberation tail parameters of the first and second acoustic environments.
Ye teaches wherein the generating of the first and second reverberation tail signals is controlled using a reverberation tail control signal based on a lookup table containing reverberation tail parameters of the first and second acoustic environments (¶67-70, a look-up table can be used to calculate one or more reverberation parameters). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of using a look-up table (as taught by Ye) to the audio reproduction system (as taught by Koppens). The rationale to do so is to apply a known technique to a known device ready for improvement to yield the predictable result of quickly calculating one or more sound reproduction parameters using a look-up table (Ye, ¶67-70).

Claims 9 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Koppens in view of Saini et al. (WO2023208333, hereinafter "Saini").

Regarding claim 9, Koppens fails to explicitly teach further comprising: applying an equalization filter to the first and second reverberation tail signals to shape a tonal character of the reverberation tail. Saini teaches applying an equalization filter to the first and second reverberation tail signals to shape a tonal character of the reverberation tail (pg. 6, lines 20-27, applying equalization to at least one or more portions of a late reverberation tail). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of equalization (as taught by Saini) to the audio reproduction system (as taught by Koppens). The rationale to do so is to apply a known technique to a known device ready for improvement to yield the predictable result of achieving different tonal characteristics of a reproduction environment (Saini, pg. 20, lines 4-7).
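A minimal sketch of the lookup-table control at issue in claim 8, assuming a table keyed by acoustic-environment ID; the table contents, the parameter names (rt60_s, pre_delay_ms, hf_damping), and the function name are hypothetical illustrations, not taken from Ye or the application:

```python
# Hypothetical reverberation-tail parameter table keyed by environment ID.
REVERB_TAIL_TABLE = {
    "concert_hall": {"rt60_s": 2.1, "pre_delay_ms": 35.0, "hf_damping": 0.4},
    "small_room":   {"rt60_s": 0.4, "pre_delay_ms": 8.0,  "hf_damping": 0.7},
}

def tail_control_signal(env_id):
    """Return stored reverberation-tail parameters for an environment.

    Mirrors the claimed idea of driving tail generation from a lookup
    table rather than computing parameters from scratch at runtime.
    """
    try:
        return REVERB_TAIL_TABLE[env_id]
    except KeyError:
        raise ValueError(f"no reverberation parameters stored for {env_id!r}")
```

The design point the rejection's rationale leans on is visible here: a table lookup replaces a per-transition acoustic computation, which is what makes the "predictable result of quickly calculating" argument plausible.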
Regarding claim 16, Koppens in view of Saini teaches further comprising: routing the input audio signal and the transitional early reflections signal to a multi-channel 3D surround system to generate a 3D audio signal (Saini, pg. 11, line 17, 3D audio); and providing the 3D audio signal to a system equalization filter for reproduction room compensation on a loudspeaker level to generate the output audio signal (Saini, pg. 20, lines 4-8, applying equalization to achieve varying tonal characteristics in the listening environment).

Regarding claim 17, Koppens in view of Saini teaches wherein the first and second sets of reverberation parameters are derived from a database of real-world or virtual acoustic environments (Saini, pg. 6, lines 1-36, a plurality of reverberation parameters are stored in a database and can be further adapted for a user's specific environment).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to PTO-892, Notice of References Cited, for a listing of analogous art.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QIN ZHU, whose telephone number is (571) 270-1304. The examiner can normally be reached Monday-Thursday, 6 AM-4 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QIN ZHU/
Primary Examiner, Art Unit 2691

Prosecution Timeline

May 21, 2024
Application Filed
Jan 08, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604125
DETECTING ACTIVE SPEAKERS USING HEAD DETECTION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12603076
NOISE CONTROL SYSTEM, NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM INCLUDING A PROGRAM, AND NOISE CONTROL METHOD
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597900
METHOD AND APPARATUS TO EVALUATE AUDIO EQUIPMENT FOR DYNAMIC DISTORTIONS AND OR DIFFERENTIAL PHASE AND OR FREQUENCY MODULATION EFFECTS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593169
DIRECTION-BASED FILTERING FOR AUDIO DEVICES USING TWO MICROPHONES
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587805
SOUND-FIELD CONTROL METHOD AND DEVICE, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 90% (+2.6%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 610 resolved cases by this examiner. Grant probability derived from career allow rate.
