Prosecution Insights
Last updated: April 19, 2026
Application No. 18/841,762

Spatial Rendering of Reverberation

Non-Final OA, §102
Filed: Aug 27, 2024
Examiner: ALBERTALLI, BRIAN LOUIS
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Nokia Technologies Oy
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82%, above average (697 granted / 852 resolved; +19.8% vs TC avg)
Interview Lift: +16.5% allowance-rate lift among resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 19 applications currently pending
Career History: 871 total applications across all art units
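The tiles above are simple arithmetic over the examiner's career counts. A quick check (assuming the dashboard rounds to whole percentages and applies the interview lift as an additive percentage-point adjustment, which matches the displayed figures):

```python
# Career counts shown above (697 granted out of 852 resolved)
granted, resolved = 697, 852

allow_rate = 100 * granted / resolved      # 81.8...%
print(round(allow_rate))                   # displayed as 82%

# Interview lift, treated as an additive percentage-point adjustment
interview_lift = 16.5
with_interview = min(allow_rate + interview_lift, 100.0)
print(round(with_interview))               # displayed as 98%
```

The clamp at 100.0 is a guess at how the dashboard would handle examiners whose base rate plus lift exceeds 100%; here it never triggers.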

Statute-Specific Performance

§101: 13.8% (-26.2% vs TC avg)
§103: 34.9% (-5.1% vs TC avg)
§102: 27.7% (-12.3% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 852 resolved cases.
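Subtracting each statute's delta from its displayed rate recovers the Tech Center baseline; notably, all four rows imply the same 40.0% estimate (an inference from the displayed numbers, not a figure the page states directly):

```python
# Displayed per-statute rates and their deltas vs the Tech Center average
rates  = {"§101": 13.8, "§103": 34.9, "§102": 27.7, "§112": 16.6}
deltas = {"§101": -26.2, "§103": -5.1, "§102": -12.3, "§112": -23.4}

# rate = tc_avg + delta, so the implied baseline is rate - delta
implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)  # every statute implies a 40.0% Tech Center average
```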

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 9, 12-13, 18, and 22 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Koppens (U.S. Patent Application Pub. No. 2024/0267693).

In regard to claim 9, Koppens discloses an apparatus for assisting spatial rendering in at least one acoustic environment (see Fig. 3), the apparatus comprising: at least one processor (paragraph [0269]); and at least one memory (paragraph [0269]) storing instructions that, when executed with the at least one processor, cause the apparatus at least to: obtain a bitstream (receiver 301 receives a bitstream, paragraph [0090]), the bitstream comprising: an identifier identifying the at least one acoustic environment (ID of the acoustic environment, paragraph [0216]); information defining at least one dimension of the at least one acoustic environment (geometric elements describing the region, paragraph [0216]); and a reverberation parameter part (reverberation properties, paragraph [0216]) comprising one of: an encoded at least one frequency band data or an encoded at least one reverberation parameter (offset and T60 times from diffuse to source energy ratios, provided per frequency, paragraph [0216]; note that the claim merely requires one of encoded frequency band data or encoded at least one reverberation parameter, and that the reverberation parameters of Koppens could be considered either encoded frequency band data (because the parameters are provided for a plurality of frequencies) or reverberation parameters (because they provide, e.g., T60 times)); decode the reverberation parameter part to generate decoded reverberation parameters (metadata processor 305 extracts the metadata, including reverberation parameters, from the bitstream, paragraph [0093]); obtain reverberator parameters from the decoded reverberation parameters (reverberation parameters are obtained from the metadata, paragraphs [0093]-[0094]); initialize at least one reverberator based on the reverberator parameters (a decoder creates reverberation properties provided by the reverberation parameters, paragraph [0094]); obtain at least one input audio signal associated with the at least one acoustic environment (audio data processor 303 extracts audio data from the bitstream, paragraph [0091]); and generate an output audio signal based on the application of the at least one reverberator to the at least one input audio signal (renderer 307 generates an audio signal with reverberation representing the reverberation properties, paragraph [0094]).

In regard to claim 12, Koppens discloses an indicator indicating that the reverberator parameters are to be determined from at least one reverberation parameter encoded into a scene payload (a flag indicating whether acoustic environment data is present, paragraph [0216]).

In regard to claim 13, Koppens discloses initializing the at least one reverberator using the at least one reverberator parameter independent of whether at least one acoustic environment is a virtual acoustic environment or an augmented reality acoustic environment (VR and AR environments are not distinguished, paragraphs [0005] and [0012]).

In regard to claim 18, Koppens discloses a method for an apparatus for assisting spatial rendering in at least one acoustic environment (see Fig. 3), the method comprising: obtaining a bitstream (receiver 301 receives a bitstream, paragraph [0090]), the bitstream comprising: an identifier identifying the at least one acoustic environment (ID of the acoustic environment, paragraph [0216]); information defining at least one dimension of the at least one acoustic environment (geometric elements describing the region, paragraph [0216]); and a reverberation parameter part (reverberation properties, paragraph [0216]) comprising one of: an encoded at least one frequency band data or an encoded at least one reverberation parameter (offset and T60 times from diffuse to source energy ratios, provided per frequency, paragraph [0216]; note that, as explained for claim 9, the reverberation parameters of Koppens could be considered either encoded frequency band data or reverberation parameters); decoding the reverberation parameter part to generate decoded reverberation parameters (metadata processor 305 extracts the metadata, including reverberation parameters, from the bitstream, paragraph [0093]); obtaining reverberator parameters from the decoded reverberation parameters (reverberation parameters are obtained from the metadata, paragraphs [0093]-[0094]); initializing at least one reverberator based on the reverberator parameters (a decoder creates reverberation properties provided by the reverberation parameters, paragraph [0094]); obtaining at least one input audio signal associated with the at least one acoustic environment (audio data processor 303 extracts audio data from the bitstream, paragraph [0091]); and generating an output audio signal based on the application of the at least one reverberator to the at least one input audio signal (renderer 307 generates an audio signal with reverberation representing the reverberation properties, paragraph [0094]).

In regard to claim 22, Koppens discloses initializing the at least one reverberator using the at least one reverberator parameter independent of whether at least one acoustic environment is a virtual acoustic environment or an augmented reality acoustic environment (VR and AR environments are not distinguished, paragraphs [0005] and [0012]).

Allowable Subject Matter

Claims 1-2, 4-8, and 14-17 are allowed. The following is an examiner's statement of reasons for allowance:

Claim 1 is directed to an apparatus for encoding an audio signal, wherein the apparatus obtains at least one reverberation parameter, converts the reverberation parameter into at least one frequency band data, compares the resources required to transmit the encoded frequency band data and the encoded at least one reverberation parameter, and generates a bitstream that includes one or more of the encoded frequency band data and the encoded at least one reverberation parameter based on the comparison.
As noted above, Koppens discloses reverberation parameters that are provided for a plurality of frequencies. The frequencies are provided as indexes into a list of frequency grids defined in FreqGridData() (paragraph [0216]). The frequency grid data contains frequency grid definitions that can be referenced by index by any of the parameters in the bitstream (paragraphs [0167]-[0168]). The frequency grid may be defined based on octaves (paragraph [0074]). As disclosed by Koppens, using the frequency grid provides a "flexible approach" that allows the encoder "the freedom to select a format for the description of the frequency grid that is particularly suitable for the specific properties and their specific frequency dependency" (paragraphs [0185]-[0188]). Encoding the reverberation data according to the frequency grid would therefore encode the reverberation data in a flexible manner, where the reverberation parameters could be encoded according to octave bands ("frequency band data") or at arbitrary frequencies.

However, Koppens does not disclose or suggest encoding reverberation parameters as both frequency band data and as a reverberation parameter, and then comparing the resources required to transmit the encoded frequency band data and the encoded reverberation parameter, as required by claim 1. The general concept of comparing different methods of encoding data according to the resources required is well known. For example, Norvell et al. (U.S. Patent Application Pub. No. 2025/0063162) disclose a method performed in an encoder that determines a bitrate for a predictive encoding mode and an absolute encoding mode and selects one of the coding modes based on the comparison (see Abstract). However, the prior art of record does not disclose or suggest the application of this general concept to select between reverberation parameter data encoded as frequency band data or encoded as a reverberation parameter, as required by claim 1.
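The allowed claims turn on a rate-comparison mode decision: cost out the same reverberation data under both encodings and signal the cheaper one in the bitstream. The sketch below is illustrative only; the function names, field widths, and mode labels are invented for the example and are not the claim language or any actual bitstream syntax:

```python
def band_data_bits(band_values, bits_per_value=8):
    """Frequency-band mode: one quantized value per band of a fixed,
    pre-agreed grid (e.g. octave bands), so no frequencies are sent."""
    return len(band_values) * bits_per_value

def parameter_bits(freq_value_pairs, bits_per_freq=16, bits_per_value=8):
    """Parameter mode: each value (e.g. a T60 time) travels with its own
    frequency, allowing arbitrary, sparse grids."""
    return len(freq_value_pairs) * (bits_per_freq + bits_per_value)

def choose_encoding(band_values, freq_value_pairs):
    """Compare the transmission cost of the two encodings and return
    (mode_indicator, payload_bits) for the cheaper one."""
    band_cost = band_data_bits(band_values)
    param_cost = parameter_bits(freq_value_pairs)
    if band_cost <= param_cost:
        return "FREQUENCY_BAND_DATA", band_cost
    return "REVERB_PARAMETERS", param_cost

# Ten octave bands (80 bits) vs three sparse (frequency, T60) pairs (72 bits):
print(choose_encoding([0.5] * 10, [(250, 0.6), (1000, 0.5), (4000, 0.3)]))
# -> ('REVERB_PARAMETERS', 72)
```

The returned mode indicator plays the role of the bitstream flag that claims 10-11 and 19-20 would add: a signal telling the decoder which of the two representations the payload contains.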
Claim 14 recites similar limitations to claim 1 and is allowed for the same reasons.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Claims 10-11 and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: As noted above, the prior art of record does not disclose or suggest an encoded bitstream that includes a selection between a reverberation parameter encoded as frequency band data or encoded as a reverberation parameter. Accordingly, the prior art of record does not disclose or suggest a bitstream that includes at least one indicator indicating that the bitstream comprises one of the encoded frequency band data or the encoded reverberation parameter, as required by claims 10-11 and 19-20.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Herre et al., Schwar et al., Koppens, Ye et al., Koppens et al., Eronen et al., Honma et al., Neukam et al., Liu et al., Liu et al., Lee et al., Ojala et al., and Coleman et al. disclose additional methods of encoding reverberation parameters into a bitstream.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN LOUIS ALBERTALLI, whose telephone number is (571) 272-7616. The examiner can normally be reached M-F 8AM-3PM and 4PM-5PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

BLA 3/6/26
/BRIAN L ALBERTALLI/
Primary Examiner, Art Unit 2656

Prosecution Timeline

Aug 27, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592247: INFERRING EMOTION FROM SPEECH IN AUDIO DATA USING DEEP LEARNING
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12573407: QUICK AUDIO PROFILE USING VOICE ASSISTANT
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12574386: DISTRIBUTED IDENTIFICATION IN NETWORKED SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12572327: CONDITIONALLY ASSIGNING VARIOUS AUTOMATED ASSISTANT FUNCTION(S) TO INTERACTION WITH A PERIPHERAL ASSISTANT CONTROL DEVICE
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573382: ADVERSARIAL LANGUAGE IMITATION WITH CONSTRAINED EXEMPLARS
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 98% (+16.5%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 852 resolved cases by this examiner. Grant probability derived from career allow rate.
