Prosecution Insights
Last updated: April 19, 2026
Application No. 18/637,073

ADAPTABLE SPATIAL AUDIO PLAYBACK

Non-Final OA (§102, §103)
Filed: Apr 16, 2024
Examiner: SNIEZEK, ANDREW L
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Dolby International AB
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 85% — above average (1030 granted / 1213 resolved; +22.9% vs TC avg)
Interview Lift: +8.8% (moderate lift, across resolved cases with interview)
Avg Prosecution: 2y 1m — fast prosecutor (28 currently pending)
Total Applications: 1241 (career history, across all art units)
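As a quick sanity check, the headline figures in this card are internally consistent. The sketch below is an illustration of the simple arithmetic the dashboard appears to use (an assumption based on the displayed numbers, not the vendor's actual methodology):

```python
# Reproducing the examiner card's headline figures from its raw counts.
# Assumption: "with interview" adds the lift in percentage points to the
# rounded career allow rate.

granted = 1030   # career grants (from "1030 granted / 1213 resolved")
resolved = 1213  # career resolved cases

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~84.9%, displayed as 85%

interview_lift = 8.8  # percentage points, from resolved cases with interviews
with_interview = round(allow_rate * 100) + interview_lift
print(f"Grant probability with interview: {with_interview:.1f}%")  # displayed as 94%
```

Rounded, these recover the 85% and 94% figures shown above and in the Prosecution Projections section.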

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 36.8% (-3.2% vs TC avg)
§102: 35.1% (-4.9% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 1213 resolved cases
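Each delta above can be checked against the examiner's own rate. The short sketch below is illustrative arithmetic only, assuming each delta is a simple difference between the examiner's rate and the Tech Center estimate:

```python
# Recovering the Tech Center average implied by each statute's delta.
# tc_avg = examiner_rate - delta (assumption: deltas are plain differences).

examiner = {"101": 2.7, "103": 36.8, "102": 35.1, "112": 18.8}
delta = {"101": -37.3, "103": -3.2, "102": -4.9, "112": -21.2}

for statute in examiner:
    tc_avg = examiner[statute] - delta[statute]
    print(f"Sec. {statute}: examiner {examiner[statute]}% vs TC avg {tc_avg:.1f}%")
```

Under that assumption, every statute's delta implies the same 40.0% baseline, which may indicate the comparison uses a single overall Tech Center estimate rather than per-statute figures.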

Office Action

Rejections: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements filed 11/8/24 and 5/23/24 have been considered.

Drawings

The drawings filed 4/6/24 are acceptable to the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 10-11, 14, 17 and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Falk et al. (US 11,968,520 B2).

Re claims 1, 19 and 20: Falk et al. teaches an audio processing system, along with a method of operation, comprising: an interface system (such as structure to obtain a stereo recording, column 4, lines 60-67); and a control system (400, figure 4) configured for: receiving audio data (stereo audio) via the interface system, the audio data including one or more audio signals and associated spatial data (such as Left and Right), the spatial data indicating an intended perceived spatial position corresponding to an audio signal, the spatial data including at least one of channel data or spatial metadata (454); receiving, via the interface system, listener position and orientation data (453, figure 4); determining a rendering mode (by use of controller 401 in conjunction with 402 and 403); and rendering the audio data for reproduction via a set of loudspeakers (such as 404, 405) of an environment according to the rendering mode, to produce rendered audio signals, wherein: rendering the audio data comprises determining relative activation of a set of loudspeakers in an environment (by those signals used in driving the loudspeakers as depicted in figure 4); the rendering mode is variable between a reference spatial mode (such as the mode of operation when the person is at listening position "A", figure 2) and one or more distributed spatial modes (such as the mode of operation when the person is at one of listening positions "B"-"E", figure 2); the reference spatial mode has an assumed front sound stage location or orientation (the area of audio element (101) facing the listener) that varies according to a listener position and orientation indicated by the listener position and orientation data (see the various sound stages in figures 2, 7A and 7B that face the listener based on the listener's position/orientation relative to that sound stage); and, in the one or more distributed spatial modes, one or more elements of the audio data are each rendered in a more spatially distributed manner than in the reference spatial mode, and spatial locations of remaining elements of the audio data are warped such that they span a rendering space of the environment more completely than in the reference spatial mode (by providing appropriate rendered signals from the rendering arrangement (401-403) to each of the loudspeakers during a rendering mode corresponding to the relative location and orientation between the sound stage (101) and the listener (104)); and providing, via the interface system, the rendered audio signals to at least some loudspeakers of the set of loudspeakers of the environment (such as loudspeakers 404, 405, or any other possible arrangement as discussed in column 7, lines 29-32).

The method of operation set forth in claim 19 includes operations corresponding to the system features set forth in claim 1, with those operations satisfied by the operation of the elements of Falk et al. discussed above. Claim 20 contains the features of claims 1 and 19 and additionally sets forth a medium having instructions used for controlling operations. These additional features are satisfied by the use of a storage medium having program instructions that are used by processing circuitry to perform the operations, as taught in column 3, lines 51-58, and the arrangement of figure 12 along with the corresponding written disclosure.
Re claim 10: As seen from figures 2, 7A and 7B, the rendering mode is a continuum of rendering modes from one to another, based on the position/orientation of the listener and the changing of that position/orientation with respect to the audio element stage.

Re claim 11: See figure 4; the input to controller (401) includes metadata representative of spatially related information corresponding to a rendering mode of operation.

Re claim 14: The alternative claimed audio data is satisfied by the river or sea audio data (figures 3A-3C), satisfying at least the broadly worded sound stage data as set forth.

Re claim 17: Note figure 4, item (454), which is spatial metadata used to determine the audio rendering in a manner according to this data.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Falk et al. in view of Shi et al. (US 10,728,683 B2).

Re claims 2 and 12: The teaching of Falk et al. is discussed above and incorporated herein. Falk et al. does not teach that the sensing unit used to determine a listener position and orientation uses a camera; it instead uses a Lidar tracking arrangement. Shi et al. teaches, in a similar environment of audio rendering, that cameras such as (301) are used for determining listening positions and orientations (see the discussion in column 7, lines 12-24). It would have been obvious to one of ordinary skill in the art before the filing of the invention to replace the Lidar tracking arrangement of Falk et al. with a camera tracking arrangement to predictably provide an alternative means for determining a listener's position and/or orientation. Therefore the claimed subject matter would have been obvious before the filing of the invention.

Similarly, Falk et al. does not teach that the spatial mode data is obtained from a microphone system or a camera system as set forth in claim 12. Shi et al. teaches, in a similar environment of audio rendering, that cameras such as (301) are used for determining spatial mode data for audio rendering. It would have been obvious to one of ordinary skill in the art before the filing of the invention to incorporate a camera system as taught in Shi et al. into the arrangement of Falk et al. to predictably provide spatial mode data to be used for audio rendering. Therefore the claimed subject matter would have been obvious before the filing of the invention.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Falk et al. in view of Shi et al. (US 10,375,498 B2).

Re claim 3: The teaching of Falk et al. is discussed above and incorporated herein. Falk et al. does not teach that the interface system includes a user interface to provide a listener's position and orientation data. Shi et al. teaches, in a similar environment of audio rendering, that a user interface (such as that depicted in figure 3B) is used for determining a listener's position/orientation (column 14, lines 50-67), allowing the user to control the position at which audio will be rendered. It would have been obvious to one of ordinary skill in the art before the filing of the invention to incorporate this feature of Shi et al. into the arrangement of Falk et al. to predictably provide a means allowing a user to change the listening location for audio rendering. Therefore the claimed subject matter would have been obvious before the filing of the invention.

Claims 4 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Falk et al. in view of De Bruijn et al. (US 2016/0080886 A1), provided by applicant.

Re claim 4: The teaching of Falk et al. is discussed above and incorporated herein. Falk et al. does not teach receiving a rendering mode indication to determine the rendering mode. De Bruijn et al. teaches, in a similar environment of audio rendering, receiving a rendering mode indication (render configuration data, paragraph [0093]) used by a renderer to provide audio to a listener. It would have been obvious to one of ordinary skill in the art before the filing of the invention to incorporate such a feature taught by De Bruijn et al. into the arrangement of Falk et al. to predictably provide a means of indicating to a renderer how audio will be provided to a listener. Therefore the claimed subject matter would have been obvious before the filing of the invention.

Re claim 18: The teaching of Falk et al. is discussed above and incorporated herein. Falk et al. does not teach a content type classifier as set forth. De Bruijn et al. teaches including such a classifier of audio content characteristics/type (paragraphs [0083] and [0151]) in the metadata to select an appropriate mode of audio rendering. It would have been obvious to one of ordinary skill in the art before the filing of the invention to incorporate such a feature into the arrangement of Falk et al. to predictably select an appropriate mode of audio rendering. Therefore the claimed subject matter would have been obvious before the filing of the invention.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Falk et al. in view of Tsingos et al. (US 9,204,236 B2).

Re claim 15: The teaching of Falk et al. is discussed above and incorporated herein. Additionally, note that Falk et al. teaches that other speaker arrangements may be used (column 7, lines 29-33), but not the specific arrangement(s) of claim 15. Tsingos et al. teaches that audio rendering can be achieved with the use of at least a Dolby 5.1 arrangement, allowing for an improved audio sound experience (column 1, lines 23-53), satisfying the alternative language as set forth. It would have been obvious to one of ordinary skill in the art to modify the speaker arrangement used in Falk et al. to include at least a Dolby 5.1 speaker arrangement to predictably provide an improved audio sound experience. Therefore the claimed subject matter would have been obvious before the filing of the invention.

Allowable Subject Matter

Claims 5-9, 13 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:

The claimed audio processing system that in combination includes those features of claim 4/1 wherein the rendering mode indication is a received voice command (claim 5) or sensor signals received via a graphical user interface (claim 6), or where the rendering mode indication involves receiving an indication of a number of people in a listening area (claim 8), is neither taught by nor an obvious variation of the art of record. The limitations of claims 7 and 9 depend upon those features of claims 6/4/1 and 8/4/1, respectively.

The claimed audio processing system that in combination includes those features of claim 11/1 and further comprises a display device and a sensor system proximate the display device, wherein the control system is further configured for controlling the display device to present a graphical user interface, and receiving reference spatial mode data involves receiving sensor signals corresponding to user input via the graphical user interface, as set forth in claim 13, is neither taught by nor an obvious variation of the art of record.

The claimed audio processing system that in combination includes those features of claim 14/1 wherein the front sound stage data comprises audio data received in Dolby Atmos format and having spatial metadata indicating an (x,y) spatial position wherein y < 0.5, as set forth in claim 16, is neither taught by nor an obvious variation of the art of record.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW SNIEZEK, whose telephone number is (571) 272-7563. The examiner can normally be reached Monday-Friday, 7:00 AM-3:30 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW SNIEZEK/
Primary Examiner, Art Unit 2693
/A.S./
Primary Examiner, Art Unit 2693
2/25/26

Prosecution Timeline

Apr 16, 2024
Application Filed
Feb 25, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598415
AUDIO PROCESSING SYSTEM AND METHOD
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12598421
ELECTRONIC DEVICE AND CONTROLLING METHOD OF ELECTRONIC DEVICE
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12582375
Modular Auscultation Device
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12581235
NOISE REDUCTION SYSTEM USING FINITE IMPULSE RESPONSE FILTER THAT IS UPDATED BY CONFIGURATION OF MINIMUM PHASE FILTER FOR NOISE REDUCTION AND ASSOCIATED METHOD
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12568326
MECHANISM FOR EXTERNAL MULTI-FUNCTIONAL CABLE RETENTION FOR A HEARING DEVICE
Granted Mar 03, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 94% (+8.8%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 1213 resolved cases by this examiner. Grant probability derived from career allow rate.
