Prosecution Insights
Last updated: April 19, 2026
Application No. 18/779,533

SYSTEM AND TOOLS FOR ENHANCED 3D AUDIO AUTHORING AND RENDERING

Non-Final OA §DP
Filed: Jul 22, 2024
Examiner: YU, NORMAN
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Dolby Laboratories Licensing Corporation
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (525 granted / 598 resolved), +25.8% vs TC avg (above average)
Interview Lift: +13.5% among resolved cases with interview (moderate lift)
Avg Prosecution: 2y 1m (fast prosecutor), 35 currently pending
Total Applications: 633 across all art units (career history)

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 51.8% (+11.8% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 598 resolved cases.

Office Action

§DP
Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-15 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1, 6 and 11 of Patent 9838826 in view of Schevciw (US 2019/0239015). Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1, 6 and 11 of Patent 9838826 anticipate all the limitations in claims 1, 6 and 11 of the instant application except for “wherein the amplitude panning process is based, at least in part, on a location of each of one or more virtual speakers.” Schevciw teaches this limitation (Schevciw ¶0073). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Schevciw to improve the known apparatus of Patent 9838826 to achieve the predictable result of more efficient processing during rendering (Schevciw ¶0066).

Dependent claims 2-10 and 12-15 are also rejected because they are obvious variants of the patented claims. Claims 2-10 and 12-15 of Patent 9838826 teach dependent claims 2-10 and 12-15 of the instant application, respectively.

Claim comparison:

Patent 9838826, claim 1: A method, comprising: receiving audio reproduction data comprising one or more audio objects and metadata associated with each of the one or more audio objects; receiving reproduction environment data comprising an indication of a number of reproduction speakers in the reproduction environment and an indication of the location of each reproduction speaker within the reproduction environment; and rendering the audio objects into one or more speaker feed signals by applying an amplitude panning process to each audio object, wherein the amplitude panning process is based, at least in part, on the metadata associated with each audio object and the location of each reproduction speaker within the reproduction environment, and wherein each speaker feed signal corresponds to at least one of the reproduction speakers within the reproduction environment; wherein the metadata associated with each audio object includes audio object coordinates indicating the intended reproduction position of the audio object within the reproduction environment and zone constraint metadata indicating whether rendering the audio object involves imposing speaker zone constraints.

Patent 9838826, claim 6: An apparatus, comprising: an interface system; and a logic system configured for: receiving, via the interface system, audio reproduction data comprising one or more audio objects and metadata associated with each of the one or more audio objects; receiving, via the interface system, reproduction environment data comprising an indication of a number of reproduction speakers in the reproduction environment and an indication of the location of each reproduction speaker within the reproduction environment; and rendering the audio objects into one or more speaker feed signals by applying an amplitude panning process to each audio object, wherein the amplitude panning process is based, at least in part, on the metadata associated with each audio object and the location of each reproduction speaker within the reproduction environment, and wherein each speaker feed signal corresponds to at least one of the reproduction speakers within the reproduction environment; wherein the metadata associated with each audio object includes audio object coordinates indicating the intended reproduction position of the audio object within the reproduction environment and zone constraint metadata indicating whether rendering the audio object involves imposing speaker zone constraints.

Patent 9838826, claim 11: A non-transitory medium comprising a sequence of instructions, wherein the instructions, when executed by an audio signal processing device, cause the audio signal processing device to perform a method comprising: receiving audio reproduction data comprising one or more audio objects and metadata associated with each of the one or more audio objects; receiving reproduction environment data comprising an indication of a number of reproduction speakers in the reproduction environment and an indication of the location of each reproduction speaker within the reproduction environment; and rendering the audio objects into one or more speaker feed signals by applying an amplitude panning process to each audio object, wherein the amplitude panning process is based, at least in part, on the metadata associated with each audio object and the location of each reproduction speaker within the reproduction environment, and wherein each speaker feed signal corresponds to at least one of the reproduction speakers within the reproduction environment; wherein the metadata associated with each audio object includes audio object coordinates indicating the intended reproduction position of the audio object within the reproduction environment and zone constraint metadata indicating whether rendering the audio object involves imposing speaker zone constraints.

Instant Application 18/779533, claim 1: A method, comprising: receiving audio reproduction data comprising one or more audio objects and metadata associated with each of the one or more audio objects; receiving reproduction environment data comprising an indication of a number of reproduction speakers in the reproduction environment and an indication of the location of each reproduction speaker within the reproduction environment; and rendering the audio objects into one or more speaker feed signals by applying an amplitude panning process to each audio object, wherein the amplitude panning process is based, at least in part, on the metadata associated with each audio object, a location of each of one or more virtual speakers, and the location of each reproduction speaker within the reproduction environment, and wherein each speaker feed signal corresponds to at least one of the reproduction speakers within the reproduction environment; wherein the metadata associated with each audio object includes audio object coordinates indicating the intended reproduction position of the audio object within the reproduction environment and zone constraint metadata indicating whether rendering the audio object involves imposing speaker zone constraints.

Instant Application 18/779533, claim 6: An apparatus, comprising: an interface system; and a logic system configured for: receiving, via the interface system, audio reproduction data comprising one or more audio objects and metadata associated with each of the one or more audio objects; receiving, via the interface system, reproduction environment data comprising an indication of a number of reproduction speakers in the reproduction environment and an indication of the location of each reproduction speaker within the reproduction environment; and rendering the audio objects into one or more speaker feed signals by applying an amplitude panning process to each audio object, wherein the amplitude panning process is based, at least in part, on the metadata associated with each audio object, a location of each of one or more virtual speakers, and the location of each reproduction speaker within the reproduction environment, and wherein each speaker feed signal corresponds to at least one of the reproduction speakers within the reproduction environment; wherein the metadata associated with each audio object includes audio object coordinates indicating the intended reproduction position of the audio object within the reproduction environment and zone constraint metadata indicating whether rendering the audio object involves imposing speaker zone constraints.

Instant Application 18/779533, claim 11: A non-transitory medium comprising a sequence of instructions, wherein the instructions, when executed by an audio signal processing device, cause the audio signal processing device to perform a method comprising: receiving audio reproduction data comprising one or more audio objects and metadata associated with each of the one or more audio objects; receiving reproduction environment data comprising an indication of a number of reproduction speakers in the reproduction environment and an indication of the location of each reproduction speaker within the reproduction environment; and rendering the audio objects into one or more speaker feed signals by applying an amplitude panning process to each audio object, wherein the amplitude panning process is based, at least in part, on the metadata associated with each audio object, a location of each of one or more virtual speakers, and the location of each reproduction speaker within the reproduction environment, and wherein each speaker feed signal corresponds to at least one of the reproduction speakers within the reproduction environment; wherein the metadata associated with each audio object includes audio object coordinates indicating the intended reproduction position of the audio object within the reproduction environment and zone constraint metadata indicating whether rendering the audio object involves imposing speaker zone constraints.

Allowable Subject Matter

Claims 1-15 would be allowable if a terminal disclaimer is filed to overcome the double patenting rejection(s) set forth in this office action.
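For context on the disputed limitation: the amplitude panning process recited in the claims (deriving speaker feed gains from an object's position and the speaker locations) is in the same family as classic constant-power pairwise panning. The sketch below is an illustration of that general technique only, not the patented method; the function and variable names are invented for this example.

```python
import math

def pan_gains(obj_angle, spk_left, spk_right):
    """Constant-power amplitude panning between one speaker pair.

    Angles are in degrees, with obj_angle lying between the two
    speaker angles. Returns (left_gain, right_gain) such that
    left_gain**2 + right_gain**2 == 1, i.e. constant acoustic power
    as the object moves across the pair.
    """
    frac = (obj_angle - spk_left) / (spk_right - spk_left)  # 0.0 at left, 1.0 at right
    return math.cos(frac * math.pi / 2), math.sin(frac * math.pi / 2)

# An audio object dead centre between speakers at -30 and +30 degrees
# receives equal gains of about 0.707 in each speaker feed.
g_l, g_r = pan_gains(0.0, -30.0, 30.0)
```

Object-based renderers generalise this idea to 2D/3D layouts (e.g. vector base amplitude panning over speaker triplets), and the claims layer per-object metadata such as speaker zone constraints, and here virtual speaker locations, on top of the basic gain computation.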
The following is an examiner’s statement of reasons for allowance: Independent claims 1, 6, and 11 are allowed because the closest cited prior art, Dressler (US 2012/0230497), either alone or in combination, fails to anticipate or render obvious the claimed limitation of “wherein the metadata associated with each audio object includes audio object coordinates indicating the intended reproduction position of the audio object within the reproduction environment and zone constraint metadata indicating whether rendering the audio object involves imposing speaker zone constraints” in combination with all other limitations in the claim(s) as defined by the applicant.

Dressler ¶0023 teaches “an audio creation system to create audio objects by associating sound sources with attributes of those sound sources, such as location, velocity, directivity, downmix parameters to specific speaker locations. Then audio objects can be rendered based on the attribute information encoded in the objects based on the object's coordinates and metadata.” However, Dressler fails to teach “wherein the metadata associated with each audio object includes audio object coordinates indicating the intended reproduction position of the audio object within the reproduction environment and zone constraint metadata indicating whether rendering the audio object involves imposing speaker zone constraints.” Consequently, the independent claims are allowed for the reasons discussed above. Because dependent claims 2-5, 7-10, and 12-15 each depend from one of the independent claims, they are also patentable.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee.
Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NORMAN YU, whose telephone number is (571) 270-7436. The examiner can normally be reached Mon - Fri, 11am-7pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Any response to this action should be mailed to: Commissioner of Patents and Trademarks, P.O. Box 1450, Alexandria, VA 22313-1450, or faxed to (571) 273-8300. For formal communications intended for entry and for informal or draft communications, please label “PROPOSED” or “DRAFT”. Hand-delivered responses should be brought to: Customer Service Window, Randolph Building, 401 Dulany Street, Arlington, VA 22314.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NORMAN YU/
Primary Examiner, Art Unit 2693

Prosecution Timeline

Jul 22, 2024
Application Filed
Feb 28, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604123: APPARATUS AND VEHICULAR APPARATUS INCLUDING THE SAME
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598409: IN-EAR WEARABLE DEVICE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12594882: AUTOMOTIVE SOUND AMPLIFICATION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593165: ACOUSTIC INPUT-OUTPUT DEVICES
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12581238: BINDING BAND ASSEMBLY FOR HEADSET AND HEADSET
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview (+13.5%): 99%
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 598 resolved cases by this examiner. Grant probability derived from career allow rate.
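The headline grant probability follows directly from the career totals quoted above; a quick check of that arithmetic is below. (Treating the allow rate as a simple granted/resolved ratio is an assumption based on the page's own footnote; the 99% with-interview figure is presumably capped or modelled separately, since naively adding the +13.5% lift to 88% would exceed 100%.)

```python
# Examiner's career totals quoted on this page
granted, resolved = 525, 598

allow_rate = granted / resolved       # career allow rate as a fraction
print(f"{allow_rate:.1%}")            # prints "87.8%", displayed rounded as 88%
```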
