Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This is in response to the application filed on 01/05/2024.
Information Disclosure Statement
2. The information disclosure statement (IDS) submitted is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the Examiner.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-8, 10-19, 24 and 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mehta et al. (Pub. No.: 2015/0223002 A1) in view of Malak (Pub. No.: 2015/0223002 A1).
Regarding claims 1, 10 and 24, Mehta teaches an apparatus (reads on an object-based audio rendering system comprising a reception module, rendering module, playback system and audio processing system, see [0009] and [0080]) and method ([0009]), comprising:
at least one processor (see [0006] and [0080]); and
at least one non-transitory memory storing instructions that, when executed with the at least one processor (see [0006], [0080] and [0045]), cause the apparatus at least to:
obtain at least one audio signal (reads on receiving object-based audio signals including waveform data, see [0080]);
obtain at least one anchor parameter associated with the at least one audio signal (reads on metadata associated with each audio object including spatial location of the audio object in 3D space, see [0045]); and
obtain information configured to assist, within an audio scene within which the at least one audio signal is to be rendered (reads on rendering audio objects based on speaker configuration and spatial rendering metadata, thereby adapting spatial placement parameters of audio objects, see [0006]).
Mehta's features are already discussed in the rejection of claims 1, 10 and 24 above. Mehta does not specifically teach “adaptation of the at least one anchor parameter with at least one further anchor parameter”.
However, Malak teaches mapping object-based audio signals relative to speaker positions and selecting speaker groups based on spatial relationships between object location and speaker location, see [0077], [0074] and [0083].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mehta to incorporate the positional mapping, as taught by Malak, in order to improve spatial accuracy and rendering consistency across multiple playback environments.
Claim 26 is rejected for the same reasons addressed in independent claims 1 and 24.
Regarding claims 2 and 12, the combination of Mehta and Malak teaches wherein information configured to assist in the adaptation comprises at least one of:
guidance metadata configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within the audio scene within which the at least one audio signal is to be rendered; or
information configured to define a geometry of a virtual or augmented audio scene wherein the at least one anchor parameter defines a position with respect to the virtual or augmented audio scene geometry (reads on rendering based on speaker configuration and spatial playback environment corresponding to audio scene geometry, see Mehta [0006]. Also reads on positional parameters defining spatial relationships between audio objects and speakers including listener-relative geometry, see Malak [0074] and [0083]).
Regarding claims 3 and 13, the combination of Mehta and Malak teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to obtain information to obtain at least one of:
a spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a distance between the one audio element anchor and the at least one anchor within the audio scene (reads on rendering audio objects based on spatial location within the listening environment, see Mehta [0006]. Further, it reads on positional parameters including distance, azimuth, and elevation used for mapping audio objects to speaker sets, see Malak [0074]);
a temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene; or
a priority list parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a priority list of candidate mappings.
Regarding claims 4, 16 and 18, the combination of Mehta and Malak teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to obtain information to obtain a processor filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene based on a renderer processor value (this reads on rendering modules executing algorithms for distributing audio objects based on playback configuration, see Mehta [0006] and [0080]. Also, it reads on selecting speaker sets based on spatial mapping algorithms applied during rendering, see Malak [0077]).
Regarding claims 5 and 17, the combination of Mehta and Malak teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to obtain information to obtain at least one of:
an alternative anchor filtering parameter configured to control a mapping of the at least one audio element anchor to an alternative one of at least one anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene (reads on flexible rendering allowing audio objects to be reproduced across various speaker configurations, see Mehta [0006]. Also, it reads on mapping audio objects to the closest speaker groups/set of speakers (e.g., three speakers in a triangle pattern) based on the location of the object-based audio signal, see Malak [0077]);
a default position parameter configured to control a positioning of the at least one audio element anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; or
a multiple anchors parameter comprising identifiers identifying at least two candidate anchors within the audio scene and configured to control a mapping to at least one of the candidate anchors within the audio scene based on at least one of the candidate anchors being located within the audio scene.
Regarding claim 6, the combination of Mehta and Malak teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to obtain information to obtain an instance processing parameter configured to control a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene (reads on rendering multiple audio objects with individual spatial metadata and rendering instructions, see Mehta [0006] and [0045]. Also, in Malak this may read on mapping multiple audio objects relative to speaker configuration using spatial metadata, see [0045] and [0077]).
Regarding claim 7, the combination of Mehta and Malak teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to obtain information to obtain a mapping modification processing parameter configured to control whether a mapping modification or processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene (Mehta teaches adaptive spatial rendering allowing modification of object placement based on playback configuration, see [0006]. Also, Malak teaches dynamic mapping of audio objects to speakers based on spatial relationships, see [0077] and [0083]).
Claim 8 is rejected for the same reasons addressed in independent claim 1. Also, for the claimed feature of “generate at least one bitstream”, Malak teaches object metadata and positional data transmitted with audio signals for rendering, see [0045] and [0074].
Regarding claim 14, the combination of Mehta and Malak teaches wherein the instructions, when executed with the at least one processor, cause the spatial filtering parameter to control the mapping based on one of:
a nearest anchor selection for selecting at least one anchor within the audio scene nearest the at least one audio element anchor (Malak teaches selecting the closest speaker groups based on object position, see [0077]);
a farthest anchor selection for selecting at least one anchor within the audio scene farthest from the at least one audio element anchor;
a maximal spread anchor selection for selecting at least one anchor within the audio scene to distribute the at least one audio element anchor such that they are located with a largest spread with respect to each other; and
a user input-based anchor selection.
Regarding claim 15, the combination of Mehta and Malak teaches wherein mapping of the at least one audio element anchor to the at least one anchor within the audio scene is based on the time difference between the at least one audio element anchor and the at least one anchor within the audio scene to control the mapping based on one of:
an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene;
an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene with later modifications based on a user movement;
a maximal spread anchor selection for selecting the at least one anchor within the audio scene to distribute the one audio element anchors farthest from others; or
a user input-based anchor selection (Mehta teaches rendering based on playback configuration input, see [0006]).
Regarding claim 19, the combination of Mehta and Malak teaches wherein the information is at least one of:
a mapping modification processing parameter, wherein the instructions, when executed with the at least one processor, cause the apparatus to associate the at least one anchor parameter with the at least one audio scene anchor parameter, the mapping modification processing parameter being configured to control whether a mapping modification or processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene; or
a dynamic updating parameter configured to control whether the at least one audio element anchor dynamically moves within the audio scene (Mehta teaches positional trajectory of audio objects during rendering, see [0006]).
Conclusion
4. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Rasha S. AL-Aubaidi whose telephone number is (571) 272-7481. The examiner can normally be reached on Monday-Friday from 8:30 am to 5:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached on (571) 272-7488.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/RASHA S AL AUBAIDI/Primary Examiner, Art Unit 2693