DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 05/31/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-7 and 11-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mate et al. (WO 2021186104 A1, cited in the IDS; US 20230171557 A1, a family publication, is used for the prior art citations below).
Regarding claim 1: Mate teaches an apparatus (Fig. 10: apparatus 900) for generating information to assist rendering an audio scene, the apparatus comprising:
at least one processor (Fig. 10: processor 902 and para [0110]); and
at least one memory storing instructions (Fig. 10: Memory 904 and para [0110]) that, when executed with the at least one processor, cause the apparatus at least to:
obtain at least one audio signal (Fig. 2: Encoder 202 receiving encoder input format (EIF) 200 together with the audio data (audio signals, SOFA files, etc.) and para [0036]);
obtain at least one scene parameter associated with the at least one audio signal, the at least one scene parameter being configured to define a position within the audio scene, wherein the audio scene is defined with the at least one audio signal and the at least one scene parameter (Fig. 2: Encoder 202 receiving encoder input format (EIF) 200 having positions of audio objects which read on scene parameters together with the audio data (audio signals, SOFA files, etc.) and para [0036]);
obtain at least one anchor parameter associated with the at least one audio signal, wherein the at least one anchor parameter is associated with at least one listening space anchor located within a listening space during rendering and the at least one anchor parameter is configured to assist in mapping the position within the listening space, the listening space being at least one of a virtual and/or physical space within which the audio scene is rendered, wherein the mapped position is at least one of scaled to fit within the listening space or the mapped position at least in part modifies the listening space (Fig. 2, Fig. 3, and Fig. 4: anchor object, anchor parameter, and positioning the anchor object in the listening space; Fig. 5: steps 512-524: positioning and dynamically scaling other audio objects in relation to the anchor object; also see para [0048]-[0054] and para [0058]-[0070]); and
generate a bitstream comprising the at least one audio signal, the at least one scene parameter, and the at least one anchor parameter (Fig. 2: Encoder 202 generates AR sensing enable bitstream 204).
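For illustration only, the following minimal sketch (not taken from Mate or from the claims; all names, types, and values are hypothetical) shows one way a scene-relative position could be scaled to fit a listening space and offset by a listening-space anchor, in the manner of the mapping limitation discussed above.

# Hypothetical sketch only; not from Mate (US 20230171557 A1) or the claims.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def map_position(scene_pos: Vec3, scene_extent: Vec3,
                 anchor_pos: Vec3, space_extent: Vec3) -> Vec3:
    # Per-axis scale factor, clamped to <= 1 so the scene is only shrunk to
    # fit the listening space, never enlarged.
    sx = min(1.0, space_extent.x / scene_extent.x)
    sy = min(1.0, space_extent.y / scene_extent.y)
    sz = min(1.0, space_extent.z / scene_extent.z)
    # Offset the scaled scene-relative position by the listening-space anchor.
    return Vec3(anchor_pos.x + scene_pos.x * sx,
                anchor_pos.y + scene_pos.y * sy,
                anchor_pos.z + scene_pos.z * sz)

# Example: a 10 m x 3 m x 10 m scene mapped into a 4 m x 3 m x 4 m room
# anchored at the room origin.
print(map_position(Vec3(5, 0, 2), Vec3(10, 3, 10), Vec3(0, 0, 0), Vec3(4, 3, 4)))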
Regarding claim 2: Mate teaches the apparatus as claimed in claim 1, wherein the instructions, when executed with the at least one processor, further cause the apparatus to obtain a scene origin parameter wherein the position within the audio scene is defined relative to the scene origin parameter, and wherein the bitstream further comprises the scene origin parameter (Fig. 5: steps 510, 512, and 514: scene origin parameter being rendered).
Regarding claim 3: Mate teaches the apparatus as claimed in claim 1, wherein the at least one anchor parameter defines a geometric shape at least partially defining a boundary of the audio scene wherein the position is within the boundary of the audio scene, and the mapped position maps the boundary of the audio scene within the listening space (para [0048]-[0053]).
Regarding claim 4: Mate teaches the apparatus as claimed in claim 1, wherein the instructions, when executed with the at least one processor, cause the apparatus to at least one of:
store the generated bitstream; or
transmit the generated bitstream (Fig. 2: bitstream 204 is transmitted to Renderer 206 for rendering audio scene).
Regarding claim 5: the apparatus discussed in claim 1 above also supports this corresponding rendering apparatus, which performs the decoding and dynamic modification counterpart of the encoding process in encoder 202 (also see Fig. 2: Renderer 206 and para [0058]-[0070]).
Regarding claim 6: Mate teaches the apparatus as claimed in claim 5, wherein the at least one anchor parameter at least partially defines a geometric shape defining a boundary of the audio scene (para [0048]-[0053]).
Regarding claim 7: Mate teaches the apparatus as claimed in claim 5, wherein the bitstream further comprises a scene origin, and wherein the instructions, when executed with the at least one processor, cause the apparatus to render the at least one spatial audio signal further based on the listening position within the listening space and the scene origin with respect to the geometric shape defining the boundary of the audio scene (para [0002] and Fig. 5: steps 510, 512, and 514: scene origin parameter being rendered).
Regarding claims 11-14: the apparatus discussed in claims 1-4 above also supports these corresponding method claims.
Regarding claims 15-17: the apparatus discussed in claims 5-7 above also supports these corresponding rendering method claims.
Allowable Subject Matter
Claims 8-10 and 18-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID L TON whose telephone number is (571)270-7839. The examiner can normally be reached Monday - Friday 8:00 AM - 6:00 PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin, can be reached at (571)272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID L TON/ Primary Examiner, Art Unit 2695