DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner’s Comments
Based upon the most recently submitted amendment and remarks, the phrase "rendering parameter and/or rendering parameter rule type," as recited in the claims, is read as any parameter or collection of parameters used in the decoder.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 26-47 of U.S. Patent No. 11,570,564. Although the claims at issue are not identical, they are not patentably distinct from each other because the application claims a broader recitation of the same system and method claimed in the patent.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-3, 5-8, 10-13, 15-18, and 20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Metcalf (US 7636448 B2).
As per claim 1, Metcalf discloses a method comprising:
defining a grouping comprising at least two of a plurality of elements and defining at least one further element of the plurality of elements outside of the grouping, the plurality of elements within at least one audio scene (the elements in the micro objects grouping and the elements/objects which are in the macro objects grouping which is outside the micro object grouping, para 15); and
defining with respect to the grouping at least one first rendering parameter and/or rendering parameter rule type which is configured to be applied with respect to audio signals associated with the at least two of the plurality of elements (any of the parameters or combinations of parameters supporting the two rendering engines, para 15),
wherein the at least one first rendering parameter and/or rendering parameter rule type is configured to be applied individually with respect to audio signals associated with each of the at least one further element outside of the grouping (the rendering engine applied to macro objects as opposed to micro objects).
As per claim 2, the method as claimed in claim 1, further comprising determining a reference point or a reference area associated with the at least two of the plurality of elements that comprise the grouping (the nearfield and far field designations are based on reference areas of the rendered soundfield).
As per claim 3, the method as claimed in claim 2, wherein the at least one first rendering parameter and/or rendering parameter rule type is configured to be applied
(i) commonly with respect to the at least two of the plurality of elements that comprise the grouping with respect to a distance between a user and the reference point or the reference area (either of the sets of objects in either of the macro or micro groups is based on a nearfield or farfield, each of which is defined by a relative distance from the user/listener and a reference area, as those define each of the nearfield and farfield) and
(ii) individually with respect to respective ones of the at least one further element outside of the grouping (the rendering of the macro objects is commonly applied, while the micro object rendering is individually applied).
As per claim 5, the method as claimed in claim 1, wherein defining with respect to the grouping at least one first rendering parameter and/or rendering parameter rule type comprises defining with respect to the grouping a rendering volume which is configured to be applied commonly with respect to the at least two of the plurality of elements that comprise the grouping to maintain a single common volume level for the grouping of elements (per para 36, the micro elements have a common equalization applied, where an equalization comprises a common set of volumes to be applied to each micro object).
As per claim 6, an apparatus for processing audio signals within at least one audio scene, the apparatus comprising at least one processor and memory storing program code, the memory and the program code (the digital system required to perform the method of the claim 1 rejection) configured, upon execution by the at least one processor, to:
define a grouping comprising at least two of a plurality of elements and define at least one further element of the plurality of elements outside of the grouping, the plurality of elements within the at least one audio scene (per claim 1 rejection); and
define with respect to the grouping at least one first rendering parameter and/or rendering parameter rule type which is configured to be applied with respect to audio signals associated with the at least two of the plurality of elements (per claim 1 rejection),
wherein the at least one first rendering parameter and/or rendering parameter rule type is configured to be applied individually with respect to audio signals associated with each of the at least one further element outside of the grouping (per claim 1 rejection).
As per claim 7, The apparatus as claimed in claim 6, wherein the memory and the program code are further configured, upon execution by the at least one processor, to determine a reference point or a reference area associated with the at least two of the plurality of elements that comprise the grouping (per claim 2 rejection).
As per claim 8, the apparatus as claimed in claim 7, wherein the at least one first rendering parameter and/or rendering parameter rule type is configured to be applied (i) commonly with respect to the at least two of the plurality of elements that comprise the grouping with respect to a distance between a user and the reference point or the reference area and (ii) individually with respect to respective ones of the at least one further element outside of the grouping (per the claim 3 rejection).
As per claim 10, the apparatus as claimed in claim 6, wherein the memory and the program code are configured, upon execution by the at least one processor, to define with respect to the grouping at least one first rendering parameter and/or rendering parameter rule type by defining with respect to the grouping a rendering volume which is configured to be applied commonly with respect to the at least two of the plurality of elements that comprise the grouping to maintain a single common volume level for the grouping of elements (per the claim 5 rejection).
As per claim 11, a method for rendering audio signals associated with a plurality of elements within at least one audio scene, the method comprising:
determining a grouping comprising at least two of the plurality of elements and determining at least one further element of the plurality of elements outside of the grouping (per the claim 1 rejection, noting defining and determining are used interchangeably);
determining with respect to the grouping at least one first rendering parameter and/or rendering parameter rule type (per the claim 1 rejection, noting defining and determining are used interchangeably);
rendering audio signals associated with the at least two of the plurality of elements by applying the at least one first rendering parameter and/or rendering parameter rule type with respect to audio signals associated with the at least two of the plurality of elements (per one of the two different rendering engines per the claim 1 rejection);
rendering audio signals associated with the at least one further element of the plurality of elements by individually applying the at least one first rendering parameter and/or rendering parameter rule type with respect to each of the at least one further element (per the other of the two different rendering engines per the claim 1 rejection); and
combining the rendering of audio signals associated with the at least two of the plurality of elements with the rendering of audio signals associated with the at least one further element of the plurality of elements outside of the grouping (per para 20: nearfield physical synthesis and farfield virtual synthesis may be combined; in addition, the rendered objects from both engines must be combined in order to be played back per the cited playback system per paras 16 and 76).
As per claim 12, the method as claimed in claim 11, further comprising determining a reference point or a reference area associated with the at least two of the plurality of elements that comprise the grouping (per claim 2 rejection).
As per claim 13, the method as claimed in claim 12, wherein the at least one first rendering parameter and/or rendering parameter rule type is configured to be applied (i) commonly with respect to the at least two of the plurality of elements that comprise the grouping with respect to a distance between a user and the reference point or the reference area and (ii) individually with respect to respective ones of the at least one further element outside of the grouping (per the claim 3 rejection).
As per claim 15, the method as claimed in claim 11, wherein applying the at least one first rendering parameter and/or rendering parameter rule type comprises commonly applying with respect to the grouping a rendering volume with respect to the at least two of the plurality of elements that comprise the grouping to maintain a single common volume level for the grouping of elements (per claim 5 rejection).
As per claim 16, an apparatus for rendering audio signals associated with a plurality of elements within at least one audio scene, the apparatus comprising at least one processor and memory storing program code, the memory and the program code configured, upon execution by the at least one processor, to:
determine/define a grouping comprising at least two of the plurality of elements and determine at least one further element of the plurality of elements outside of the grouping (per the claim 1 and 6 rejections);
determine/define with respect to the grouping at least one first rendering parameter and/or rendering parameter rule type (per the claim 1 and 6 rejections);
render audio signals associated with the at least two of the plurality of elements by applying the at least one first rendering parameter and/or rendering parameter rule type with respect to audio signals associated with the at least two of the plurality of elements (per claim 11 rejection);
render audio signals associated with the at least one further element of the plurality of elements outside of the grouping by individually applying the at least one first rendering parameter and/or rendering parameter rule type with respect to each of the at least one further element (per the claim 11 rejection); and
combine the rendering of audio signals associated with the at least two of the plurality of elements with the rendering of audio signals associated with the at least one further element of the plurality of elements outside of the grouping (per claim 11 rejection).
As per claim 17, the apparatus as claimed in claim 16, wherein the memory and the program code are further configured, upon execution by the at least one processor, to determine a reference point or a reference area associated with the at least two of the plurality of elements that comprise the grouping (per claim 2 rejection).
As per claim 18, the apparatus as claimed in claim 17, wherein the at least one first rendering parameter and/or rendering parameter rule type is configured to be applied (i) commonly with respect to the at least two of the plurality of elements that comprise the grouping with respect to a distance between a user and the reference point or the reference area and (ii) individually with respect to respective ones of the at least one further element outside of the grouping (per the claim 3 rejection).
As per claim 20, the apparatus as claimed in claim 16, wherein the memory and the program code are configured, upon execution by the at least one processor, to apply the at least one first rendering parameter and/or rendering parameter rule type by commonly applying with respect to the grouping a rendering volume with respect to the at least two of the plurality of elements that comprise the grouping to maintain a single common volume level for the grouping of elements (per the claim 5 rejection).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 4, 9, 14, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Metcalf (US 7636448 B2) as applied to claims 1, 6, 11, and 16 above, and further in view of Arteaga et al. (US 20180310116 A1).
As per claims 4, 9, 14, and 19, Metcalf discloses the method as claimed in claim 1, but does not specify wherein the audio rendering is six-degrees-of-freedom audio rendering.
Arteaga discloses an audio rendering system and teaches that the system can render virtual audio objects in an audio scene (virtual model of the spatialized audio sources, para. 23) using 6DOF rendering (para. 65, the model is based on 6DOF, which is a free-viewpoint system) in order to create a VR-based audio scene (para. 23, the virtual model of spatialized audio sources). It would have been obvious to one skilled in the art at the time of filing that the virtual objects of Metcalf could be rendered in a well-known format, including a 6DOF format, for the purpose of implementing a VR-based audio scene for the listener.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER KRZYSTAN whose telephone number is 571-272-7498, and whose email address is alexander.krzystan@uspto.gov.
The examiner can usually be reached Monday-Friday, 7:30-4:00 EST.
If attempts to reach the examiner by telephone or email are unsuccessful, the examiner’s supervisor, Fan Tsang can be reached on (571) 272-7547.
The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications.
/ALEXANDER KRZYSTAN/Primary Examiner, Art Unit 2653
Examiner Alexander Krzystan
January 26, 2026