DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Application
This communication is responsive to the applicant's preliminary amendment filed 08/09/2024.
Claims 21-40 are pending. Claims 1-20 have been cancelled.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21-40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Arendash (US 2014/0100839).
Regarding claim 21, Arendash discloses an apparatus (see Figs. 8-10) comprising at least one processor (see Figs. 10 and 28); and at least one memory storing instructions that, when executed with the at least one processor (see ¶ [0181], [0182]), cause the apparatus at least to:
obtain one or more values for one or more acoustic parameters for a plurality of positions within a sound space (see ¶ [0099], [0104]-[0108], [0189]);
generate an image representing the sound space, where a value for a pixel of the image represents a value for an acoustic parameter at a position, of the plurality of positions (see ¶ [0104]), corresponding to the pixel such that data in the image comprises information configured to enable the sound space to be rendered (see ¶ [0076], [0082], [0104]); code the image to obtain a coded image (see ¶ [0015]); and provide the coded image and metadata associated with the image (bitmap values corresponding to the image, see ¶ [0076]).
Regarding claim 22, Arendash discloses the apparatus as claimed in claim 21, wherein coding the image comprises compressing the image using at least one image compression process (see ¶ [0015]).
Regarding claims 23 and 25, Arendash discloses the apparatus as claimed in claim 21, wherein the instructions further cause the apparatus to: cause a plurality of representations of the sound space to be combined in a single image, wherein different representations relate to different acoustic parameters; and cause the plurality of representations to be provided in a video format and in a tiled format (see ¶ [0013] and [0114]).
Regarding claims 24 and 26, Arendash discloses the apparatus as claimed in claim 21, wherein the instructions further cause the apparatus to: cause the plurality of representations to be provided in a tiled format; and wherein the image comprises at least one of: a grey scale image, or a colored image with different color channels used to represent different acoustic parameters (see ¶ [0106], [0107], [0116]).
Regarding claim 27, Arendash discloses the apparatus as claimed in claim 21, wherein the one or more acoustic parameters comprise any one or more of: reverberation, audio decay time, arrival time of reflection, horizontal arrival direction of reflection, vertical arrival direction of reflection, relative level of reflection, diffuseness, equalization, direct sound level, direct sound position, late delay, or early reflection latency (see ¶ [0112]).
Regarding claim 28, Arendash discloses the apparatus as claimed in claim 21, wherein the metadata comprises mapping data associated with the image, wherein the mapping data is configured to, at least, describe how the plurality of positions within the sound space are mapped to pixels of the image (see ¶ [0074], [0076], [0079]).
Regarding claim 29, Arendash discloses the apparatus as claimed in claim 21, wherein the metadata comprises mapping data associated with the image, wherein the mapping data comprises information that enables the image to be converted to the one or more values for the one or more acoustic parameters, wherein the mapping data is further configured to describe how the one or more values for the one or more acoustic parameters are mapped to intensity values for the pixels of the image (see ¶ [0007], [0008], [0074], [0076], [0079]).
Regarding claim 30, Arendash discloses the apparatus as claimed in claim 21, wherein the instructions further cause the apparatus to generate the image for different heights of the sound space (see ¶ [0108]).
Regarding claim 31, Arendash discloses the apparatus as claimed in claim 21, wherein the sound space comprises a virtual sound space (see Fig. 21).
Regarding claim 32, Arendash discloses the apparatus as claimed in claim 21, wherein the apparatus comprises one or more transceivers configured to transmit the image to an audio rendering device (see ¶ [0022], [0183]).
Regarding claim 33, Arendash discloses the apparatus as claimed in claim 21, wherein the one or more values are obtained via at least one of: an analysis of audio signals, or a modeling process (see ¶ [0108]).
Method claims 34-38 are rejected for the same reasons as set forth for the rejection of apparatus claims 21-24 and 26, since the apparatus claims perform the same functions as recited in the method claims.
Regarding claim 39, Arendash discloses an apparatus (see Figs. 8-10) comprising at least one processor (see Figs. 10 and 28); and at least one memory storing instructions that, when executed with the at least one processor (see ¶ [0181], [0182]), cause the apparatus at least to:
receive a coded version of an image (see ¶ [0015]) representing a sound space, wherein a value for a pixel of the image is associated with an acoustic parameter at a position (see ¶ [0076], [0082], [0104]), of a plurality of positions within a sound space, corresponding to the pixel such that data in the image comprises information configured to enable the sound space to be rendered (see ¶ [0099], [0104]-[0108], [0189]); receive metadata associated with the image (bitmap values corresponding to the image, see ¶ [0076]); receive an audio signal; decode the coded version of the image to obtain a decoded image; determine, for at least one position of the plurality of positions within the sound space, at least one value for at least one acoustic parameter based, at least partially, on the decoded image; and cause rendering of the received audio signal based, at least partially, on the at least one determined value for the at least one acoustic parameter (see Figs. 8-9, and ¶ [0076], [0078], [0082], [0088]-[0104]).
Method claim 40 is rejected for the same reasons as set forth for the rejection of apparatus claim 39, since apparatus claim 39 performs the same functions as recited in method claim 40.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gibson, Myllyla et al., and Stuart et al. disclose various apparatuses that use virtual images for mixing and processing audio signals.
Issued patent 12,081,963 of parent application 17/262,025 is made of record here as pertinent art to the claimed invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XU MEI whose telephone number is (571) 272-7523. The examiner can normally be reached Monday-Friday, 10:00 am-6:30 pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivian Chin, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XU MEI/ Primary Examiner, Art Unit 2695 02/20/2026