DETAILED ACTION
In response to this Office action, the examiner respectfully requests that support be shown for language added to any original claims by amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or the drawing figure(s). This will assist the examiner in prosecuting this application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: SOUND FIELD ADJUSTMENT BASED ON LISTENER POSE DATA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 7, 8, 10, 13-18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Layton et al. (US 20060287748 A1).
Regarding claim 1, Layton discloses a device (see figures 2-4) comprising:
a memory (48 of figure 4, paragraph [0073]) configured to store audio data associated with an immersive audio environment (see abstract, paragraphs [0073]-[0075]); and
one or more processors (40 and peripheral devices of 21 of figure 4, see paragraph [0075]) configured to:
obtain a listener pose in the immersive audio environment (paragraph [0046], "a global positioning system locater to determine a current spatial location of a listener and an accelerometer device to determine a current orientation");
obtain an asset associated with the listener pose (such as an audio stream to be output near the location, or URL information; paragraphs [0048] and [0075]-[0076]) and retrieve the asset from the memory or obtain the asset from a remote device; and
generate an output audio signal based on the asset (local sound sources or URL audio streams, paragraphs [0048] and [0075]-[0076], "The track player determination unit 13 utilises the current position information from the system 11 to determine suitable audio tracks to play around the current position of the listener 15").
Layton also discloses wherein “data cache 48 is provided for storing frequently used data”, paragraph [0073], and “Advantageously, the method further comprises the step of downloading said audio content from a computer network”, paragraph [0031].
Although Layton does not expressly disclose determining whether an asset is stored locally and retrieving it based on that determination, the examiner takes official notice that determining whether data is present in local memory, and subsequently retrieving the data from the closer of the local memory and a network location, was well known in the art. Therefore, it would have been obvious to one of ordinary skill in the art to configure the processor of Layton to determine whether an asset associated with the listener pose is stored locally at the memory and, based on the determination, select whether to retrieve the asset from the memory or to obtain the asset from a remote device, for the benefit of using faster local memory when the data is available and retrieving the data from the network when it is not.
Regarding claim 2, Layton discloses wherein the asset corresponds to one or more audio streams associated with the immersive audio environment.
Regarding claim 7, Layton discloses wherein the one or more processors are configured to perform a rendering operation (via 12) on the asset during generation of the output audio signal (paragraphs [0044] to [0052]).
Regarding claim 8, Layton discloses wherein the output audio signal includes an output binaural signal, and wherein the one or more processors are further configured to binauralize an output of the rendering operation to generate the output binaural signal (paragraph [0110]).
Regarding claim 10, Layton/well known combination of claim 1 discloses wherein the one or more processors are configured to, based on a determination that the asset is not stored locally at the memory:
select to obtain the asset from the remote device (the remote device is the only remaining source);
initiate retrieval of the asset from the remote device (necessarily performed to obtain the data);
decode the asset at an audio stream decoder (required for network traffic, paragraph [0031], and streaming audio, paragraph [0075]); and
generate the output audio signal at a renderer (via 12).
Regarding claim 13, Layton discloses wherein the listener pose indicates a position of a listener in the immersive audio environment (paragraphs [0029], [0046], and [0055]).
Regarding claim 14, Layton discloses wherein the listener pose indicates a position of a listener and an orientation of the listener in the immersive audio environment (paragraphs [0029], [0046], and [0055]).
Regarding claim 15, Layton discloses further comprising a pose sensor coupled to the one or more processors, wherein the pose sensor and the one or more processors are integrated within a head-mounted wearable device (paragraphs [0050] and [0051]).
Regarding claim 16, Layton discloses further comprising a modem coupled to the one or more processors and configured to receive the asset from the remote device (paragraph [0109]).
Claim 17 is rejected in an analogous manner to claim 1.
Claim 18 is rejected in an analogous manner to claim 10.
Claim 20 is rejected in an analogous manner to claim 1 in view of paragraph [0073] of Layton.
Allowable Subject Matter
Claims 3-6, 9, 11, 12, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOUGLAS JOHN SUTHERS whose telephone number is (571) 272-0563. The examiner can normally be reached M-F, 8 am to 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DOUGLAS J SUTHERS/ Examiner, Art Unit 2695
/PAUL KIM/ Primary Examiner, Art Unit 2695