Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Schevciw (US 20210409888, IDS 1/14/25).
Regarding claim 1, Schevciw teaches a device comprising: a memory (memory 110, [0125]) configured to store audio data associated with an immersive audio environment (media file, [0125]); and one or more processors (processor 120, fig 2) configured to: obtain pose data for a listener in the immersive audio environment (pose, [0135]); determine a current listener pose based on the pose data and one or more pose constraints (movement, [0131]); obtain, based on the current listener pose, a rendered asset associated with the immersive audio environment (sound field representation, [0128]); and generate an output audio signal based on the rendered asset (output audio data, [0128]).
Regarding claim 2, Schevciw teaches the device of claim 1, wherein the one or more pose constraints include a human body movement constraint (translation between first and second pose, [0237]).
Regarding claim 3, Schevciw teaches the device of claim 2, wherein the human body movement constraint corresponds to a velocity constraint (velocity, [0135]).
Regarding claim 4, Schevciw teaches the device of claim 2, wherein the human body movement constraint corresponds to an acceleration constraint (acceleration, [0135]).
Regarding claim 5, Schevciw teaches the device of claim 2, wherein the human body movement constraint corresponds to a constraint on a hand or torso pose of the listener relative to a head pose of the listener (head tracker data, [0254]).
Regarding claim 6, Schevciw teaches the device of claim 1, wherein the one or more pose constraints include a boundary constraint that indicates a boundary associated with the immersive audio environment, and wherein the one or more processors are configured to determine the current listener pose such that the current listener pose is limited by the boundary (virtual location within a game environment, [0190]).
Regarding claim 7, Schevciw teaches the device of claim 1, wherein the one or more processors are configured to: obtain a pose based on the pose data; and determine whether the pose violates at least one of the one or more pose constraints (translation between first and second pose exceeding a threshold, [0237]).
Regarding claim 8, Schevciw teaches the device of claim 7, wherein the one or more processors are configured to, based on a determination that the pose does not violate the one or more pose constraints, use the pose as the current listener pose (select representation of the sound field associated with fifth viewpoint even though wearer has not moved, [0237]).
Regarding claim 9, Schevciw teaches the device of claim 7, wherein the one or more processors are configured to, based on a determination that the pose violates at least one of the one or more pose constraints (translation between first and second pose exceeding a threshold, [0237]), determine the current listener pose based on a prior listener pose that did not violate the one or more pose constraints (select representation of the sound field associated with fifth viewpoint even though wearer has not moved, [0237]).
Regarding claim 10, Schevciw teaches the device of claim 9, wherein the one or more processors are configured to, based on the determination that the pose violates at least one of the one or more pose constraints, determine a predicted listener pose based on a prior predicted listener pose associated with the prior listener pose (select representation of the sound field associated with fifth viewpoint even though wearer has not moved, [0237]).
Regarding claim 11, Schevciw teaches the device of claim 7, wherein the one or more processors are configured to, based on a determination that the pose violates at least one of the one or more pose constraints, determine the current listener pose based on an adjustment of the pose to satisfy the one or more pose constraints (transition from one representation to second representation, [0237]).
Regarding claim 12, Schevciw teaches the device of claim 1, wherein the pose data includes first pose data associated with a head of a listener (headset is used as the origin, [0244]) and second pose data associated with at least one of a torso of the listener or a hand of the listener (speed at which the head turns is inherently relative to the torso, [0245]).
Regarding claim 13, Schevciw teaches the device of claim 12, wherein the first pose data is obtained from a first device and wherein the second pose data is received from a second device that is distinct from the first device (first and second device, fig 2).
Regarding claim 14, Schevciw teaches the device of claim 1, wherein, to obtain the rendered asset, the one or more processors are configured to: determine a target asset based on the pose data; and generate an asset retrieval request to retrieve the target asset from a storage location (time-stamped location information 656 (e.g., indicating user positions (e.g., using (x,y,z) coordinates) and time stamps associated with the user positions), [0189]).
Regarding claim 15, Schevciw teaches the device of claim 1, further comprising a pose sensor coupled to the one or more processors, and wherein the pose sensor is configured to provide at least a portion of the pose data (head tracking data from one or more sensors, [0291]).
Regarding claim 16, Schevciw teaches the device of claim 15, wherein the pose sensor and the one or more processors are integrated within a head-mounted wearable device (wearable device has audio FX renderer, fig 7).
Regarding claim 17, Schevciw teaches the device of claim 1, wherein the one or more processors are integrated within an immersive audio player device (wearable device has audio FX renderer, fig 7).
Regarding claim 18, Schevciw teaches the device of claim 1, further comprising a modem coupled to the one or more processors and configured to receive the pose data from a device that includes a pose sensor (modem, [0374]).
Claims 19 and 20 are each substantially similar to claim 1 and are rejected for the same reasons.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kile Blair whose telephone number is (571)270-3544. The examiner can normally be reached M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KILE O BLAIR/Primary Examiner, Art Unit 2691