The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-16 and 18-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Satongar et al. (US 2018/0091922).
Regarding claims 1, 20, and 21, Satongar discloses a method for determining sound field rotations; an apparatus comprising one or more processors 402 and a memory 404 storing software configured to be executed by the one or more processors 402, the software including instructions for performing the method; and one or more non-transitory media having software stored thereon, the software configured to be executed by the one or more processors 402 and including the instructions (see figs. 3-6, for example, and para. 0036), the method comprising:
(a) determining an activity situation of an activity of a user (e.g., standing with minimal movement or running with significant movement), wherein the activity situation comprises a characterization of a type of movement of the user while engaged in the activity, e.g., standing with minimal movement (static use case) characterized by “reference angular change 716 within range of motion 714, e.g., less than 20 degrees in either direction,” or running with significant movement (dynamic use case) characterized by “reference angular change 718 outside of the predetermined range of motion 714, e.g., more than 20 degrees in either direction” (see fig. 6, step 608, fig. 7, and para. 0050-0052);
(b) determining a user head orientation (e.g., device orientation data 504 corresponding to a device direction 704) using at least one sensor 410 of one or more sensors (see fig. 6, step 604, fig. 7, and para. 0047);
(c) determining a direction of interest based on the activity situation and the user head orientation. See figs. 7 and 9A-9B, for example: if it is determined that the user is standing with minimal movement (static use case), the reference direction 702 is not significantly changed and a virtual sound source 708 is not automatically relocated. By contrast, see figs. 7 and 13A-13C, for example: if it is determined that the user is running with significant movement (dynamic use case), the reference direction 702 is significantly changed and the virtual sound source 708 is automatically relocated; and
(d) determining a rotation of a sound field used to present audio objects via headphones 108 based on the direction of interest (see fig. 6, step 610, and para. 0053; see also, for example, figs. 7 and 13A-13C, showing that the sound field of a virtual sound source 708 is automatically rotated to the right as claimed).
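For clarity of the record, the static/dynamic distinction cited above (fig. 7, para. 0050-0052) may be sketched as follows. This Python sketch is purely illustrative and is not code from the Satongar reference; the function names, the choice of a signed angle representation, and the relocation arithmetic are the examiner's assumptions for exposition only. Only the ±20 degree range of motion is drawn from the cited paragraphs.

```python
# Predetermined range of motion drawn from the cited disclosure
# (Satongar, para. 0050-0052): +/- 20 degrees in either direction.
RANGE_OF_MOTION_DEG = 20.0

def classify_use_case(reference_angular_change_deg: float) -> str:
    """Classify the activity situation as in fig. 7: an angular change
    within the range of motion indicates a static use case (e.g.,
    standing with minimal movement); outside it, a dynamic use case
    (e.g., running with significant movement)."""
    if abs(reference_angular_change_deg) <= RANGE_OF_MOTION_DEG:
        return "static"
    return "dynamic"

def sound_field_rotation(reference_direction_deg: float,
                         head_orientation_deg: float,
                         reference_angular_change_deg: float) -> float:
    """Return the rotation (degrees) applied to the sound field.

    In the static case the reference direction 702 is unchanged and the
    virtual sound source 708 is not relocated (figs. 9A-9B); in the
    dynamic case the reference direction follows the user's movement and
    the source is relocated accordingly (figs. 13A-13C)."""
    if classify_use_case(reference_angular_change_deg) == "static":
        direction_of_interest = reference_direction_deg
    else:
        direction_of_interest = (reference_direction_deg
                                 + reference_angular_change_deg)
    # Rotation that keeps the source fixed relative to the direction
    # of interest as the head turns.
    return direction_of_interest - head_orientation_deg
```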
Regarding claim 2, steps (a)-(d) are repeated such that the rotation of the sound field is updated over time based on changes in the activity situation of the user and the user head orientation. See figs. 5 and 6, for example, which teach that the sound field is updated over time as claimed.
Regarding claim 3, the activity situation comprises at least one of: walking, running, non-walking and non-running movement, or minimal movement. For example, Satongar discloses running with significant movement (dynamic use case), characterized by reference angular change 718 outside of the predetermined range of motion 714, e.g., more than 20 degrees in either direction.
Regarding claim 4, the activity situation comprises walking or running, and wherein the direction of interest is determined based on the direction in which the user is walking or running. See, for example, figs. 13A-13C, regarding running with significant movement (dynamic use case), wherein the direction of interest is determined based on the direction 902 in which the user is running.
Regarding claims 5 and 6, the activity situation comprises non-walking and non-running movement (e.g., standing still and gazing straight ahead). The direction of interest is determined based on a direction the user has been facing within a predetermined previous time window. For example, if the user is standing still and gazing straight ahead for 0.2 to 3 seconds, then the direction of interest is determined based on the direction the user has been facing within this predetermined previous time window as claimed. See figs. 9A-9B, which teach that if it is determined that the user is standing still and gazing straight ahead (static use case), for any amount of time such as 0.2 to 3 seconds, then the reference direction 702 is not significantly changed and a virtual sound source 708 is not automatically relocated.
Regarding claims 7-9, the activity situation comprises minimal movement (e.g., standing with minimal movement). The direction of interest is determined based on a direction the user has been facing within a predetermined previous time window. For example, if the user is standing with minimal movement for 3 to 10 seconds, then the direction of interest is determined based on the direction the user has been facing within this predetermined previous time window as claimed. See figs. 9A-9B, which teach that if it is determined that the user is standing with minimal movement (static use case), for any amount of time such as 3 to 10 seconds, then the reference direction 702 is not significantly changed and a virtual sound source 708 is not automatically relocated.
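The time-window reading applied to claims 5-10 can be sketched as follows. This Python sketch is illustrative only and does not appear in the reference; the class name, the choice of a 3-second window, and the 20-degree tolerance are the examiner's assumptions for exposition, chosen from within the claimed ranges (0.2-10 seconds; 2-20 degrees).

```python
from collections import deque

# Hypothetical parameters for illustration only, chosen from within the
# claimed ranges: a previous time window (cf. 0.2-3 s and 3-10 s) and a
# tolerated threshold of movement (cf. 2-20 degrees).
WINDOW_SEC = 3.0
TOLERANCE_DEG = 20.0

class FacingHistory:
    """Track head-orientation samples and report a direction of interest
    when the user has faced the same direction, within a tolerated
    threshold of movement, for the whole previous time window."""

    def __init__(self):
        self._samples = deque()  # (timestamp_s, orientation_deg)

    def add(self, timestamp: float, orientation_deg: float) -> None:
        self._samples.append((timestamp, orientation_deg))
        # Drop samples older than the predetermined previous time window.
        while self._samples and timestamp - self._samples[0][0] > WINDOW_SEC:
            self._samples.popleft()

    def direction_of_interest(self):
        """Mean orientation over the window if all movement stayed within
        the tolerated threshold; otherwise None (no update, mirroring the
        static use case in which the reference direction is unchanged)."""
        if not self._samples:
            return None
        angles = [a for _, a in self._samples]
        if max(angles) - min(angles) > TOLERANCE_DEG:
            return None
        return sum(angles) / len(angles)
```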
Regarding claim 10, the direction the user has been facing is determined using a tolerated threshold of movement, and wherein the tolerated threshold of movement is within a range of 2 degrees to 20 degrees. See fig. 7 and para. 0051-0052, regarding standing with minimal movement (static use case) characterized by “reference angular change 716 within range of motion 714, e.g., less than 20 degrees in either direction”, or running with significant movement (dynamic use case) characterized by “reference angular change 718 outside of the predetermined range of motion 714, e.g., more than 20 degrees in either direction”.
Regarding claim 11, the rotation of the sound field involves an incremental rotation toward the direction of interest. See figs. 10-12, and para. 0070, regarding “adjustments to source direction 710 of virtual sound source 708 may match movements of user 706 more naturally”.
Regarding claim 12, the incremental rotation is based at least in part on angular velocity measurements obtained from a user device. See para. 0068, regarding “the rate of device angular change may be analyzed in terms of angle versus time. In an embodiment, a rate 1202 of device angular change corresponds to the amount of device angular change per unit of time”.
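The incremental-rotation reading applied to claims 11 and 12 can be sketched as follows. This Python sketch is illustrative only; the function signature and the rate-limited stepping scheme are the examiner's assumptions, not code from the reference, which discloses only that the rate of device angular change (para. 0068) informs adjustments that “match movements of user 706 more naturally” (para. 0070).

```python
def incremental_rotation(current_deg: float,
                         target_deg: float,
                         angular_velocity_deg_s: float,
                         dt_s: float) -> float:
    """Step the sound-field rotation incrementally toward the direction
    of interest. The per-update step is limited in proportion to the
    measured rate of angular change, so the source direction converges
    gradually rather than snapping to the target."""
    max_step = abs(angular_velocity_deg_s) * dt_s
    error = target_deg - current_deg
    # Clamp the step so one update never overshoots the limit or target.
    step = max(-max_step, min(max_step, error))
    return current_deg + step
```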
Regarding claim 13, the user device 106 is substantially static in movement with respect to the headphones 108 worn by the user. See fig. 1, for example.
Regarding claim 14, the user device 106 provides audio content to the headphones 108. See para. 0034.
Regarding claim 15, the activity situation of the user is determined based at least upon sensor data obtained from one or more sensors 410 disposed in or on headphones 108 worn by the user. See para. 0041.
Regarding claim 16, the user head orientation is determined using at least one sensor 410 in or on headphones 108 worn by the user.
Regarding claim 18, Satongar further discloses, after (d), causing the audio objects to be rendered (e.g., ‘audio output … to render the virtual sound source’ in step 610) based on the determined rotation of the sound field.
Regarding claim 19, the rendered audio objects are caused to be presented via the headphones 108.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The references cited on the PTO-892 each disclose a method and apparatus for determining sound field rotations.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL W HUBER whose telephone number is (571)272-7588. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen, can be reached at telephone number 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to the USPTO patent electronic filing system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via a variety of formats. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/InterviewPractice.
/PAUL W HUBER/Primary Examiner, Art Unit 2691
pwh
January 18, 2026