DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 & 9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Norris et al., US Patent Pub. 20150373477 A1.
Re Claim 1, Norris et al. discloses a method of virtualized spatial audio, comprising: tracking, by a motion sensor, a listener's movement (para 0057: a motion capture system/sensor is used to track the location and head orientation of the listener; para 0279: motion detectors/trackers/sensors); obtaining location information associated with the listener's movement, wherein the location information includes distance information and direction information regarding the listener relative to the motion sensor (claim 3: the system can determine distances/directions for virtual sound adjustment (para 0038), whereby the distances/directions are determined by the motion detector/tracker/sensor of paras 0057 & 0279); and producing virtual sound adaptively based on the location information associated with the listener's movement (fig. 2: 220; paras 0058-0060: the virtual sound source/output is adjusted based on the listener's movements (tracked by the motion tracker/sensor/detector of paras 0057 & 0279) so that the virtual sound source/output is constantly perceived to emanate from the same location regardless of the listener's movement/distance/direction).
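For orientation only, the track-then-adapt loop described in the cited passages can be sketched as follows. This is an illustrative sketch, not language from the claims or from Norris; the names (`track_listener`, `adapt_gain`) and the inverse-distance gain model are assumptions made for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class Location:
    distance: float  # meters from the motion sensor (illustrative units)
    azimuth: float   # radians, direction of the listener relative to the sensor

def track_listener(sensor_reading: tuple) -> Location:
    """Convert a raw (x, y) motion-sensor reading into distance/direction
    information, i.e. the 'location information' of the claim."""
    x, y = sensor_reading
    return Location(distance=math.hypot(x, y), azimuth=math.atan2(y, x))

def adapt_gain(loc: Location, ref_distance: float = 1.0) -> float:
    """Adapt the virtual-sound output so the source is perceived at a fixed
    spot: here, a simple inverse-distance gain compensation."""
    return ref_distance / max(loc.distance, 1e-6)

loc = track_listener((3.0, 4.0))
gain = adapt_gain(loc)  # distance 5.0 -> compensating gain 0.2
```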
Claim 9 has been analyzed and rejected according to claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-6, 8, 10-14 & 16 are rejected under 35 U.S.C. 103 as being unpatentable over Norris et al., US Patent Pub. 20150373477 A1, as applied to claim 1 above, in view of Seldess, US Patent Pub. 20210112365 A1. (The Seldess reference is cited in the IDS filed 12/28/2025.)
Re Claim 2, Norris et al. discloses the method according to claim 1, but fails to explicitly disclose wherein the producing virtual sound adaptively based on the location information comprises: decoding audio material into multi-channel signals; merging the multi-channel signals into channels of left, center, and right path and outputting signals of the left path, center path, and right path; processing the signals of the left path and the right path by spatial filters, and outputting the processed signals of the left path and the right path, wherein the spatial filters are adaptively adjusted based on the location information; and producing the virtual sound based on the processed signals of the left path and the right path and the signals of the center path, which are not processed by the spatial filters. However, Seldess discloses a system that teaches the concept of a decoder able to decode audio signals into a multi-channel audio signal associated with speaker locations at various locations based on listening position for surround sound (Seldess, para 0043: the multichannel signal includes left, right & center channels; claim 1), wherein the system applies spatial processing to the left and right decoded channels (Seldess, paras 0008, 0011: spatially process/filter the left and right channels with respect to listener location), whereby the center channel can bypass processing (Seldess, para 0039). It would have been obvious to modify the Norris et al. system such that it includes a decoder able to process the audio signal into multichannel signals with left, right, and center channels, along with spatial processing to process/filter the channels based on the listener's location, where the center channel can bypass processing, as taught in Seldess, for the purpose of providing surround sound to the moving listener of Norris et al.
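The decode/merge/filter pipeline with a center-channel bypass recited above can be sketched minimally as follows. This is an assumed illustration of the general technique, not an implementation from either reference; `spatial_filter` stands in for whatever location-dependent filtering the spatial processor performs.

```python
import numpy as np

def spatial_filter(sig: np.ndarray, loc_gain: float) -> np.ndarray:
    """Illustrative location-adaptive filter: reduced here to a gain that
    would be updated as the listener's location information changes."""
    return sig * loc_gain

def render(multichannel: dict, left_gain: float, right_gain: float):
    """Merge decoded channels into left/center/right paths; spatially
    filter only the left and right paths, letting center bypass."""
    left = spatial_filter(multichannel["L"], left_gain)
    right = spatial_filter(multichannel["R"], right_gain)
    center = multichannel["C"]  # center path bypasses spatial processing
    return left, center, right

mc = {"L": np.ones(4), "R": np.ones(4), "C": np.full(4, 2.0)}
l, c, r = render(mc, 0.5, 0.25)
```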
Re Claim 3, the combined teachings of Norris et al. and Seldess disclose the method according to claim 2, wherein the signals of the center path are directly steered to one or more speakers in front of the listener based on the location information (Seldess, para 0102: the center channel output receives spatial processing, wherein spatial processing aims to compensate based on listener location, thus implying that the center channel output accounts for listener location).
Re Claim 4, the combined teachings of Norris et al. and Seldess disclose the method according to claim 2, wherein before the merging, the multi-channel signals are optionally processed by Head Related Transfer Function (HRTF) filters to produce binaural signals (Seldess, paras 0041, 0080, claim 1: HRTF), wherein center-channel signals in the multi-channel signals are not processed by the HRTF filters (Seldess, para 0039: the center channel bypasses processing).
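The HRTF step recited here (binauralize every channel except center) is commonly realized by convolving each channel with a left-ear and right-ear head-related impulse response. The sketch below assumes that standard formulation; the function names and the per-channel HRIR dictionary are illustrative, not taken from the references.

```python
import numpy as np

def apply_hrtf(mono: np.ndarray, hrir_l: np.ndarray, hrir_r: np.ndarray):
    """Binauralize one channel by convolving with left/right HRIRs."""
    return np.convolve(mono, hrir_l), np.convolve(mono, hrir_r)

def binauralize(channels: dict, hrirs: dict):
    """Sum binauralized channels; center-channel signals skip HRTF
    filtering entirely, consistent with the claim language."""
    out_l = out_r = 0.0
    for name, sig in channels.items():
        if name == "C":
            continue  # center channel is not processed by the HRTF filters
        l, r = apply_hrtf(sig, *hrirs[name])
        out_l = out_l + l
        out_r = out_r + r
    return out_l, out_r

chans = {"L": np.array([1.0, 0.0]), "C": np.array([5.0, 5.0])}
hrirs = {"L": (np.array([1.0]), np.array([0.5]))}
out_l, out_r = binauralize(chans, hrirs)
```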
Re Claim 5, the combined teachings of Norris et al and Seldess disclose the method according to claim 4, further comprising: merging the binaural signals into channels of left and right path; processing the merged signals of the left path and the right path by spatial filters (Seldess, paras 0008, 0011: spatial process/filter left, right channels with respect to listener location), and generating the processed signals of the left path and the right path, wherein the spatial filters are adaptively adjusted based on the location information (Seldess, paras 0008, 0011: spatial process/filter left, right channels with respect to listener location); and producing the virtual sound based on the processed signals of the left path and the right path (Seldess, paras 0008, 0011: spatial process/filter left, right channels with respect to listener location) and the center-channel signals in the multi-channel signals (Seldess, para 0043: multichannel includes left, right & center channels; claim 1).
Re Claim 6, the combined teachings of Norris et al. and Seldess disclose the method according to claim 2, but fail to explicitly disclose wherein the spatial filters comprise left spatial filters and right spatial filters, and both the number of the left spatial filters and the number of the right spatial filters correspond to the number of speakers for producing virtual sound. It would have been obvious to modify the spatial processing of Seldess (Seldess, paras 0008, 0011) to create a left spatial filter for the left enhanced channel signal and a right spatial filter for the right enhanced channel signal for the purpose of efficiently carrying out spatial processing.
Re Claim 8, the combined teachings of Norris et al. and Seldess disclose the method according to claim 2, wherein the spatial filters utilize at least one of beamforming and cross-talk cancellation (Seldess, para 0008: spatial processing includes crosstalk processing/cancellation).
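As background on the cross-talk cancellation feature, the textbook two-speaker formulation inverts the 2x2 acoustic transfer matrix so that each ear receives only its intended signal. The sketch below assumes that standard formulation with made-up path gains; it is not drawn from either reference.

```python
import numpy as np

# Illustrative crosstalk canceller: H maps the two speaker signals to the
# two ear signals (ipsilateral gain 1.0, contralateral "crosstalk" gain
# 0.4 -- assumed values). Driving the speakers with inv(H) @ desired
# cancels the crosstalk paths.
H = np.array([[1.0, 0.4],
              [0.4, 1.0]])
C = np.linalg.inv(H)                 # crosstalk-cancellation filter matrix

desired_ears = np.array([1.0, 0.0])  # signal intended for the left ear only
speakers = C @ desired_ears
ears = H @ speakers                  # what actually reaches the ears
```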
Claim 10 has been analyzed and rejected according to claim 2.
Claim 11 has been analyzed and rejected according to claim 3.
Claim 12 has been analyzed and rejected according to claim 4.
Claim 13 has been analyzed and rejected according to claim 5.
Claim 14 has been analyzed and rejected according to claim 6.
Claim 16 has been analyzed and rejected according to claim 8.
Claims 7 & 15 are rejected under 35 U.S.C. 103 as being unpatentable over Norris et al., US Patent Pub. 20150373477 A1.
Re Claim 7, Norris et al. discloses the method according to claim 1, but fails to explicitly disclose wherein the motion sensor is at least one of a TOF sensor, a radar, and an ultrasound detector. Official Notice is taken that both the concept and advantages of utilizing a radar detector are well known to one of ordinary skill in the art. Since Norris et al. teaches the utilization of different motion detectors that work hand in hand with radar detectors, such as laser detectors, radio frequency detectors, etc. (Norris et al., para 0279), it would have been obvious for one of ordinary skill in the art to modify the motion detector of Norris et al. such that it utilizes a radar detector for the purpose of using motion detectors with improved range.
Claim 15 has been analyzed and rejected according to claim 7.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE C MONIKANG whose telephone number is (571) 270-1190. The examiner can normally be reached Mon.-Fri., 9 AM-5 PM, with alternate Fridays off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn R Edwards can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GEORGE C MONIKANG/Primary Examiner, Art Unit 2692 03/04/2026
/CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692