DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The term “near” in claims 9 and 19 is a relative term which renders the claims indefinite. The term “near” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 9, 12-15 and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lyren et al. (US 2019/0174249 A1), hereinafter “Lyren.”
As to claim 1, Lyren discloses an apparatus (¶0042 and ¶0175) comprising:
an eyewear frame dimensioned to be worn by a user (¶0042 and ¶0175, Figs. 3A/B. “An electronic device that the listener wears, such as… HMD, electronic glasses…”); and
circuitry coupled to the eyewear frame (¶0037 and ¶0137, Fig. 4. “The processor or sound hardware processing or convolving the sound can be located in one or more electronic devices.”) and configured to:
obtain an audio signal originating from a sound source in an environment of the user (¶0033 and ¶0035-0036, Fig. 1. “Block 100 states process sound so the sound externally localizes as binaural sound to a listener.” “Sound includes, but is not limited to, one or more of stereo sound, mono sound, binaural sound, computer-generated sound, sound captured with microphones, and other sound. Furthermore, sound includes different types including, but not limited to, music, background sound or background noise, human voice, computer-generated voice, and other naturally occurring or computer-generated sound.”);
manipulate an azimuthal angle of the audio signal relative to a location of the sound source in the environment toward a midline feature of the user (¶0049-0050, ¶0053-0058, ¶0069, ¶0133 and Figs. 1, 3A/3B and 4. “FIG. 3B shows an error or difference between the coordinate direction 350 and/or coordinate location 380 of the HRTFs processing the sound and the coordinate direction 340 and/or coordinate location 370 from where the user hears the sound emanating or originating. This difference or error is shown as the azimuth angle error at 390.” “the wearable electronic device (alone or in conjunction with another electronic device) calculates an error of (|θ1−θ2|, |ϕ1−ϕ2|). This error represents a difference between the coordinates (θ1, ϕ1) of the HRTFs that processed the voice of the second user before the reaction of the first user and the coordinates (θ2, ϕ2) of the head orientation while the first user looks at the SLP where the voice of the second user externally localized as binaural sound to the first user. When a difference exists, the wearable electronic device changes the HRTFs processing the voice of the second user to reduce or to eliminate the error of (|θ1−θ2|, |ϕ1−ϕ2|).” “correct or reduce, during the telephone call, the azimuth error by changing the azimuth coordinate of the HRTFs processing the voice of the second user when the azimuth error reaches a predetermined azimuth value.”); and
provide the audio signal for auditory display to the user such that the manipulated azimuthal angle causes the user to perceive the audio signal as originating from the location of the sound source (¶0049-0050, ¶0053-0058, ¶0133 and Figs. 1, 3A/3B and 4. “The processor then addresses the error, such as correcting the error, reducing the error, storing or recording the error, transmitting the error, etc.” The error is corrected and the corrected sound is output to the user.).
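For clarity of the mapping above, the error-correction mechanism cited from Lyren can be illustrated with a minimal sketch. This is an illustration only; the function names, threshold value, and update rule are hypothetical choices of this sketch and are not drawn from Lyren's disclosure.
```python
def hrtf_error(hrtf_az, hrtf_el, perceived_az, perceived_el):
    """Error (|θ1−θ2|, |ϕ1−ϕ2|) between the coordinates (θ1, ϕ1) of the
    HRTFs processing the sound and the direction (θ2, ϕ2) where the
    listener perceives it (cf. azimuth angle error 390 in Fig. 3B)."""
    return abs(hrtf_az - perceived_az), abs(hrtf_el - perceived_el)


def correct_azimuth(hrtf_az, perceived_az, threshold_deg=5.0):
    """Change the azimuth coordinate of the HRTFs processing the sound
    once the azimuth error reaches a predetermined value (cf. ¶0133)."""
    if abs(hrtf_az - perceived_az) >= threshold_deg:
        return perceived_az  # re-select HRTFs at the perceived azimuth
    return hrtf_az
```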
As to claim 2, Lyren discloses one or more sensors coupled to the eyewear frame and configured to detect movement of a head of the user (¶0066. “The WED [wearable electronic device] includes head tracking (such as one or more of an accelerometer, gyroscope, magnetometer, inertial sensor, MEMs sensor, a chip that provides three-axis measurements, etc.) that track head movements or head orientations of the first user.”); and
wherein the circuitry is further configured to manipulate the azimuthal angle of the audio signal to account for the location of the sound source in view of the movement of the head of the user (¶0069, ¶0128 and ¶0133, Figs. 3A/3B. “correct or reduce, during the telephone call, the azimuth error by changing the azimuth coordinate of the HRTFs processing the voice of the second user when the azimuth error reaches a predetermined azimuth value.” “a change in head orientation in degrees of yaw and pitch correlates to a change in localization in degrees of azimuth and elevation respectively, aiding calculation and comparison between movement of the user and adjustment of the convolution of sound.”).
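The cited head-tracking adjustment (yaw correlating to azimuth, pitch to elevation) can be sketched as follows; the wrap-around arithmetic and identifiers are assumptions of this illustration, not Lyren's implementation.
```python
def head_relative_azimuth(source_az_world_deg, head_yaw_deg):
    """Keep a source stable in the environment: as the head yaws one way,
    the rendering azimuth relative to the head moves the opposite way
    (cf. Lyren ¶0128: yaw/pitch correlate to azimuth/elevation)."""
    rel = source_az_world_deg - head_yaw_deg
    return (rel + 180.0) % 360.0 - 180.0  # wrap into (-180, 180]
```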
As to claim 3, Lyren discloses wherein the circuitry is further configured to manipulate the azimuthal angle of the audio signal by at least one of:
compressing the azimuth angle of the audio signal toward the midline feature of the user; or expanding the azimuth angle of the audio signal away from a lateral feature of the user (¶0133 and ¶0135, Fig. 3B. Correcting the error difference between HRTF coordinate location 380 and coordinate location 370, where the user hears the sound, would move the angle toward 370, i.e., toward the user’s nose.).
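The compression/expansion alternative recited in claim 3 amounts to scaling a signed azimuth about the midline; a minimal sketch, in which the clamping range and gain convention are hypothetical:
```python
def warp_azimuth(az_deg, gain):
    """Scale a signed azimuth (0 deg = midline/nose, +/-90 deg = ears):
    gain < 1 compresses the angle toward the midline feature;
    gain > 1 expands it away from the lateral feature (the ear)."""
    return max(-90.0, min(90.0, gain * az_deg))
```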
As to claim 4, Lyren discloses the midline feature of the user comprises a nose or an external occipital protuberance (Fig. 3B. User’s nose aligned with coordinate location 370.); and
the lateral feature of the user comprises an ear (Fig. 3B. User’s ear at speaker 360a.).
As to claim 5, Lyren discloses wherein the circuitry is further configured to manipulate the azimuthal angle of the audio signal by applying a transfer function to the audio signal (¶0054 and ¶0058-0061. “One or more electronic devices execute an action to correct or reduce the errors. By way of example, this action includes one or more of… changing HRTFs processing the sound…” “When a difference exists, the wearable electronic device changes the HRTFs processing the voice of the second user to reduce or to eliminate the error of (|θ1−θ2|, |ϕ1−ϕ2|).” “For example, the wearable electronic device (or an electronic device in communication with the wearable electronic device) repeatedly changes the HRTFs processing the voice of the second user. For instance, these changes continue until the coordinates (θ, ϕ) of the HRTF pair equal or approximate the head orientation (θ2, ϕ2) while the first user looked at the SLP.”).
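Applying a transfer function as cited from Lyren is conventionally done by convolving the signal with the head-related impulse response (HRIR) pair corresponding to the chosen (azimuth, elevation); a generic sketch of that convention, not Lyren's specific implementation:
```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with an HRIR pair (the time-domain form of
    an HRTF) so the output externally localizes at the pair's (az, el)."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])
```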
As to claim 9, Lyren discloses the environment comprises a virtual or augmented environment (¶0036 and ¶0121. “playing a software game (e.g., an AR or VR software game).” “virtual chat room or other virtual location.”); and
the sound source comprises a virtual sound source implemented near the user in the virtual or augmented environment (¶0036, ¶0121, ¶0133 and Fig. 3B. A virtual sound source is implemented near the user.).
Claims 12 and 20 are directed towards substantially the same subject matter as claim 1 and are therefore rejected using the same rationale as claim 1 above.
Claims 13-15 and 19, which depend from claim 12, are rejected using the same rationale as claims 3-5 and 9 above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 6, 10-11 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lyren, as applied to claims 1, 5 and 15 above, in view of Faundez Hoffmann et al. (US 2023/0093585 A1), hereinafter “Faundez.”
As to claim 6, Lyren does not expressly disclose calibrating the transfer function based at least in part on a preference of the user.
Faundez discloses calibrating the transfer function based at least in part on a preference of the user (Faundez, ¶0056. “the transfer function module 250 may determine HRTFs for the user using a calibration process.”).
Lyren and Faundez are analogous art because they are from the same field of endeavor with respect to sound source spatialization.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Lyren’s apparatus to calibrate the transfer function to the user, as taught by Faundez. The motivation would have been to provide a unique HRTF for the user (Faundez, ¶0056).
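Faundez’s calibration (¶0056) can be read as selecting, per user, the HRTF set the user prefers; a schematic sketch in which the rating callback and candidate structure are hypothetical:
```python
def calibrate_hrtfs(candidate_sets, play_and_rate):
    """Render a test sound with each candidate HRTF set and keep the one
    the user rates highest, yielding a user-specific (unique) HRTF."""
    return max(candidate_sets, key=play_and_rate)
```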
As to claim 10, Lyren discloses one or more sensors coupled to the eyewear frame (¶0035 and ¶0037. “Sound includes, but is not limited to… sound captured with microphones.” “…head mounted displays (HMDs), optical head mounted displays (OHMDs), electronic glasses (e.g., glasses that provide augmented reality (AR))…”), and configured to detect at least one of:
sounds produced by one or more additional sound sources in the environment; or audio information representative of an acoustics profile of the environment (¶0035, “Sound includes, but is not limited to… sound captured with microphones.”).
Lyren does not expressly disclose wherein the circuitry is further configured to: generate an acoustics model of the environment based at least in part on the sounds or the audio information; and
manipulate the azimuthal angle of the audio signal to account for the acoustics model of the environment.
Faundez discloses wherein the circuitry is further configured to: generate an acoustics model of the environment based at least in part on the sounds or the audio information (Faundez, ¶0037 and ¶0039, Fig. 1. “The audio controller 150 may receive data from the sensor array (e.g., acoustic sensors 180) and create a mapping of sound sources in the local area of the audio system.” “In some embodiments, the headset 100 may provide for simultaneous localization and mapping (SLAM) for a position of the headset 100 and updating of a model of the local area.”); and
manipulate the azimuthal angle of the audio signal to account for the acoustics model of the environment (Faundez, ¶0022 and ¶0078, Fig. 5. “As such the audio system may spatialize voices with high energy at low frequencies at a high azimuth angle relative to the median sagittal plane (shown in FIG. 5) of the head of the user of the audio system.” “The first angle 506 is at an azimuth greater than a middle boundary 518 for sound source 504.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Lyren’s apparatus to map the local area, as taught by Faundez. The motivation would have been to prevent co-located sound sources (Faundez, ¶0037).
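Faundez’s azimuth placement from spectral content (¶0022, Fig. 5: voices with high energy at low frequencies are spatialized at a high azimuth relative to the median sagittal plane) can be illustrated as follows; the cutoff frequency, energy ratio, and angle values are hypothetical choices of this sketch:
```python
import numpy as np

def choose_azimuth(signal, sample_rate, low_cut_hz=500.0,
                   middle_deg=30.0, wide_deg=60.0):
    """Spatialize sources with relatively high low-frequency energy at a
    higher azimuth from the median plane (cf. Faundez ¶0022, Fig. 5)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    low_ratio = spectrum[freqs < low_cut_hz].sum() / (spectrum.sum() + 1e-12)
    return wide_deg if low_ratio > 0.5 else middle_deg
```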
As to claim 11, Lyren in view of Faundez discloses wherein the additional sound sources comprise at least one of: a transducer coupled to the eyewear frame; or an object located in a room occupied by the user (Lyren, ¶0042. “The speakers are in or on an electronic device that the listener wears, such as headphones, HMD, electronic glasses, smartphone, or another WED, PED, or HPED.” Faundez, ¶0037, Fig. 1A. “The filtered and spatialized virtual sound source is output through the transducer array (e.g., speakers 160).”).
The motivation is the same as for claim 10 above.
Claim 16, which depends from claim 15, is rejected using the same rationale and motivation as claim 6 above.
Allowable Subject Matter
Claims 7-8 and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES K MOONEY whose telephone number is (571) 272-2412. The examiner can normally be reached Monday-Friday, 9:00 AM - 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES K MOONEY/Primary Examiner, Art Unit 2695