DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Watson (US 2016/0228771 A1), in view of Killian (US 2023/0089767 A1).
Regarding claim 1, Watson teaches a “method of supporting walking in a virtual environment,” in that Watson discloses a system for interactive gameplay in which a head-mounted display (HMD) is worn by the user and a vibration device, or bone conduction headset, is interfaced with the HMD to provide supplemental audio to stimulate the vestibular system of the user (see Watson, figure 1, unit 102, figure 6A, and ¶ 0027 and 0042-0045). Watson does not appear to teach the claimed step of analyzing the frequency of the sound band, or the step of “separating the sound band of the sound source of the content on the basis of the analyzed frequency and a reference frequency, and transmitting a sound by a bone conduction method to generate a first output signal and a second output signal that have different frequency bands”.
Killian teaches methods and technology to enhance auditory percepts with vestibular stimulation (see Killian, abstract and ¶ 0014-0015), where the vestibular stimulation is used to provide a beneficial balance percept (see Killian, ¶ 0042). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Watson with the teachings of Killian for the purpose of providing audio output to individuals affected by hearing loss while improving balance and/or reducing motion sickness associated with virtual reality content (see Watson, ¶ 0043-0045 in view of Killian, ¶ 0012, 0026, and 0042).
Therefore, the combination of Watson and Killian makes obvious “the method comprising:
analyzing, for a sound source of content in the virtual environment, a frequency for a sound band of the sound source” (see Watson, ¶ 0027-0028, 0032, and 0043, in view of Killian, ¶ 0022-0023 and figure 1, units 110 and 120);
“separating the sound band of the sound source of the content on the basis of the analyzed frequency and a reference frequency, and transmitting a sound by a bone conduction method to generate a first output signal and a second output signal that have different frequency bands” (see Killian, ¶ 0024-0026, where a fundamental or low frequency signal is extracted from the sound signal to generate the vestibular stimulation sound path and the remaining sound signal is passed through a cochlear stimulation path); and
“outputting the first output signal and the second output signal at different positions through a bone conduction output unit during execution of the content” (see Killian, ¶ 0025-0026 and figure 1, units 130 and 140, where the cochlear stimulator is a bone conduction component and the vestibular stimulator is a bone conduction component).
Regarding claim 4, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “method of claim 1, wherein the generating of the first output signal and the second output signal comprises:
generating, as the first output signal, a sound band having a frequency band smaller than a low band reference frequency of the reference frequency among a whole sound band of the sound source of the content” (see Killian, ¶ 0024 and 0036-0038); and
“generating, as the second output signal, a sound band having a frequency band greater than a high band reference frequency of the reference frequency among the whole sound band of the sound source of the content” (see Killian, ¶ 0024).
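For illustration of the mapping above, the claimed band separation, generating a first output signal from the band below a low-band reference frequency and a second output signal from the band above a high-band reference frequency, can be sketched as follows. The function name, the reference-frequency values, and the FFT-based approach are assumptions for illustration only; they are not taken from the claims or from the cited references.

```python
import numpy as np

def split_bands(signal, fs, low_ref=250.0, high_ref=1000.0):
    """Separate a sound signal around reference frequencies (values are
    illustrative): the first output keeps content below the low-band
    reference, the second keeps content above the high-band reference."""
    spectrum = np.fft.rfft(signal)                    # analyze the frequency content
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    first = np.fft.irfft(np.where(freqs < low_ref, spectrum, 0.0), n=len(signal))
    second = np.fft.irfft(np.where(freqs > high_ref, spectrum, 0.0), n=len(signal))
    return first, second

# Example: a 100 Hz tone plus a 3 kHz tone, split at 250 Hz / 1 kHz
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 3000 * t)
first, second = split_bands(x, fs)
```

Under this sketch, the first output would carry only the low-frequency (e.g., vestibular-stimulation) band and the second only the high-frequency band, consistent with the two stimulation paths described in Killian, ¶ 0024-0026.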
Claims 2-3 and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Watson and Killian as applied to claim 1 above, and further in view of Chen (US 2008/0107300 A1).
Regarding claim 2, see the preceding rejection with respect to claim 1 above. The combination of Watson and Killian makes obvious a bone conduction headband or device (see Watson, ¶ 0045 and see Killian, ¶ 0026), but the combination does not appear to teach or reasonably suggest the features where “the bone conduction output unit is provided with a first bone conduction output unit and a second bone conduction output unit that are positioned spaced apart from each other along forward and rearward directions of a head”.
Chen teaches a headset for reproduction of 5.1 channel surround stereo using bone conduction speakers (see Chen, abstract, figures 1A-1B, and ¶ 0023-0025). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Watson and Killian with the teachings of Chen for the purpose of improving multi-channel reproduction with a smaller headset for virtual reality content (see Watson, ¶ 0043 in view of Chen, ¶ 0005 and 0007).
Therefore, the combination of Watson, Killian, and Chen makes obvious the “method of claim 1, wherein the bone conduction output unit is provided with a first bone conduction output unit and a second bone conduction output unit that are positioned spaced apart from each other along forward and rearward directions of a head” (see Chen, figures 1A-1B, units 1 and 3, and ¶ 0025, where a first bone conduction component is behind the user’s ear and the second is in front of the user’s ear), and
“wherein the outputting of the first and second output signals comprises:
outputting the first output signal through the first bone conduction output unit installed at a first position; and
outputting the second output signal through the second bone conduction output unit installed at a second position” (see Killian, ¶ 0025-0026 and figure 1, units 130 and 140 in view of Chen, ¶ 0025, where the cochlear stimulator is a bone conduction component and the vestibular stimulator is a bone conduction component).
Regarding claim 3, see the preceding rejection with respect to claim 2 above. The combination makes obvious the “method of claim 2, wherein the first output signal is a signal for vestibular organ stimulation” (see Watson, ¶ 0043-0045 and Killian, ¶ 0036-0038).
Regarding claim 5, see the preceding rejection with respect to claim 2 above. The combination makes obvious the “method of claim 2, wherein the first position is a position corresponding to a left and right mastoid of a user” (see Chen, figures 1A-1B, unit 3, and ¶ 0025, where a first bone conduction component is behind the user’s ear, such as disposed on the mastoid on each side of the user’s head).
Regarding claim 6, see the preceding rejection with respect to claim 2 above. The combination makes obvious the “method of claim 2, wherein the second position is a position corresponding to a left and right condyle of a user” (see Chen, figures 1A-1B, unit 1, and ¶ 0025, where a second bone conduction component is in front of the user’s ear, such as near or on the mandibular condyle on each side of the user’s head).
Regarding claim 7, see the preceding rejection with respect to claim 5 above. The combination makes obvious the “method of claim 5, wherein the outputting of the first output signal through the first bone conduction output unit installed at the first position comprises:
playing a whole sound band of the sound source of the content through the first bone conduction output unit” (see Killian, ¶ 0024 in view of Chen, figures 1A-1B, unit 1, and ¶ 0025, where a bone conduction component is in front of the user’s ears for outputting a front left and right sound); and
“playing the whole sound band of the sound source of the content through an air stimulation vibration output unit at a third position corresponding to an ear of the user” (see Killian, ¶ 0025 in view of Chen, figures 1A-1B, unit 1, and ¶ 0025, wherein bone conduction and vibrations sent through the air can help the user hear the front left and right sounds).
Regarding claim 8, see the preceding rejection with respect to claim 5 above. The combination makes obvious the “method of claim 5, wherein at least one of the first output signal or the second output signal may be adjusted on the basis of at least one of a shape and a size of a skull of the user or a structure of a vestibular labyrinth” (see Killian, ¶ 0026, wherein the bone conduction component for vestibular stimulation is sized and shaped for appropriate output).
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Watson, in view of Chen.
Regarding claim 9, Watson teaches:
“A virtual content execution device comprising:
a body unit configured to be worn by a user” (see Watson, figure 1, unit 102, and ¶ 0027 and 0044-0045, where a head-mounted display (HMD) is worn by the user and a vibration, or bone conduction, device is interfaced with the HMD to provide supplemental audio to stimulate the vestibular system of the user, such that a bone conduction audio headset, or vibration device, reads on a body unit worn by a user);
“a display unit connected to the body unit and configured to output image information on content” (see Watson, figure 1, unit 102, and ¶ 0027 and 0045, where the vibration device is interfaced with the HMD); and
“a control unit configured to control an output of the image information” (see Watson, figure 1, units 106 and ¶ 0028).
Watson does not appear to teach that the body unit (e.g., the bone conduction audio headset, or vibration device) comprises “a first bone conduction output unit and a second bone conduction output unit that are disposed on two opposite sides with respect to a specific point of the user”.
Chen teaches a headset for reproduction of 5.1 channel surround stereo using bone conduction speakers (see Chen, abstract, figures 1A-1B, and ¶ 0023-0025). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Watson with the teachings of Chen for the purpose of improving multi-channel reproduction with a smaller headset for virtual reality content (see Watson, ¶ 0043 in view of Chen, ¶ 0005 and 0007).
Therefore, the combination of Watson and Chen makes obvious the features:
“wherein the body unit is provided with a first bone conduction output unit and a second bone conduction output unit that are disposed on two opposite sides with respect to a specific point of the user” (see Chen, figures 1A-1B, units 1 and 3, and ¶ 0025, where a first bone conduction component is behind the user’s ear and the second is in front of the user’s ear), and
“wherein the control unit controls at least one of the first bone conduction output unit or the second bone conduction output unit to output a sound source of the content by a bone conduction method” (see Watson, ¶ 0028 and 0043 in view of Chen, ¶ 0025, where a rear left channel is sent to the first bone conduction unit and a front left channel is sent to the second).
Regarding claim 10, see the preceding rejection with respect to claim 9 above. The combination makes obvious the “virtual content execution device of claim 9, wherein the first bone conduction output unit is disposed in a position corresponding to a mastoid of the user, and the second bone conduction output unit is disposed in a position corresponding to a left and right condyle of the user” (see Chen, figures 1A-1B, units 1 and 3, and ¶ 0025, where a first bone conduction component is behind the user’s ear, such as disposed on the mastoid, and the second is in front of the user’s ear, such as near or on the mandibular condyle).
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Watson and Chen as applied to claim 10 above, and further in view of Killian.
Regarding claim 11, see the preceding rejection with respect to claim 10 above. The combination of Watson and Chen makes obvious the virtual content execution device of claim 10, but does not appear to teach or reasonably suggest receiving “a first output signal and a second output signal having different frequency bands that are separated from a sound source of the content” such that the first and second output signals are output through the first and second bone conduction output units, respectively.
Killian teaches methods and technology to enhance auditory percepts with vestibular stimulation (see Killian, abstract and ¶ 0014-0015), where the vestibular stimulation is used to provide a beneficial balance percept (see Killian, ¶ 0042). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Watson and Chen with the teachings of Killian for the purpose of providing audio output to individuals affected by hearing loss while improving balance and/or reducing motion sickness associated with virtual reality content (see Watson, ¶ 0043-0045 and Chen, ¶ 0025, in view of Killian, ¶ 0012, 0026, and 0042).
Therefore, the combination of Watson, Chen, and Killian makes obvious the “virtual content execution device of claim 10, wherein the control unit receives a first output signal and a second output signal having different frequency bands that are separated from a sound source of the content, and outputs the first output signal through the first bone conduction output unit and outputs the second output signal through the second bone conduction output unit while outputting the image information” (see Watson, ¶ 0028 and 0043-0045 and Chen, ¶ 0025, in view of Killian, ¶ 0024-0026, where a fundamental or low frequency signal is extracted from the sound signal to generate the vestibular stimulation sound path and the remaining sound signal is passed through a cochlear stimulation path).
Regarding claim 12, see the preceding rejections with respect to claims 9 and 11 above. Similar to claims 9 and 11, Watson does not appear to teach the features “to separate a sound band of the sound source of the received content on the basis of the analyzed frequency band and a reference frequency to generate a first output signal and a second output signal for vestibular organ stimulation” and does not appear to teach the features of “a first bone conduction output unit and a second bone conduction output unit configured to each output the first output signal and the second output signal that have been generated”.
Chen teaches a headset for reproduction of 5.1 channel surround stereo using bone conduction speakers (see Chen, abstract, figures 1A-1B, and ¶ 0023-0025). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Watson with the teachings of Chen for the purpose of improving multi-channel reproduction with a smaller headset for virtual reality content (see Watson, ¶ 0043 in view of Chen, ¶ 0005 and 0007). However, the combination of Watson and Chen does not appear to teach or reasonably suggest the features “to separate a sound band of the sound source of the received content on the basis of the analyzed frequency band and a reference frequency to generate a first output signal and a second output signal for vestibular organ stimulation”.
Killian teaches methods and technology to enhance auditory percepts with vestibular stimulation (see Killian, abstract and ¶ 0014-0015), where the vestibular stimulation is used to provide a beneficial balance percept (see Killian, ¶ 0042). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Watson and Chen with the teachings of Killian for the purpose of providing audio output to individuals affected by hearing loss while improving balance and/or reducing motion sickness associated with virtual reality content (see Watson, ¶ 0043-0045 and Chen, ¶ 0025, in view of Killian, ¶ 0012, 0026, and 0042).
Therefore, the combination of Watson, Chen, and Killian makes obvious:
“A system for supporting walking in a virtual environment, comprising:
a content receiving unit configured to receive content”;
“a control unit configured to analyze a frequency band for a sound band of a sound source of the received content, and to separate a sound band of the sound source of the received content on the basis of the analyzed frequency band and a reference frequency to generate a first output signal and a second output signal for vestibular organ stimulation” (see Watson, ¶ 0028 and 0043-0045 and Chen, ¶ 0025, in view of Killian, ¶ 0024-0026, where a fundamental or low frequency signal is extracted from the sound signal to generate the vestibular stimulation sound path and the remaining sound signal is passed through a cochlear stimulation path); and
“a first bone conduction output unit and a second bone conduction output unit configured to each output the first output signal and the second output signal that have been generated” (see Chen, figures 1A-1B, units 1 and 3, and ¶ 0025, where a first bone conduction component is behind the user’s ear and the second is in front of the user’s ear),
“wherein the control unit generates, as the first output signal, a sound band having a frequency band smaller than a low band reference frequency of the reference frequency among a whole sound band of the sound source of the content, separates, as the second output signal, a sound band having a frequency band greater than a high band reference frequency of the reference frequency among the whole sound band of the content, and adjusts at least one of the first output signal and the second output signal on the basis of at least one of a shape and a size of a skull of a user or a structure of a vestibular labyrinth” (see Watson, ¶ 0028 and 0043-0045 and Chen, ¶ 0025, in view of Killian, ¶ 0024-0026, where a fundamental or low frequency signal is extracted from the sound signal to generate the vestibular stimulation sound path and the remaining sound signal is passed through a cochlear stimulation path, and see Killian, ¶ 0026, where the bone conduction component for vestibular stimulation is sized and shaped for appropriate output).
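For illustration of the control-unit limitations above, the routing of the two separated signals to the mastoid-side and condyle-side bone conduction units, with a per-user adjustment, can be sketched as follows. The class, field, and key names, and the use of simple gain factors as stand-ins for the claimed skull-shape and vestibular-labyrinth adjustment, are assumptions for illustration only; the claim does not specify how the adjustment is computed.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class UserProfile:
    # Hypothetical stand-ins for the claimed skull shape/size and
    # vestibular-labyrinth structure; the actual adjustment is unspecified.
    skull_gain: float = 1.0
    vestibular_gain: float = 1.0

def route_outputs(first_output: List[float], second_output: List[float],
                  profile: UserProfile) -> Dict[str, List[float]]:
    # Scale each separated signal by the user-specific factor, then route the
    # first (low-band, vestibular-stimulation) signal to the mastoid-side unit
    # and the second (high-band) signal to the condyle-side unit.
    return {
        "first_unit_mastoid": [s * profile.vestibular_gain for s in first_output],
        "second_unit_condyle": [s * profile.skull_gain for s in second_output],
    }

routed = route_outputs([1.0, 2.0], [3.0],
                       UserProfile(skull_gain=0.5, vestibular_gain=2.0))
```

This sketch only models the routing and adjustment relationship recited in the claim; the mapping of each signal to a stimulation path follows the reading of Killian, ¶ 0024-0026, and the unit placement follows the reading of Chen, ¶ 0025, above.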
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Heiman et al. (US 2014/0363033 A1; hereafter Heiman) teaches equalization and power control of bone conduction elements, wherein a first element outputs an acoustical signal and the second element is used to obtain a feedback signal to improve the acoustical output signal (see Heiman, abstract and figures 3 and 5); and
Godfrey (US 2024/0267681 A1) teaches a bone-conductive audio system to improve audio localization for users with hearing impairments (see Godfrey, abstract, figures 1-16, ¶ 0008-0009, 0013-0015, 0021-0023, 0027, 0052, 0074-0076, and 0096-0098).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniel R Sellers whose telephone number is (571)272-7528. The examiner can normally be reached Mon - Fri 10:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fan S Tsang can be reached at (571)272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Daniel R Sellers/Primary Examiner, Art Unit 2694