Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Allowable Subject Matter
The indicated allowability of claims 1-20 is withdrawn in view of the newly discovered reference(s) to Mindlin (US 10206055), Shipes (US 2018/0284882), Kohler (US 2017/0358140), Flaks (US 2012/0093320) and Makino (US 6862356). Rejections based on the newly cited reference(s) follow.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 6-9, 11-15, 17-18 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mindlin (US 10206055) in view of Shipes (US 2018/0284882).
Regarding claim 1, Mindlin teaches A method of presenting an audio signal to a user, the method comprising: receiving a first input audio signal, wherein the first input audio signal (Mindlin figure 2, and col 8 lines 18-24, “devices and systems of configuration 200 may present a sound (i.e., audio data representative of an acoustic sound that may be captured by a microphone and/or rendered by a loudspeaker) to either or both of users 208 and 212 that is generated by a virtual sound source within a virtual space that the users are experiencing”) is associated with a first direction from a virtual object to the user in a virtual environment at a first time (Mindlin figure 2, and col 8 lines 18-24, “devices and systems of configuration 200 may present a sound (i.e., audio data representative of an acoustic sound that may be captured by a microphone and/or rendered by a loudspeaker) to either or both of users 208 and 212 that is generated by a virtual sound source within a virtual space that the users are experiencing”); generating a first output audio signal and a second output audio signal (Mindlin figure 2, and Col 10 lines 1-12, “left-side version 224-L and right-side version 224-R of the sound received from virtual experience provider system 202”), wherein the generating the first output audio signal and the second output audio signal comprises applying a first interaural time delay (ITD) (Mindlin Col 16 lines 1-19, “interaural time difference…delays 606-L and 606-R” and Col 9 lines 4-50, “virtual experience provider system 202 may select an appropriate head-related impulse response from the library and generate the left-side and right-side versions of the sounds, which virtual experience provider system 202 may present to user 208 by transmitting the versions to media player device 206 as a spatialized audio signal represented by a transmission arrow 222 in configuration 200.” Each head-related impulse response has their respective delays, wherein when the 
orientation and location of each avatar changes, a new head-related impulse response is implemented) to the first input audio signal based on the first direction (Mindlin Col 14 lines 33-50, “generated at different spatial locations corresponding to potential orientations of a virtual avatar with respect to a virtual sound source”); presenting the first output audio signal to the user via a first speaker; presenting the second output audio signal to the user via a second speaker (Mindlin figure 2 and Col 10 lines 1-11, “media player device 206 may generate a left-side version 224-L and a right-side version 224-R of the sound received from virtual experience provider system 202. Media player device 206 may present versions 224-L and 224-R of the sound to user 208 by rendering left-side version 224-L for the left ear of user 208 and right-side version 224-R for the right ear of user 208” and Col 8 lines 10-15); receiving a second input audio signal (Mindlin figure 2, and col 8 lines 18-24, “devices and systems of configuration 200 may present a sound (i.e., audio data representative of an acoustic sound that may be captured by a microphone and/or rendered by a loudspeaker) to either or both of users 208 and 212 that is generated by a virtual sound source within a virtual space that the users are experiencing”), wherein the second input audio signal is associated with a second direction from the virtual object to the user in the virtual environment at a second time (Mindlin figure 3 and Col 9 lines 4-50, “media player devices 206 and 210 to track dynamic location changes for both virtual avatars 302 and 304, and, based on the tracked locations of the virtual avatars, identify the orientation (e.g., angles) of the virtual avatars with respect to one another. 
In this type of implementation, virtual experience provider system 202 may also maintain a library of head-related impulse responses such that virtual experience provider system 202 may select an appropriate head-related impulse response from the library and generate the left-side and right-side versions of the sounds, which virtual experience provider system 202 may present to user 208 by transmitting the versions to media player device 206 as a spatialized audio signal represented by a transmission arrow 222 in configuration 200”), wherein: the user has a first orientation with respect to the virtual environment at the first time, the user has a second orientation with the virtual environment at the second time, and the first orientation is different from the second orientation (Mindlin figure 3 and Col 9 lines 4-50, “media player devices 206 and 210 to track dynamic location changes for both virtual avatars 302 and 304, and, based on the tracked locations of the virtual avatars, identify the orientation (e.g., angles) of the virtual avatars with respect to one another”); generating a third output audio signal and a fourth output audio signal (Mindlin figure 2, and Col 10 lines 1-12, “left-side version 224-L and right-side version 224-R of the sound received from virtual experience provider system 202”), wherein the generating the third output audio signal and the fourth audio signal comprises applying a second ITD to the second input audio signal based on the second direction (Mindlin Col 16 lines 1-19, “interaural time difference…delays 606-L and 606-R” and Col 9 lines 4-50, “virtual experience provider system 202 may select an appropriate head-related impulse response from the library and generate the left-side and right-side versions of the sounds, which virtual experience provider system 202 may present to user 208 by transmitting the versions to media player device 206 as a spatialized audio signal represented by a transmission arrow 222 in configuration 
200.” Each head-related impulse response has its respective delays, wherein, when the orientation and location of each avatar change, a new head-related impulse response is implemented); presenting the third output audio signal to the user via the first speaker; and presenting the fourth output audio signal to the user via the second speaker (Mindlin figure 2 and Col 10 lines 1-11, “media player device 206 may generate a left-side version 224-L and a right-side version 224-R of the sound received from virtual experience provider system 202. Media player device 206 may present versions 224-L and 224-R of the sound to user 208 by rendering left-side version 224-L for the left ear of user 208 and right-side version 224-R for the right ear of user 208” and Col 8 lines 10-15); however, Mindlin does not explicitly teach applying a first and second interaural time delay (ITD), and presenting the first and third output audio signal to the user via a first speaker; presenting the second and fourth output audio signal to the user via a second speaker.
Shipes teaches applying a first and second interaural time delay (ITD) (Shipes figure 1, and ¶0054, “The directly propagating and reflections of the virtual sound source may be delayed prior to be output to the user with delays based on directed or reflected propagation path length as the case may be”), and presenting the first and third output audio signal to the user via a first speaker; presenting the second and fourth output audio signal to the user via a second speaker (Shipes figure 1, speaker ports 134 and 136).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Shipes to improve the known method of Mindlin to achieve the predictable result of more realistic spatialized audio (Shipes ¶0005).
Regarding claims 2 and 15, Mindlin in view of Shipes teaches wherein the generating the first output audio signal comprises applying a filter to the first input audio signal (Mindlin figure 8, Col 7 line 42 – End of Col 18, “head-related impulse response” and Col 5 lines 59-61. See also Pertinent art Flaks ¶0003, “The HRTF describes how a given sound wave input (parameterized as frequency and source location) is filtered by the diffraction and reflection properties of the head and pinna, before the sound reaches the eardrum and inner ear”).
Regarding claim 3, Mindlin in view of Shipes teaches wherein the generating the third output audio signal comprises applying a filter to the second input audio signal (Mindlin Col 16 lines 1-19, Each head-related impulse response has their respective delays, wherein when the orientation and location of each avatar changes, a new head-related impulse response is implemented).
Regarding claims 6 and 17, Mindlin in view of Shipes teaches wherein the generating the first output audio signal comprises applying a head-related transfer function (HRTF) to the first input audio signal, wherein the HRTF is determined based on the first direction (Mindlin figure 8, Col 7 line 42 – end of Col 18, “head-related impulse response,” Col 5 lines 59-61, Col 9 lines 22-30, and Col 9 line 55 – Col 10 line 11. See also pertinent art Flaks ¶0003, “The HRTF describes how a given sound wave input (parameterized as frequency and source location) is filtered by the diffraction and reflection properties of the head and pinna, before the sound reaches the eardrum and inner ear”).
Regarding claim 7, Mindlin in view of Shipes teaches wherein the generating the third output audio signal comprises applying a HRTF to the second input audio signal, wherein the HRTF is determined based on the second direction (Mindlin figure 8, Col 7 line 42 – end of Col 18, “head-related impulse response,” Col 5 lines 59-61, Col 9 lines 22-30, and Col 9 line 55 – Col 10 line 11. See also pertinent art Flaks ¶0003, “The HRTF describes how a given sound wave input (parameterized as frequency and source location) is filtered by the diffraction and reflection properties of the head and pinna, before the sound reaches the eardrum and inner ear”).
Regarding claim 8, Mindlin in view of Shipes teaches cross-fading the first output audio signal and the third output audio signal; and cross-fading the second output audio signal and the fourth output audio signal (Mindlin Col 22 lines 30-43).
Regarding claims 9 and 18, Mindlin in view of Shipes teaches wherein the virtual object is at a first location in the virtual environment at the first time and the virtual object is at a second location in the virtual environment at the second time, the second location different from the first location (Mindlin figure 3 and Col 9 lines 4-50, “media player devices 206 and 210 to track dynamic location changes for both virtual avatars 302 and 304, and, based on the tracked locations of the virtual avatars, identify the orientation (e.g., angles) of the virtual avatars with respect to one another”).
Regarding claim 11, Mindlin in view of Shipes teaches wherein the virtual object is on a first side of a median plane of the user's head at the first time (Mindlin figure 4, the avatar is on the right side of a median plane of the user’s head) and the virtual object is on a second side of the median plane of the user's head at the second time (Mindlin figure 3 shows the different spatial locations around the user, which can be on the left side of the center of the user model 500, and Col 9 lines 4-50, “media player devices 206 and 210 to track dynamic location changes for both virtual avatars 302 and 304, and, based on the tracked locations of the virtual avatars, identify the orientation (e.g., angles) of the virtual avatars with respect to one another”).
Regarding claim 12, Mindlin in view of Shipes teaches wherein the first ITD is determined based on a distance from the virtual object to a first ear of the user at the first time and further based on a distance from the virtual object to a second ear of the user at the first time (Mindlin Col 2 lines 61-66).
Regarding claim 13, Mindlin in view of Shipes teaches wherein the second ITD is determined based on a distance from the virtual object to a first ear of the user at the second time (Mindlin figure 3 and Col 9 lines 4-50, “media player devices 206 and 210 to track dynamic location changes for both virtual avatars 302 and 304, and, based on the tracked locations of the virtual avatars, identify the orientation (e.g., angles) of the virtual avatars with respect to one another”) and further based on a distance from the virtual object to a second ear of the user at the second time (Mindlin Col 2 lines 61-66).
Regarding claim 14, Mindlin teaches A system comprising: a first speaker associated with a wearable head device; a second speaker associated with the wearable head device (Mindlin figure 2 and Col 10 lines 1-11, “media player device 206 may generate a left-side version 224-L and a right-side version 224-R of the sound received from virtual experience provider system 202. Media player device 206 may present versions 224-L and 224-R of the sound to user 208 by rendering left-side version 224-L for the left ear of user 208 and right-side version 224-R for the right ear of user 208” and Col 8 lines 10-15); and one or more processors configured to perform a method comprising: receiving a first input audio signal, wherein the first input audio signal (Mindlin figure 2, and col 8 lines 18-24, “devices and systems of configuration 200 may present a sound (i.e., audio data representative of an acoustic sound that may be captured by a microphone and/or rendered by a loudspeaker) to either or both of users 208 and 212 that is generated by a virtual sound source within a virtual space that the users are experiencing”) is associated with a first direction from a virtual object to the user in a virtual environment at a first time (Mindlin figure 2, and col 8 lines 18-24, “devices and systems of configuration 200 may present a sound (i.e., audio data representative of an acoustic sound that may be captured by a microphone and/or rendered by a loudspeaker) to either or both of users 208 and 212 that is generated by a virtual sound source within a virtual space that the users are experiencing”); generating a first output audio signal and a second output audio signal (Mindlin figure 2, and Col 10 lines 1-12, “left-side version 224-L and right-side version 224-R of the sound received from virtual experience provider system 202”), wherein the generating the first output audio signal and the second output audio signal comprises applying a first ITD (Mindlin Col 16 lines 1-19, 
“interaural time difference…delays 606-L and 606-R” and Col 9 lines 4-50, “virtual experience provider system 202 may select an appropriate head-related impulse response from the library and generate the left-side and right-side versions of the sounds, which virtual experience provider system 202 may present to user 208 by transmitting the versions to media player device 206 as a spatialized audio signal represented by a transmission arrow 222 in configuration 200.” Each head-related impulse response has their respective delays, wherein when the orientation and location of each avatar changes, a new head-related impulse response is implemented) to the first input audio signal based on the first direction (Mindlin Col 14 lines 33-50, “generated at different spatial locations corresponding to potential orientations of a virtual avatar with respect to a virtual sound source”); presenting the first output audio signal to the user via the first speaker; presenting the second output audio signal to the user via the second speaker (Mindlin figure 2 and Col 10 lines 1-11, “media player device 206 may generate a left-side version 224-L and a right-side version 224-R of the sound received from virtual experience provider system 202. 
Media player device 206 may present versions 224-L and 224-R of the sound to user 208 by rendering left-side version 224-L for the left ear of user 208 and right-side version 224-R for the right ear of user 208” and Col 8 lines 10-15); receiving a second input audio signal (Mindlin figure 2, and col 8 lines 18-24, “devices and systems of configuration 200 may present a sound (i.e., audio data representative of an acoustic sound that may be captured by a microphone and/or rendered by a loudspeaker) to either or both of users 208 and 212 that is generated by a virtual sound source within a virtual space that the users are experiencing”), wherein the second input audio signal is associated with a second direction from the virtual object to the user in the virtual environment at a second time (Mindlin figure 3 and Col 9 lines 4-50, “media player devices 206 and 210 to track dynamic location changes for both virtual avatars 302 and 304, and, based on the tracked locations of the virtual avatars, identify the orientation (e.g., angles) of the virtual avatars with respect to one another. 
In this type of implementation, virtual experience provider system 202 may also maintain a library of head-related impulse responses such that virtual experience provider system 202 may select an appropriate head-related impulse response from the library and generate the left-side and right-side versions of the sounds, which virtual experience provider system 202 may present to user 208 by transmitting the versions to media player device 206 as a spatialized audio signal represented by a transmission arrow 222 in configuration 200”), wherein: the user has a first orientation with respect to the virtual environment at the first time, the user has a second orientation with the virtual environment at the second time, and the first orientation is different from the second orientation (Mindlin figure 3 and Col 9 lines 4-50, “media player devices 206 and 210 to track dynamic location changes for both virtual avatars 302 and 304, and, based on the tracked locations of the virtual avatars, identify the orientation (e.g., angles) of the virtual avatars with respect to one another”); generating a third output audio signal and a fourth output audio signal (Mindlin figure 2, and Col 10 lines 1-12, “left-side version 224-L and right-side version 224-R of the sound received from virtual experience provider system 202”), wherein the generating the third output audio signal and the fourth audio signal comprises applying a second ITD to the second input audio signal based on the second direction (Mindlin Col 16 lines 1-19, “interaural time difference…delays 606-L and 606-R” and Col 9 lines 4-50, “virtual experience provider system 202 may select an appropriate head-related impulse response from the library and generate the left-side and right-side versions of the sounds, which virtual experience provider system 202 may present to user 208 by transmitting the versions to media player device 206 as a spatialized audio signal represented by a transmission arrow 222 in configuration 
200.” Each head-related impulse response has its respective delays, wherein, when the orientation and location of each avatar change, a new head-related impulse response is implemented); presenting the third output audio signal to the user via the first speaker; and presenting the fourth output audio signal to the user via the second speaker (Mindlin figure 2 and Col 10 lines 1-11, “media player device 206 may generate a left-side version 224-L and a right-side version 224-R of the sound received from virtual experience provider system 202. Media player device 206 may present versions 224-L and 224-R of the sound to user 208 by rendering left-side version 224-L for the left ear of user 208 and right-side version 224-R for the right ear of user 208” and Col 8 lines 10-15); however, Mindlin does not explicitly teach applying a first and second interaural time delay (ITD), and presenting the first and third output audio signal to the user via a first speaker; presenting the second and fourth output audio signal to the user via a second speaker.
Shipes teaches applying a first and second interaural time delay (ITD) (Shipes figure 1, and ¶0054, “The directly propagating and reflections of the virtual sound source may be delayed prior to be output to the user with delays based on directed or reflected propagation path length as the case may be”), and presenting the first and third output audio signal to the user via a first speaker; presenting the second and fourth output audio signal to the user via a second speaker (Shipes figure 1, speaker ports 134 and 136).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Shipes to improve the known method of Mindlin to achieve the predictable result of more realistic spatialized audio (Shipes ¶0005).
Regarding claim 20, Mindlin teaches A non-transitory computer-readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform a method comprising: receiving a first input audio signal, wherein the first input audio signal (Mindlin figure 2, and col 8 lines 18-24, “devices and systems of configuration 200 may present a sound (i.e., audio data representative of an acoustic sound that may be captured by a microphone and/or rendered by a loudspeaker) to either or both of users 208 and 212 that is generated by a virtual sound source within a virtual space that the users are experiencing”) is associated with a first direction from a virtual object to the user in a virtual environment at a first time (Mindlin figure 2, and col 8 lines 18-24, “devices and systems of configuration 200 may present a sound (i.e., audio data representative of an acoustic sound that may be captured by a microphone and/or rendered by a loudspeaker) to either or both of users 208 and 212 that is generated by a virtual sound source within a virtual space that the users are experiencing”); generating a first output audio signal and a second output audio signal (Mindlin figure 2, and Col 10 lines 1-12, “left-side version 224-L and right-side version 224-R of the sound received from virtual experience provider system 202”), wherein the generating the first output audio signal and the second output audio signal comprises applying a first ITD (Mindlin Col 16 lines 1-19, “interaural time difference…delays 606-L and 606-R” and Col 9 lines 4-50, “virtual experience provider system 202 may select an appropriate head-related impulse response from the library and generate the left-side and right-side versions of the sounds, which virtual experience provider system 202 may present to user 208 by transmitting the versions to media player device 206 as a spatialized audio signal represented by a transmission arrow 222 in configuration 200.” Each 
head-related impulse response has their respective delays, wherein when the orientation and location of each avatar changes, a new head-related impulse response is implemented) to the first input audio signal based on the first direction (Mindlin Col 14 lines 33-50, “generated at different spatial locations corresponding to potential orientations of a virtual avatar with respect to a virtual sound source”); presenting the first output audio signal to the user via a first speaker; presenting the second output audio signal to the user via a second speaker (Mindlin figure 2 and Col 10 lines 1-11, “media player device 206 may generate a left-side version 224-L and a right-side version 224-R of the sound received from virtual experience provider system 202. Media player device 206 may present versions 224-L and 224-R of the sound to user 208 by rendering left-side version 224-L for the left ear of user 208 and right-side version 224-R for the right ear of user 208” and Col 8 lines 10-15); receiving a second input audio signal (Mindlin figure 2, and col 8 lines 18-24, “devices and systems of configuration 200 may present a sound (i.e., audio data representative of an acoustic sound that may be captured by a microphone and/or rendered by a loudspeaker) to either or both of users 208 and 212 that is generated by a virtual sound source within a virtual space that the users are experiencing”, wherein the second input audio signal is associated with a second direction from the virtual object to the user in the virtual environment at a second time (Mindlin figure 3 and Col 9 lines 4-50, “media player devices 206 and 210 to track dynamic location changes for both virtual avatars 302 and 304, and, based on the tracked locations of the virtual avatars, identify the orientation (e.g., angles) of the virtual avatars with respect to one another. 
In this type of implementation, virtual experience provider system 202 may also maintain a library of head-related impulse responses such that virtual experience provider system 202 may select an appropriate head-related impulse response from the library and generate the left-side and right-side versions of the sounds, which virtual experience provider system 202 may present to user 208 by transmitting the versions to media player device 206 as a spatialized audio signal represented by a transmission arrow 222 in configuration 200”), wherein: the user has a first orientation with respect to the virtual environment at the first time, the user has a second orientation with the virtual environment at the second time, and the first orientation is different from the second orientation (Mindlin figure 3 and Col 9 lines 4-50, “media player devices 206 and 210 to track dynamic location changes for both virtual avatars 302 and 304, and, based on the tracked locations of the virtual avatars, identify the orientation (e.g., angles) of the virtual avatars with respect to one another”); generating a third output audio signal and a fourth output audio signal (Mindlin figure 2, and Col 10 lines 1-12, “left-side version 224-L and right-side version 224-R of the sound received from virtual experience provider system 202”), wherein the generating the third output audio signal and the fourth audio signal comprises applying a second ITD to the second input audio signal based on the second direction (Mindlin Col 16 lines 1-19, “interaural time difference…delays 606-L and 606-R” and Col 9 lines 4-50, “virtual experience provider system 202 may select an appropriate head-related impulse response from the library and generate the left-side and right-side versions of the sounds, which virtual experience provider system 202 may present to user 208 by transmitting the versions to media player device 206 as a spatialized audio signal represented by a transmission arrow 222 in configuration 
200.” Each head-related impulse response has its respective delays, wherein, when the orientation and location of each avatar change, a new head-related impulse response is implemented); presenting the third output audio signal to the user via the first speaker; and presenting the fourth output audio signal to the user via the second speaker (Mindlin figure 2 and Col 10 lines 1-11, “media player device 206 may generate a left-side version 224-L and a right-side version 224-R of the sound received from virtual experience provider system 202. Media player device 206 may present versions 224-L and 224-R of the sound to user 208 by rendering left-side version 224-L for the left ear of user 208 and right-side version 224-R for the right ear of user 208” and Col 8 lines 10-15); however, Mindlin does not explicitly teach applying a first and second interaural time delay (ITD), and presenting the first and third output audio signal to the user via a first speaker; presenting the second and fourth output audio signal to the user via a second speaker.
Shipes teaches applying a first and second interaural time delay (ITD) (Shipes figure 1, and ¶0054, “The directly propagating and reflections of the virtual sound source may be delayed prior to be output to the user with delays based on directed or reflected propagation path length as the case may be”), and presenting the first and third output audio signal to the user via a first speaker; presenting the second and fourth output audio signal to the user via a second speaker (Shipes figure 1, speaker ports 134 and 136).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Shipes to improve the known method of Mindlin to achieve the predictable result of more realistic spatialized audio (Shipes ¶0005).
Claim(s) 4-5 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mindlin (US 10206055) in view of Shipes (US 2018/0284882) in further view of Makino (US 6862356).
Regarding claims 4 and 16, Mindlin in view of Shipes teaches wherein the generating the first output audio signal comprises applying a gain to the first input audio signal, wherein the gain is determined based on the first direction (Mindlin figure 8, Col 7 line 42 – End of Col 18, “head-related impulse response” and Col 5 lines 59-61. See also Pertinent art Flaks ¶0003, “The HRTF describes how a given sound wave input (parameterized as frequency and source location) is filtered by the diffraction and reflection properties of the head and pinna, before the sound reaches the eardrum and inner ear”), however does not explicitly teach applying a gain to the first input audio signal, wherein the gain is determined based on the first direction.
Makino teaches applying a gain to the first input audio signal, wherein the gain is determined based on the first direction (Makino Col 1 lines 51-60, “A head related transfer function (HRTF) correction method is known. In the HRTF basis correction method, a sound field of a concert hall or the like is simulated or a sound image is localized in a desired direction by controlling a transfer function (amplitude and phase characteristics) of a space between a speaker and the ears of a listener”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Makino to improve the known method of Mindlin in view of Shipes to achieve the predictable result of reproducing a more realistic sound field for the user.
Regarding claim 5, Mindlin in view of Shipes in further view of Makino teaches wherein the generating the third output audio signal (Mindlin figure 8, Col 7 line 42 – end of Col 18, “head-related impulse response” and Col 5 lines 59-61. See also pertinent art Flaks ¶0003, “The HRTF describes how a given sound wave input (parameterized as frequency and source location) is filtered by the diffraction and reflection properties of the head and pinna, before the sound reaches the eardrum and inner ear”) comprises applying a gain to the second input audio signal, wherein the gain is determined based on the second direction (Makino Col 1 lines 51-60, “A head related transfer function (HRTF) correction method is known. In the HRTF basis correction method, a sound field of a concert hall or the like is simulated or a sound image is localized in a desired direction by controlling a transfer function (amplitude and phase characteristics) of a space between a speaker and the ears of a listener”).
Claim(s) 10 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mindlin (US 10206055) in view of Shipes (US 2018/0284882) in further view of Kohler (US 2017/0358140).
Regarding claims 10 and 19, Mindlin in view of Shipes teaches determining one or more of the first orientation and the second orientation, wherein: the first ITD is determined based on the first orientation, and the second ITD is determined based on the second orientation (Mindlin Col 16 lines 1-19, “interaural time difference…delays 606-L and 606-R” and Col 9 lines 4-50, “virtual experience provider system 202 may select an appropriate head-related impulse response from the library and generate the left-side and right-side versions of the sounds, which virtual experience provider system 202 may present to user 208 by transmitting the versions to media player device 206 as a spatialized audio signal represented by a transmission arrow 222 in configuration 200.” Each head-related impulse response has its respective delays, wherein when the orientation and location of each avatar changes, a new head-related impulse response is implemented), however does not explicitly teach using a sensor to determine the first and second orientations.
Kohler teaches determining, via one or more sensors, one or more of the first orientation and the second orientation (Kohler ¶0016).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Kohler to improve the known method of Mindlin in view of Shipes to achieve the predictable result of a more accurate representation of the user’s location and orientation.
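For clarity of record regarding the orientation-dependent ITD discussed in the rejection above: the dependence of interaural time delay on head orientation can be illustrated with the well-known Woodworth spherical-head approximation, ITD ≈ (a/c)(θ + sin θ), where a is the head radius, c the speed of sound, and θ the source azimuth relative to the head. The sketch below is a textbook approximation offered only for illustration; it is not the method of any cited reference, and the head-radius constant and function names are assumptions.

```python
import math

HEAD_RADIUS_M = 0.0875   # typical spherical-head radius (assumed value)
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def interaural_time_delay(source_azimuth_rad: float,
                          head_yaw_rad: float) -> float:
    """Woodworth spherical-head ITD approximation (in seconds).

    The effective azimuth is the source direction relative to the
    head's current orientation, so updating the head yaw (e.g. from
    an orientation sensor) changes the resulting ITD.
    Most accurate for angles within about +/- 90 degrees.
    """
    theta = source_azimuth_rad - head_yaw_rad
    # Wrap to [-pi, pi] so the formula sees the signed angle.
    theta = math.atan2(math.sin(theta), math.cos(theta))
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))
```

A source directly ahead yields zero delay; a source 90 degrees to one side yields roughly 0.65 ms, and turning the head toward the source reduces the delay accordingly.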
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Flaks (US 2012/0093320).
Applicant's submission of an information disclosure statement under 37 CFR 1.97(c) with the fee set forth in 37 CFR 1.17(p) on 1/21/2026 prompted the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 609.04(b). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NORMAN YU whose telephone number is (571)270-7436. The examiner can normally be reached on Mon - Fri 11am-7pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ahmad Matar can be reached on 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Any response to this action should be mailed to:
Commissioner of Patents and Trademarks
P.O. Box 1450
Alexandria, VA 22313-1450
Or faxed to:
(571) 273-8300 for formal communications intended for entry. For informal or draft communications, please label “PROPOSED” or “DRAFT”.
Hand-delivered responses should be brought to:
Customer Service Window
Randolph Building
401 Dulany Street
Arlington, VA 22314
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NORMAN YU/Primary Examiner, Art Unit 2693