DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
1. The amendment filed October 1, 2025 has been entered. Claims 1, 3, 5, 7-16, 18, 20, and 22-26 are pending. Claims 2, 4, 6, 17, 19, and 21 are canceled. Claims 1, 3, 5, 7, 9-16, 18, 20, and 22-25 are amended. Claim 26 is new.
Claim Rejections - 35 USC § 112
2. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
3. Claim 18 is rejected under 35 U.S.C. 112(b) as being indefinite because it depends from canceled base Claim 17, which renders the claim incomplete (see MPEP 608.01(n)(V)).
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1, 3, 5, 7, 9, 10, 14-16, 18, 20, and 22-26 are rejected under 35 U.S.C. 103 as being unpatentable over Petrov (U.S. Patent No. 9,648,438 B1) in view of Johnson et al. (U.S. Pub. No. 2016/0284136 A1, hereinafter "Johnson") in view of Khaleghimeybodi et al. (U.S. Pub. No. 2021/0314720 A1, hereinafter "Khaleghimeybodi"), and further in view of Katayama et al. (U.S. Pub. No. 2003/0147543 A1, hereinafter "Katayama").
Regarding Claim 1, Petrov teaches a system for generating a personalised Head-Related Transfer Function, HRTF, for a user (system 100, 200 for generating personalised HRTF for user, Figs. 1, 2A and 2B, Col. 3, Ln. 19 thru Col. 6, Ln. 28, Col. 9, Ln. 3 thru Col. 10, Ln. 13), the system comprising:
a sound source (speaker 180, 280, Figs. 1, 2A and 2B);
a virtual reality headset (VR headset 105, 205, Figs. 1, 2A and 2B) comprising:
a left microphone arranged to be at a left ear of the user, when the user device is worn (left microphone 285, Fig. 2A);
a right microphone arranged to be at a right ear of the user, when the user device is worn (right microphone 285, Fig. 2A), and
a display (display 115, Fig. 1, Col. 3, Lns. 41-50); and
a controller configured (virtual reality console 110, 210, Figs. 1, 2A and 2B) to,
prompt the user to assume different positions (console 210 prompts user 265 to move to series of different second positions, Figs. 2A and 2B, Col. 9, Ln. 3 thru Col. 10, Ln. 13),
control the sound source to emit a predetermined sound signal when the user is at each position (console 210 configures speaker 280 to generate test sound when user 265 is at second positions, Figs. 2A and 2B, Col. 9, Ln. 3 thru Col. 10, Ln. 13),
obtain a detected sound signal from each of the left and right microphones when the user is at each position (console 210 receives the test sound from left and right microphones 285 with user at second positions, Figs. 2A and 2B, Col. 9, Ln. 3 thru Col. 10, Ln. 13),
generate a personalised HRTF based on the predetermined sound signal and the detected sound signal for each microphone and each position (each microphone converts test sound into an audio sample and console 210 receives the audio samples from the microphones and generates a corresponding HRTF for each ear, Figs. 2A and 2B, Col. 9, Ln. 3 thru Col. 10, Ln. 13).
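As technical background for the mapping above: a personalised HRTF is conventionally estimated from a known test signal and the corresponding in-ear recording by frequency-domain deconvolution. The following Python sketch is purely illustrative and is not taken from Petrov; the function name, variable names, and regularization constant are assumptions.

    import numpy as np

    def estimate_hrtf(test_signal, recorded, eps=1e-8):
        # Regularized deconvolution: divide the spectrum of the in-ear
        # recording by the spectrum of the known excitation. One call per
        # ear and per head position yields one HRTF measurement each.
        n = len(test_signal) + len(recorded) - 1   # linear-convolution length
        X = np.fft.rfft(test_signal, n)            # excitation spectrum
        Y = np.fft.rfft(recorded, n)               # response spectrum
        # eps keeps the division stable where the excitation has little energy.
        return Y * np.conj(X) / (np.abs(X) ** 2 + eps)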
Petrov fails to explicitly teach controlling the display to show a plurality of virtual targets, wherein each virtual target is associated with one of the positions,
for at least one of the positions, determine a hearing factor based on the detected sound signal from each of the left and right microphones at that position, wherein the virtual target for each of those one or more positions comprises a visual element indicating the hearing factor determined at that position;
generate a personalised HRTF based on the predetermined sound signal and the detected sound signal for each microphone and each position and the one or more determined hearing factors.
However, Johnson teaches controlling the display to show a plurality of virtual targets, wherein each virtual target is associated with one of the positions (display of HMD 104 is controlled to show multiple virtual targets 404, 406, 408, each associated with one position, Figs. 4A and 4B, Paras. [0050] and [0051]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system (as taught by Petrov) to include the plurality of virtual targets (as taught by Johnson). Doing so improves the accuracy and efficiency of creating personalized spatial audio.
However, Khaleghimeybodi teaches for at least one of the positions, determine a hearing factor based on the detected sound signal from each of the left and right microphones at that position (pinna geometric information [hearing factor] is determined for a given test sound [detected sound signal] from the left and right microphones and the geometric information is sent to the headset 220 for processing spatial audio for AR, VR, or MR, Para. [0048]),
generate a personalised HRTF based on the one or more determined hearing factors (personalised HRTFs corresponding to the test information may be generated using the geometric information, Para. [0048]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system (as taught by Petrov in view of Johnson) to include the hearing factor and the personalised HRTF based on the hearing factor (as taught by Khaleghimeybodi). Doing so enables a tailor-made virtual audio experience that is unique to the individual.
However, Katayama teaches a visual element indicating the hearing factor (face width and auricle size [hearing factor] can be derived and displayed, Figs. 15A-F, Paras. [0067]-[0071]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system (as taught by Petrov in view of Johnson in view of Khaleghimeybodi) to include the virtual target comprising a visual element indicating the hearing factor (as taught by Katayama). Doing so enhances spatial audio realism, improves localization, and increases immersion in VR.
Regarding Claim 3, Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama teaches wherein the virtual targets are displayed in a virtual 3D environment shown by the display, or the virtual targets augment a real 3D environment shown through the display (Petrov, the image presented by the HMD 205 for performing HRTF calibration includes an indicator 230 in a virtual space 220 [3D environment], Figs. 2A and 2B, Col. 9, Lns. 3-28).
Regarding Claim 5, Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama teaches wherein each of the plurality of virtual targets (Petrov, display 290 shows a completion indicator, Figs. 2C and 2D, Col. 10, Lns. 14-42) further comprises a completion indicator indicating an amount of sound signal detection which has been performed for the second position corresponding to the virtual target (Petrov, the completion indicator of display 290 indicates one signal detection performed, Figs. 2C and 2D, Col. 10, Lns. 14-42).
Regarding Claim 7, Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama teaches wherein the different positions comprise different head orientations (Petrov, console 210 determines head orientation is aligned with target 230 at the different second positions, Col. 9, Ln. 3 thru Col. 10, Ln. 13).
Regarding Claim 9, Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama teaches wherein the virtual reality headset further comprises a second position sensor (Petrov, VR headset 105 comprises second position sensors 125, Fig. 1, Col. 4, Lns. 20-31), and the controller is configured to detect when the user is in each position using the second position sensor (Petrov, the console 110 HRTF calibration engine 152 can also confirm whether the indicator is aligned with the head orientation through the tracking module 150 (e.g., by use of information obtained from the imaging device 135, the position sensors 125, or both), Fig. 1, Col. 7, Lns. 25-45).
Regarding Claim 10, Petrov teaches wherein the virtual reality headset comprises the sound source (Petrov, the VR headset 105 presents media to a user. Examples of media presented by the VR headset 105 include audio [the sound source], Col. 3, Lns. 41-50).
Regarding Claim 14, Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama teaches wherein the left and right microphones are respectively arranged to be in the left ear canal and right ear canal of the user, when the virtual reality headset is worn (Petrov, microphone 185 can be attached next to an ear canal, Col. 5, Ln. 60 thru Col. 6, Ln. 9).
Regarding Claim 15, Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama teaches wherein generating the personalised HRTF comprises:
obtaining a predetermined default HRTF model (Khaleghimeybodi, audio server 280 may use a HRTF model [default HRTF model] to predict an HRTF for a given test sound and audio signal combination, Para. [0048]); and
modifying the default HRTF model based on the one or more obtained hearing factors (Khaleghimeybodi, the default HRTF model is modified based on determined geometry of the pinnae [hearing factor], Para. [0048]).
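As technical background for the mapping of Claim 15: one generic way a default HRTF model can be modified by a geometry-derived hearing factor (purely illustrative, and not asserted to be Khaleghimeybodi's method) is to frequency-scale the default magnitude response by an anthropometric ratio, since the spectral features of a larger pinna occur at lower frequencies. The scale factor and function below are hypothetical.

    import numpy as np

    def scale_default_hrtf(default_hrtf, scale):
        # scale is a hypothetical hearing factor, e.g. the ratio of the
        # user's pinna size to the default model's; scale > 1 shifts
        # spectral features toward lower frequencies. Samples requested
        # past the last bin are clamped by np.interp.
        bins = np.arange(len(default_hrtf))
        return np.interp(bins * scale, bins, np.abs(default_hrtf))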
Regarding Claim 16, it is rejected on the same basis as Claim 1; the corresponding method can be found in Petrov (Fig. 3, Col. 10, Ln. 43 thru Col. 11, Ln. 41).
Regarding Claim 18, it is rejected on the same basis as Claim 3; the corresponding method can be found in Petrov (Fig. 3, Col. 10, Ln. 43 thru Col. 11, Ln. 41).
Regarding Claim 20, it is rejected on the same basis as Claim 5; the corresponding method can be found in Petrov (Fig. 3, Col. 10, Ln. 43 thru Col. 11, Ln. 41).
Regarding Claim 22, it is rejected on the same basis as Claim 7; the corresponding method can be found in Petrov (Fig. 3, Col. 10, Ln. 43 thru Col. 11, Ln. 41).
Regarding Claim 23, it is rejected on the same basis as Claim 8; the corresponding method can be found in Petrov (Fig. 3, Col. 10, Ln. 43 thru Col. 11, Ln. 41).
Regarding Claim 24, it is rejected on the same basis as Claim 9; the corresponding method can be found in Petrov (Fig. 3, Col. 10, Ln. 43 thru Col. 11, Ln. 41).
Regarding Claim 25, it is rejected on the same basis as Claim 10; the corresponding method can be found in Petrov (Fig. 3, Col. 10, Ln. 43 thru Col. 11, Ln. 41).
Regarding Claim 26, it is rejected on the same basis as Claim 1; the corresponding non-transitory computer-readable medium can be found in Petrov (Fig. 1, Col. 6, Lns. 10-37).
6. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Petrov (U.S. Patent No. 9,648,438 B1) in view of Johnson et al. (U.S. Pub. No. 2016/0284136 A1, hereinafter "Johnson") in view of Khaleghimeybodi et al. (U.S. Pub. No. 2021/0314720 A1, hereinafter "Khaleghimeybodi") in view of Katayama et al. (U.S. Pub. No. 2003/0147543 A1, hereinafter "Katayama"), and further in view of Magariyachi et al. (U.S. Pub. No. 2020/0068334 A1, hereinafter "Magariyachi").
Regarding Claim 8, Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama fails to explicitly teach wherein the sound source comprises a first position sensor.
However, Magariyachi teaches wherein the sound source comprises a first position sensor (a sensor [first position sensor] is provided in the speaker 13, Fig. 7, Para. [0102]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the HRTF system (as taught by Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama) with the sound source position sensor (as taught by Magariyachi). Doing so allows the position of the sound source to be displayed as a target for HRTF measurement (Magariyachi, Para. [0102]).
7. Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Petrov (U.S. Patent No. 9,648,438 B1) in view of Johnson et al. (U.S. Pub. No. 2016/0284136 A1, hereinafter "Johnson") in view of Khaleghimeybodi et al. (U.S. Pub. No. 2021/0314720 A1, hereinafter "Khaleghimeybodi") in view of Katayama et al. (U.S. Pub. No. 2003/0147543 A1, hereinafter "Katayama"), and further in view of Cappello et al. (U.S. Pub. No. 2020/0374647 A1, hereinafter "Cappello").
Regarding Claim 11, Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama fails to explicitly teach wherein the hearing factor is an interaural time delay calculated between the left ear and the right ear for at least one of the positions.
However, Cappello teaches wherein the hearing factor is an interaural time delay calculated between the left ear and the right ear for at least one of the positions (the in-ear microphones are provided to measure a frequency response to the received sounds, and processing may be applied to generate HRTFs for each sound source position in dependence upon the measured frequency response. Interaural time and level differences may also be identified from analysis of the audio captured by the in-ear microphones, Para. [0028]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the HRTF system (as taught by Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama) with the calculation of an interaural time delay (as taught by Cappello). Doing so allows unique HRTFs generated for a user to be used as approximations of the correct HRTF for another user and for one or more other sound source positions (Cappello, Para. [0029]).
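As background on the measurement Cappello describes: an interaural time delay is conventionally estimated as the lag that maximizes the cross-correlation of the left- and right-ear recordings. The Python sketch below is a generic illustration under assumed names, not code from the reference.

    import numpy as np

    def interaural_time_delay(left, right, fs):
        # Lag (in samples) at which the left recording best aligns with
        # the right; a positive lag means the left signal lags the right,
        # i.e. the source is toward the right ear.
        corr = np.correlate(left, right, mode="full")
        lag = int(np.argmax(corr)) - (len(right) - 1)
        return lag / fs   # delay in seconds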
Regarding Claim 12, Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama fails to explicitly teach wherein the hearing factor is an interaural level difference calculated between the left ear and the right ear for at least one of the positions.
However, Cappello teaches wherein the hearing factor is an interaural level difference calculated between the left ear and the right ear for at least one of the positions (the in-ear microphones are provided to measure a frequency response to the received sounds, and processing may be applied to generate HRTFs for each sound source position in dependence upon the measured frequency response. Interaural time and level differences may also be identified from analysis of the audio captured by the in-ear microphones, Para. [0028]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the HRTF system (as taught by Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama) with the calculation of an interaural level difference (as taught by Cappello). Doing so allows unique HRTFs generated for a user to be used as approximations of the correct HRTF for another user and for one or more other sound source positions (Cappello, Para. [0029]).
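Similarly, as background only: an interaural level difference is conventionally the ratio of the two in-ear signal levels expressed in decibels. The sketch below is illustrative, with assumed names.

    import numpy as np

    def interaural_level_difference(left, right, eps=1e-12):
        # Positive result: the left ear receives the louder signal.
        rms = lambda x: np.sqrt(np.mean(np.square(x)) + eps)
        return 20.0 * np.log10(rms(left) / rms(right))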
Regarding Claim 13, Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama fails to explicitly teach wherein the hearing factor is a spectral peak or notch associated with a physical feature of the user based on the detected sound signal for at least one of the microphones and at least one of the positions.
However, Cappello teaches wherein the hearing factor is a spectral peak or notch associated with a physical feature of the user based on the detected sound signal for at least one of the microphones and at least one of the positions (the step of generating one or more lower quality HRTF features for the head under test may also involve determining one or more spectral cues for the user. The spectral cues may correspond to one or more peaks and notches in the amplitude-frequency response of the audio signal detected at the left and right in-ear microphones, Paras. [0063] and [0064]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the HRTF system (as taught by Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama) with the identification of a spectral peak or notch (as taught by Cappello). Doing so allows the spectral peaks and notches to be used with interaural time and level differences to obtain a high quality HRTF for a user (Cappello, Paras. [0069]-[0072]).
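As background on the spectral cues at issue in Claim 13: pinna-related peaks and notches are conventionally located as local extrema of the magnitude response measured at an in-ear microphone. The Python sketch below is a generic illustration; the FFT size and prominence threshold are assumptions, not values from Cappello.

    import numpy as np
    from scipy.signal import find_peaks

    def spectral_notches(recorded, fs, n_fft=4096, prominence_db=6.0):
        # Magnitude response, in dB, of one ear's recording.
        mag_db = 20.0 * np.log10(np.abs(np.fft.rfft(recorded, n_fft)) + 1e-12)
        freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
        # Notches are dips, i.e. peaks of the negated curve.
        idx, _ = find_peaks(-mag_db, prominence=prominence_db)
        return freqs[idx], mag_db[idx]   # notch frequencies (Hz), depths (dB)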
Response to Arguments
8. Applicant's arguments filed October 1, 2025 have been fully considered but they are not persuasive.
Regarding Independent Claims 1 and 16, Applicant argues (see Applicant's Remarks, page 7) that Claim 1 has been amended to recite that a system prompts a user to "assume different positions by controlling [a] display [of a virtual reality headset] to show a plurality of virtual targets, wherein each virtual target is associated with one of the positions" and "for at least one of the positions, determine a hearing factor based on the detected sound signal from each of ... left and right microphones [of the virtual reality headset] at that position, wherein the virtual target for each of those one or more positions comprises a visual element indicating the hearing factor determined at that position." Independent Claim 16 has been amended correspondingly. Applicant argues that the applied references do not disclose, teach, or suggest at least this new feature, and submits that it is self-evident that the introduction of this feature merits further search and/or consideration.
Applicant further argues that the other claims in the application each depend from the independent claims and are allowable for at least the above reasons, requests individual consideration of each claim on its own merits because each claim is deemed to define additional aspects of the disclosure, and respectfully requests withdrawal of all rejections.
In response to Applicant's arguments above, Independent Claims 1 and 16 have been rejected on a new ground of rejection under 35 U.S.C. 103 as being unpatentable over Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama.
Johnson teaches controlling the display to show a plurality of virtual targets, wherein each virtual target is associated with one of the positions (Figs. 4A and 4B, Paras. [0050] and [0051]).
Khaleghimeybodi teaches for at least one of the positions, determine a hearing factor based on the detected sound signal from each of the left and right microphones at that position (Para. [0048]).
Katayama teaches a visual element indicating the hearing factor (Figs. 15A-F, Paras. [0067]-[0071]).
The combination of the teachings of Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama renders Claims 1 and 16 obvious.
The rejections of Claims 1 and 16 based on a new ground of rejection under 35 U.S.C. 103 as being unpatentable over Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama are maintained.
Dependent Claims 3, 5, 7, 9, 10, 14, 15, 18, and 20 have been rejected on a new ground of rejection under 35 U.S.C. 103 as being unpatentable over Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama.
The rejections of Claims 3, 5, 7, 9, 10, 14, 15, 18, and 20 based on a new ground of rejection under 35 U.S.C. 103 as being unpatentable over Petrov in view of Johnson in view of Khaleghimeybodi, and further in view of Katayama are maintained.
Dependent Claim 8 has been rejected on a new ground of rejection under 35 U.S.C. 103 as being unpatentable over Petrov in view of Johnson in view of Khaleghimeybodi in view of Katayama, and further in view of Magariyachi.
The rejection of Claim 8 based on a new ground of rejection under 35 U.S.C. 103 as being unpatentable over Petrov in view of Johnson in view of Khaleghimeybodi in view of Katayama, and further in view of Magariyachi is maintained.
Dependent Claims 11-13 have been rejected on a new ground of rejection under 35 U.S.C. 103 as being unpatentable over Petrov in view of Johnson in view of Khaleghimeybodi in view of Katayama, and further in view of Cappello.
The rejections of Claims 11-13 based on a new ground of rejection under 35 U.S.C. 103 as being unpatentable over Petrov in view of Johnson in view of Khaleghimeybodi in view of Katayama, and further in view of Cappello are maintained.
Conclusion
9. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lester, III (U.S. Pub. No. 2016/0119731 A1) teaches measuring a head-related transfer function.
Reijniers et al. (U.S. Pub. No. 2019/0208348 A1) teaches estimating an individualized head-related transfer function.
10. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHIMEZIE E BEKEE whose telephone number is (571)272-0202. The examiner can normally be reached M-F, 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHIMEZIE EZERIWE BEKEE/Examiner, Art Unit 2691
/DUC NGUYEN/Supervisory Patent Examiner, Art Unit 2691