Prosecution Insights
Last updated: April 19, 2026
Application No. 18/771,206

AUDIO DEVICE AND WORKING METHOD THEREOF, VR DEVICE

Status: Non-Final OA (§103)
Filed: Jul 12, 2024
Examiner: GANMAVO, KUASSI A
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 1m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 70% (above average; 415 granted / 593 resolved; +8.0% vs TC avg)
Interview Lift: +20.3% (strong; allow rate in resolved cases with an interview vs. without)
Typical Timeline: 3y 1m avg prosecution; 40 applications currently pending
Career History: 633 total applications across all art units
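The headline figures above are simple ratios over the examiner's resolved cases. As a quick sanity check, they can be recomputed from the raw counts quoted in this report; the helper function below is the editor's own illustration, not part of any analytics API:

```python
# Recompute the examiner's headline statistics from the raw counts
# quoted in this report (415 granted of 593 resolved, +8.0% vs the
# Tech Center average). The function is illustrative, not an API.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(415, 593)
print(f"Career allow rate: {career:.1f}%")   # ~70.0%

# The "+8.0% vs TC avg" delta implies a Tech Center average near 62%.
tc_average = career - 8.0
print(f"Implied TC average estimate: {tc_average:.1f}%")
```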

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)

Deltas are vs. the Tech Center average estimate • Based on career data from 593 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Based on the ADS filed 07/12/2024, the domestic priority documents are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/27/2024 was filed after the filing date of the application on 07/12/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder coupled with functional language, without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "a positioning unit," "an acquisition unit configured to," "a computing unit configured to," "an image processing unit," "a transmitting and receiving unit," and "computing unit" in claims 1-14.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Litovsky et al. (US 10,531,186 B1) in view of Torres et al. (US 2020/0037097 A1).

Regarding claim 1, Litovsky et al. disclose an audio device, comprising: a sound chamber (Litovsky et al.; Fig 1A; acoustic waveguide 44); at least one speaker located within the sound chamber (Litovsky et al.; Fig 1A; driver 42); but do not expressly disclose an acquisition unit, configured to respectively obtain audio data and a target ear spectral curve HRTF at a position of a virtual sound source corresponding to the audio data; and a computing unit, configured to process the audio data based on the target HRTF, generate a target sound signal, and output the target sound signal to the speaker.

However, in the same field of endeavor, Torres et al. disclose a method further comprising: an acquisition unit, configured to respectively obtain audio data and a target ear spectral curve HRTF at a position of a virtual sound source corresponding to the audio data (Torres et al.; Para [0046], [0050]); and a computing unit, configured to process the audio data based on the target HRTF, generate a target sound signal, and output the target sound signal to the speaker (Torres et al.; Para [0046], [0050]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound virtualization taught by Torres to virtualize the audio taught by Litovsky. The motivation to do so would have been to improve the realism of the virtual sound quality (Torres et al.; Para [0087]).

Regarding claim 2, Litovsky et al. in view of Torres et al. disclose the audio device according to claim 1, wherein the audio device is a neck hanging audio device (Litovsky et al.; Fig 1B), and the sound chamber is located within the neck hanging portion of the neck hanging audio device (Litovsky et al.; Fig 1A; waveguide 44 is located within the neck hanging portion of the neck hanging audio device).

Regarding claim 11, Litovsky et al. in view of Torres et al. disclose a working method of an audio device, applied to an audio device according to claim 1 (Litovsky et al. in view of Torres et al. disclose claim 1), but do not expressly disclose: respectively obtaining audio data and a target ear spectral curve HRTF at a position of a virtual sound source corresponding to the audio data; and processing the audio data based on the target HRTF, generating a target sound signal, and outputting the target sound signal to the speaker.

However, in the same field of endeavor, Torres et al. disclose a method further comprising: respectively obtaining audio data and a target ear spectral curve HRTF at a position of a virtual sound source corresponding to the audio data (Torres et al.; Para [0046], [0050]); and processing the audio data based on the target HRTF, generating a target sound signal, and outputting the target sound signal to the speaker (Torres et al.; Para [0046], [0050]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound virtualization taught by Torres to virtualize the audio taught by Litovsky.
The motivation to do so would have been to improve the realism of the virtual sound quality (Torres et al.; Para [0087]).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Litovsky et al. (US 10,531,186 B1) in view of Torres et al. (US 2020/0037097 A1), and further in view of Litovsky et al. '314 (US 2018/0103314 A1).

Regarding claim 3, Litovsky et al. in view of Torres et al. disclose the audio device according to claim 2, but do not expressly disclose further comprising: a power supply located within the neck hanging portion, the speaker being connected to the power supply through wired means. However, in the same field of endeavor, Litovsky et al. '314 disclose a device further comprising: a power supply located within the neck hanging portion (Litovsky et al. '314; Fig 10; power supply 106 through PCB 104); the speaker connected to the power supply through wired means (Litovsky et al. '314; Fig 10; transducer 14 connected to the power supply through wired means). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the energy circuit taught by Litovsky et al. '314 as the energy circuit in the audio device taught by Litovsky. The motivation to do so would have been to provide energy to the acoustic device (Litovsky et al. '314; Para [0039]).

Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Litovsky et al. (US 10,531,186 B1) in view of Torres et al. (US 2020/0037097 A1), and further in view of Ohura (US 2019/0320258 A1).

Regarding claim 4, Litovsky et al. in view of Torres et al. disclose the audio device according to claim 2, comprising an audio waveguide tube located within the sound chamber (Litovsky et al.; Fig 1A; waveguide tube 45 is located within sound chamber 44); but do not expressly disclose further comprising: at least one low-frequency passive diaphragm.
However, in the same field of endeavor, Ohura discloses an audio device further comprising: at least one low-frequency passive diaphragm (Ohura; Para [0106]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio driver taught by Ohura as the audio driver in the system taught by Litovsky. The motivation to do so would have been to provide a speaker apparatus in which the timbre is less likely to change due to variations in the relative positions of the speaker and the ear (Ohura; Para [0053]).

Regarding claim 5, Litovsky et al. in view of Torres et al., and further in view of Ohura, disclose the audio device according to claim 4, wherein the audio waveguide tube and the speaker are set at intervals (Litovsky et al.; Fig 1A; waveguide tube 45 and driver 42 are set at intervals), but do not expressly disclose the alternative limitation that the low-frequency passive diaphragm and the speaker are set at intervals. However, in the same field of endeavor, Ohura discloses an audio device wherein the low-frequency passive diaphragm and the speaker are set at intervals (Ohura; Fig 12; diaphragm 203 and speaker 202 are set at intervals). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio driver taught by Ohura as the audio driver in the system taught by Litovsky. The motivation to do so would have been to provide a speaker apparatus in which the timbre is less likely to change due to variations in the relative positions of the speaker and the ear (Ohura; Para [0053]).

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Litovsky et al. (US 10,531,186 B1) in view of Torres et al. (US 2020/0037097 A1), further in view of Woefl (US 2018/0041837 A1), and further in view of Mehra et al. (US 2021/0306744 A1).
Regarding claim 6, Litovsky et al. in view of Torres et al. disclose the audio device according to claim 2, but do not expressly disclose wherein the audio device is applied to a VR device, and the audio device further comprises: a positioning unit, configured to locate a relative position relationship between the speaker and a target part, the target part comprising the head and/or ears; the computing unit being specifically configured to obtain a first position of a virtual audio source in a space coordinate system of the VR device; obtain a second position of the target part in the space coordinate system, and determine a third position of the speaker in the space coordinate system based on the relative position relationship between the speaker and the target part; and generate the target sound signal based on the target HRTF at the first position, the HRTF at the third position, and the audio data.

However, in the same field of endeavor, Woefl discloses an audio device further comprising: a positioning unit, configured to locate a relative position relationship between the speaker and a target part, the target part comprising the head and/or ears (Woefl; Para [0035]-[0036]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the positioning unit taught by Woefl as the positioning unit in the system taught by Litovsky. The motivation to do so would have been to reduce perceivable differences in tonality and loudness (Woefl; Para [0029]).
Moreover, in the same field of endeavor, Mehra et al. disclose an audio device wherein the audio device is applied to a VR device (Mehra et al.; Para [0016]), and the audio device further comprises: the computing unit specifically configured to obtain a first position of a virtual audio source in a space coordinate system of the VR device (Mehra et al.; Para [0015], [0031]; SLAM interpreted as the space coordinate system of the VR device); obtain a second position of the target part in the space coordinate system (Mehra et al.; Para [0015]-[0018]), and determine a third position of the speaker in the space coordinate system based on the relative position relationship between the speaker and the target part (Mehra et al.; Para [0015], [0031]); and generate the target sound signal based on the target HRTF at the first position, the HRTF at the third position, and the audio data (Mehra et al.; Para [0015], [0024]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio sensor taught by Mehra as the audio sensor in the system taught by Litovsky. The motivation to do so would have been to provide for dynamic determination of personalized acoustic transfer functions for a user (Mehra et al.; Para [0002]).
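The coordinate bookkeeping recited in claim 6 (a tracked head pose in the VR coordinate system plus a fixed speaker-to-head offset yielding the speaker's position) reduces to a rigid-body transform. A minimal 2-D sketch follows; the numbers and the helper function are the editor's invention, not from the application or the cited art:

```python
import math

# Editor's sketch of the claim-6 position chain: the speaker's position
# in the VR device's space coordinate system (the "third position") is
# derived from the tracked target-part pose (the "second position") plus
# the speaker's fixed offset relative to it. 2-D, yaw-only, for brevity.

def speaker_world_position(head_pos, head_yaw_rad, offset):
    """Rotate the head-relative speaker offset by the head yaw, then
    translate by the head position. All coordinates are (x, y)."""
    c, s = math.cos(head_yaw_rad), math.sin(head_yaw_rad)
    dx, dy = offset
    return (head_pos[0] + c * dx - s * dy,
            head_pos[1] + s * dx + c * dy)

# Head at (1.0, 2.0) facing along +x, speaker 0.2 m ahead of the head:
third_position = speaker_world_position((1.0, 2.0), 0.0, (0.2, 0.0))
```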
Regarding claim 13, Litovsky et al. in view of Torres et al. disclose the working method of the audio device according to claim 11, wherein the audio device is a neck hanging audio device (Litovsky et al.; Fig 1B) and the sound chamber is located within the neck hanging portion of the neck hanging audio device (Litovsky et al.; Fig 1A; waveguide 44 is located within the neck hanging portion of the neck hanging audio device); but do not expressly disclose wherein the audio device is applied to a VR device, and the audio device further comprises: a positioning unit, configured to locate a relative position relationship between the speaker and a target part, the target part comprising the head and/or ears; the computing unit being specifically configured to obtain a first position of a virtual audio source in a space coordinate system of the VR device; obtain a second position of the target part in the space coordinate system, and determine a third position of the speaker in the space coordinate system based on the relative position relationship between the speaker and the target part; and generate the target sound signal based on the target HRTF at the first position, the HRTF at the third position, and the audio data.

Nor do they expressly disclose that the working method further comprises: locating a relative position relationship between the speaker and a target part, the target part comprising the head and/or ears; and that the processing of the audio data based on the target HRTF and generating a target sound signal comprises: obtaining a first position of a virtual audio source in a space coordinate system of the VR device; obtaining a second position of the target part in the space coordinate system, and determining a third position of the speaker in the space coordinate system based on the relative position relationship between the speaker and the target part; and generating the target sound signal based on the target HRTF at the first position, the HRTF at the third position, and the audio data.
However, in the same field of endeavor, Woefl discloses an audio device further comprising: a positioning unit, configured to locate a relative position relationship between the speaker and a target part, the target part comprising the head and/or ears (Woefl; Para [0035]-[0036]); the working method further comprising: locating a relative position relationship between the speaker and a target part, the target part comprising the head and/or ears (Woefl; Para [0035]-[0036]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the positioning unit taught by Woefl as the positioning unit in the system taught by Litovsky. The motivation to do so would have been to reduce perceivable differences in tonality and loudness (Woefl; Para [0029]).

Moreover, in the same field of endeavor, Mehra et al. disclose an audio device wherein the audio device is applied to a VR device (Mehra et al.; Para [0016]), the computing unit being specifically configured to obtain a first position of a virtual audio source in a space coordinate system of the VR device (Mehra et al.; Para [0015], [0031]; SLAM interpreted as the space coordinate system of the VR device); obtain a second position of the target part in the space coordinate system (Mehra et al.; Para [0015]-[0018]), and determine a third position of the speaker in the space coordinate system based on the relative position relationship between the speaker and the target part (Mehra et al.; Para [0015], [0031]); and generate the target sound signal based on the target HRTF at the first position, the HRTF at the third position, and the audio data (Mehra et al.; Para [0015], [0024]).

Mehra et al. further disclose that the processing of the audio data based on the target HRTF and generating a target sound signal comprises: obtaining a first position of a virtual audio source in a space coordinate system of the VR device (Mehra et al.; Para [0015], [0031]; SLAM interpreted as the space coordinate system of the VR device); obtaining a second position of the target part in the space coordinate system (Mehra et al.; Para [0015]-[0018]), and determining a third position of the speaker in the space coordinate system based on the relative position relationship between the speaker and the target part (Mehra et al.; Para [0015], [0031]); and generating the target sound signal based on the target HRTF at the first position, the HRTF at the third position, and the audio data (Mehra et al.; Para [0015], [0024]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio sensor taught by Mehra as the audio sensor in the system taught by Litovsky. The motivation to do so would have been to provide for dynamic determination of personalized acoustic transfer functions for a user (Mehra et al.; Para [0002]).

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Litovsky et al. (US 10,531,186 B1) in view of Torres et al. (US 2020/0037097 A1), further in view of Woefl (US 2018/0041837 A1), further in view of Mehra et al. (US 2021/0306744 A1), and further in view of DeSalvo et al. (US 11,226,406 B1).
Regarding claim 7, Litovsky et al. in view of Torres et al., further in view of Mehra et al., and further in view of Woefl disclose the audio device according to claim 6, but do not expressly disclose wherein the positioning unit comprises at least one of the following: a transmitting and receiving unit, comprising a transmitting unit located on a head-mounted display of the VR device and a receiving unit located on the neck hanging portion, the transmitting unit configured to transmit a target signal, and the receiving unit configured to determine a relative position relationship between the head-mounted display and the speaker based on the received signal, and to determine the relative position relationship between the speaker and the target part based on the relative position relationship between the head-mounted display and the speaker, the target signal comprising at least one of the following: non-sinusoidal narrow pulse, ultrasound, optical signal, or electromagnetic wave; an image processing unit set at the neck hanging portion, configured to obtain an image of the target part and determine the relative position relationship between the speaker and the target part based on the image; or an inertial sensor IMU set at the neck hanging portion, configured to record relative rotation and displacement between the head-mounted display of the VR device and the neck hanging portion, determine the relative position between the head-mounted display and the speaker based on the recorded relative rotation and displacement, and determine the relative position relationship between the speaker and the target part based on the relative position between the head-mounted display and the speaker.
However, in the same field of endeavor, DeSalvo et al. disclose the audio device wherein the positioning unit comprises at least one of the following: a transmitting and receiving unit, comprising a transmitting unit located on a head-mounted display of the VR device and a receiving unit located on the neck hanging portion (DeSalvo et al.; col 6, lines 10-45), the transmitting unit configured to transmit a target signal, and the receiving unit configured to determine a relative position relationship between the head-mounted display and the speaker based on the received signal (DeSalvo et al.; col 16, lines 20-45), and to determine the relative position relationship between the speaker and the target part based on the relative position relationship between the head-mounted display and the speaker (DeSalvo et al.; Fig 3; col 16, lines 20-45). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the positioning unit taught by DeSalvo as the positioning unit in the system taught by Litovsky. The motivation to do so would have been to accurately measure small distances (DeSalvo et al.; col 2, lines 55-65).
Moreover, in the same field of endeavor, Woefl discloses an audio device further comprising: the means to determine the relative position relationship between the speaker and the target part based on the relative position relationship between the head-mounted display and the speaker (Woefl; Para [0035]-[0036]); the target signal comprising at least one of the following: non-sinusoidal narrow pulse, ultrasound, optical signal, or electromagnetic wave (Woefl; Para [0031]); an image processing unit set at the neck hanging portion, configured to obtain an image of the target part and determine the relative position relationship between the speaker and the target part based on the image (Woefl; Para [0031]-[0033]); and an inertial sensor IMU set at the neck hanging portion, configured to record relative rotation and displacement between the head-mounted display of the VR device and the neck hanging portion (Woefl; Para [0030]-[0031]), determine the relative position between the head-mounted display and the speaker based on the recorded relative rotation and displacement (Woefl; Para [0034]-[0036]), and determine the relative position relationship between the speaker and the target part based on the relative position between the head-mounted display and the speaker (Woefl; Para [0034]-[0037]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the positioning unit taught by Woefl as the positioning unit in the system taught by Litovsky. The motivation to do so would have been to reduce perceivable differences in tonality and loudness (Woefl; Para [0029]).
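Stepping back from the citation mapping: the HRTF-based processing at the heart of claims 1, 6, 11, and 13 (select per-ear transfer functions for the virtual source position, then filter the audio through them) can be sketched as below. This is an editor's illustration only; the toy impulse responses and function names are invented, not taken from the application or any cited reference.

```python
# Editor's sketch of HRTF-style binaural rendering: a mono source is
# convolved with the left- and right-ear head-related impulse responses
# (HRIRs) chosen for the virtual source position. The tiny HRIRs below
# are invented for illustration; real sets are measured or personalized.

def convolve(signal, impulse_response):
    """Direct-form FIR convolution (time-domain HRIR filtering)."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def render_binaural(mono, hrir_left, hrir_right):
    """Return (left, right) target sound signals for one virtual source."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy source slightly to the listener's left: louder and earlier at the
# left ear than at the right ear.
left, right = render_binaural([1.0, 0.5, 0.25],
                              hrir_left=[0.9, 0.1],
                              hrir_right=[0.0, 0.6, 0.1])
```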
Regarding claim 14, Litovsky et al. in view of Torres et al., further in view of Woefl, and further in view of Mehra et al. disclose the working method of the audio device according to claim 13, but do not expressly disclose wherein the positioning unit comprises at least one of the following: a transmitting and receiving unit, comprising a transmitting unit located on a head-mounted display of the VR device and a receiving unit located on the neck hanging portion, the transmitting unit configured to transmit a target signal, and the receiving unit configured to determine a relative position relationship between the head-mounted display and the speaker based on the received signal, and to determine the relative position relationship between the speaker and the target part based on the relative position relationship between the head-mounted display and the speaker, the target signal comprising at least one of the following: non-sinusoidal narrow pulse, ultrasound, optical signal, or electromagnetic wave; an image processing unit set at the neck hanging portion, configured to obtain an image of the target part and determine the relative position relationship between the speaker and the target part based on the image; or an inertial sensor IMU set at the neck hanging portion, configured to record relative rotation and displacement between the head-mounted display of the VR device and the neck hanging portion, determine the relative position between the head-mounted display and the speaker based on the recorded relative rotation and displacement, and determine the relative position relationship between the speaker and the target part based on the relative position between the head-mounted display and the speaker.

Nor do they expressly disclose wherein the locating of a relative position relationship between the speaker and a target part comprises at least one of the following: using the transmitting unit to transmit a target signal, using the receiving unit to determine the relative position relationship between the head-mounted display and the speaker based on the received signal, and determining the relative position relationship between the speaker and the target part based on the relative position between the head-mounted display and the speaker, the target signal comprising at least one of the following: non-sinusoidal narrow pulse, ultrasound, optical signal, or electromagnetic wave; using the image processing unit to obtain an image of the target part, and determining the relative position relationship between the speaker and the target part based on the image; or using the inertial sensor IMU to record relative rotation and displacement between the head-mounted display of the VR device and the neck hanging portion, determining the relative position between the head-mounted display and the speaker based on the recorded relative rotation and displacement, and determining the relative position relationship between the speaker and the target part based on the relative position between the head-mounted display and the speaker.
However, in the same field of endeavor, DeSalvo et al. disclose the audio device wherein the positioning unit comprises at least one of the following: a transmitting and receiving unit, comprising a transmitting unit located on a head-mounted display of the VR device and a receiving unit located on the neck hanging portion (DeSalvo et al.; Fig 3; col 6, lines 10-45), the transmitting unit configured to transmit a target signal, and the receiving unit configured to determine a relative position relationship between the head-mounted display and the speaker based on the received signal (DeSalvo et al.; col 16, lines 20-45), and to determine the relative position relationship between the speaker and the target part based on the relative position relationship between the head-mounted display and the speaker (DeSalvo et al.; Fig 3; col 16, lines 20-45); wherein the locating of a relative position relationship between the speaker and a target part comprises at least one of the following: using the transmitting unit to transmit a target signal, using the receiving unit to determine the relative position relationship between the head-mounted display and the speaker based on the received signal (DeSalvo et al.; col 16, lines 20-45), and determining the relative position relationship between the speaker and the target part based on the relative position between the head-mounted display and the speaker (DeSalvo et al.; Fig 3; col 16, lines 20-45). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the positioning unit taught by DeSalvo as the positioning unit in the system taught by Litovsky. The motivation to do so would have been to accurately measure small distances (DeSalvo et al.; col 2, lines 55-65).
Moreover, in the same field of endeavor, Woefl discloses an audio device further comprising: the target signal comprising at least one of the following: non-sinusoidal narrow pulse, ultrasound, optical signal, or electromagnetic wave (Woefl; Para [0031]); an image processing unit set at the neck hanging portion, configured to obtain an image of the target part and determine the relative position relationship between the speaker and the target part based on the image (Woefl; Para [0031]-[0033]); an inertial sensor IMU set at the neck hanging portion, configured to record relative rotation and displacement between the head-mounted display of the VR device and the neck hanging portion (Woefl; Para [0030]-[0031]), determine the relative position between the head-mounted display and the speaker based on the recorded relative rotation and displacement (Woefl; Para [0034]-[0036]), and determine the relative position relationship between the speaker and the target part based on the relative position between the head-mounted display and the speaker (Woefl; Para [0034]-[0037]); using the image processing unit to obtain an image of the target part, and determining the relative position relationship between the speaker and the target part based on the image (Woefl; Para [0031]-[0033]); and using the inertial sensor IMU to record relative rotation and displacement between the head-mounted display of the VR device and the neck hanging portion (Woefl; Para [0030]-[0031]), determining the relative position between the head-mounted display and the speaker based on the recorded relative rotation and displacement (Woefl; Para [0034]-[0036]), and determining the relative position relationship between the speaker and the target part based on the relative position between the head-mounted display and the speaker (Woefl; Para [0034]-[0037]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the positioning unit taught by Woefl as the positioning unit in the system taught by Litovsky. The motivation to do so would have been to reduce perceivable differences in tonality and loudness (Woefl; Para [0029]).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Litovsky et al. (US 10,531,186 B1) in view of Torres et al. (US 2020/0037097 A1), and further in view of Ushakov (US 2016/0381453 A1).

Regarding claim 8, Litovsky et al. in view of Torres et al. disclose the audio device according to claim 2, but do not expressly disclose further comprising: an external interface, configured to connect to wired earphones. However, in the same field of endeavor, Ushakov discloses an audio device further comprising: an external interface, configured to connect to wired earphones (Ushakov; Fig 28; Para [0283]; audio device 1 further comprising an external interface configured to connect to wired earphones 3A and 3B). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the connection interface taught by Ushakov as the connection interface in the system taught by Litovsky. The motivation to do so would have been to assure convenient use without restricting the freedom of the user's movement (Ushakov; Para [0060]).

Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Litovsky et al. (US 10,531,186 B1) in view of Torres et al. (US 2020/0037097 A1), and further in view of Mehra et al. (US 2021/0306744 A1).

Regarding claim 9, Litovsky et al. in view of Torres et al. disclose the audio device according to claim 2, but do not expressly disclose further comprising: a microphone set at the neck hanging portion.
However, in the same field of endeavor, Mehra et al disclose an audio device further comprising a microphone set at the neck hanging portion (Mehra et al; Fig 3; microphones 320c, 320d on neck portion 305). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio sensor taught by Mehra as the audio sensor in the system taught by Litovsky. The motivation to do so would have been to provide for dynamic determination of personalized acoustic transfer functions for a user (Mehra et al; Para [0002]).

Regarding claim 10, Litovsky et al in view of Torres et al disclose an audio device according to claim 1, but do not expressly disclose a VR device comprising a head-mounted display. However, in the same field of endeavor, Mehra et al disclose a VR device comprising a head-mounted display (Mehra et al; Fig 3; VR display 300; Para [0018]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the head-mounted display taught by Mehra in the system taught by Litovsky. The motivation to do so would have been to provide for dynamic determination of personalized acoustic transfer functions for a user (Mehra et al; Para [0002]).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Litovsky et al (US 10,531,186 B1) in view of Torres et al (US 2020/0037097 A1), and further in view of Boyden (US 5,815,579).
Regarding claim 12, Litovsky et al in view of Torres et al disclose the working method of the audio device according to claim 11, but do not expressly disclose wherein the sound signal comprises a right channel signal and a left channel signal, and wherein, before outputting the target sound signal to the speaker, the method further comprises: eliminating a crosstalk signal generated by the right channel signal from the left channel signal, and eliminating a crosstalk signal generated by the left channel signal from the right channel signal.

However, in the same field of endeavor, Boyden discloses a method wherein the sound signal comprises a right channel signal and a left channel signal (Boyden; Fig 1), and wherein, before outputting the target sound signal to the speaker, the method further comprises eliminating a crosstalk signal generated by the right channel signal from the left channel signal and eliminating a crosstalk signal generated by the left channel signal from the right channel signal (Boyden; Fig 2; Fig 3; col 4, lines 40-60). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the crosstalk cancellation taught by Boyden to process the audio in the system taught by Litovsky. The motivation to do so would have been to improve high frequency smoothness (Boyden; col 3, lines 30-35).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUASSI A GANMAVO, whose telephone number is (571) 270-5761. The examiner can normally be reached M-F, 9 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn Edwards, can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KUASSI A GANMAVO/
Examiner, Art Unit 2692

/CAROLYN R EDWARDS/
Supervisory Patent Examiner, Art Unit 2692
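The crosstalk-cancellation step cited from Boyden in the claim 12 rejection can be pictured with a minimal sketch: each output channel subtracts an attenuated, delayed copy of the opposite channel. This is only a first-order illustration of the general technique; the gain `g`, the `delay` value, and the function name are hypothetical and are not taken from Boyden's disclosure.

```python
def cancel_crosstalk(left, right, g=0.3, delay=8):
    """Illustrative feedforward crosstalk cancellation.

    Subtracts an attenuated (gain g), delayed (delay samples, delay >= 1)
    copy of the opposing channel from each channel. A real system would
    derive g and delay from head geometry or measured transfer functions.
    """
    pad = [0.0] * delay
    # Zero-padded delayed copies of the opposing channels.
    d_right = pad + list(right[:len(right) - delay])
    d_left = pad + list(left[:len(left) - delay])
    out_left = [l - g * r for l, r in zip(left, d_right)]
    out_right = [r - g * l for r, l in zip(right, d_left)]
    return out_left, out_right
```

With a constant left channel and a silent right channel, the left output is unchanged while the right output carries only the (negated, attenuated) leakage-cancellation term after the delay elapses.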

Prosecution Timeline

Jul 12, 2024
Application Filed
Mar 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604127
INFORMATION HANDLING SYSTEM HEADSET WITH ADJUSTABLE HEADBAND TENSIONER
2y 5m to grant Granted Apr 14, 2026
Patent 12587781
Parametric Spatial Audio Rendering with Near-Field Effect
2y 5m to grant Granted Mar 24, 2026
Patent 12572319
SYSTEM AND METHOD FOR PLAYING AN AUDIO INDICATOR TO IDENTIFY A LOCATION OF A CEILING MOUNTED LOUDSPEAKER
2y 5m to grant Granted Mar 10, 2026
Patent 12556858
METHODS OF MAKING SIDE-PORT MICROELECTROMECHANICAL SYSTEM MICROPHONES
2y 5m to grant Granted Feb 17, 2026
Patent 12538089
Spatial Audio Rendering Point Extension
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
90%
With Interview (+20.3%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 593 resolved cases by this examiner. Grant probability derived from career allow rate.
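The headline projections are simple arithmetic over the examiner's career record (415 grants out of 593 resolved cases, plus the stated +20.3 point interview lift). A minimal sketch of that derivation follows; the additive combination of baseline probability and interview lift is an assumption about the tool's methodology, not something the report discloses.

```python
# Examiner career statistics taken from the report above.
granted = 415
resolved = 593

# Career allow rate, used as the baseline grant probability.
allow_rate = granted / resolved
print(f"Grant probability: {allow_rate:.0%}")    # 70%

# Interview lift (+20.3 percentage points among resolved cases with an
# interview). Adding it to the baseline is an assumed methodology.
interview_lift = 0.203
with_interview = min(allow_rate + interview_lift, 1.0)
print(f"With interview: {with_interview:.0%}")   # 90%
```

Both printed values match the figures shown in the Prosecution Projections panel.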
