Prosecution Insights
Last updated: April 19, 2026
Application No. 18/729,478

EARBUD SUPPORTING VOICE ACTIVITY DETECTION AND RELATED METHOD

Non-Final OA: §103, §112
Filed: Jul 16, 2024
Examiner: NGUYEN, SEAN H
Art Unit: 2691
Tech Center: 2600 — Communications
Assignee: LG Electronics Inc.
OA Round: 1 (Non-Final)

Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 86% (above average; 513 granted / 596 resolved; +24.1% vs TC avg)
Interview Lift: +4.9% (minimal; based on resolved cases with an interview)
Avg Prosecution: 2y 1m (fast prosecutor; 13 currently pending)
Total Applications: 609 (career history, across all art units)
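The headline figures above are simple ratios. A minimal Python sketch, assuming the card is computed from raw granted/resolved counts; the rounding rules are my assumption, not the tool's documented formula:

```python
# Hypothetical reconstruction of the examiner-card arithmetic.
# The counts come from the card above; the formulas are assumptions.

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate_pct(513, 596)   # 513 granted of 596 resolved
with_interview = career + 4.9       # observed interview lift from the card

print(round(career))          # 86
print(round(with_interview))  # 91
```

This also explains why the "+4.9%" lift is shown as taking 86% to 91%: the two rounded figures come from the same unrounded base rate.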

Statute-Specific Performance

§101: 0.5% (-39.5% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§102: 31.7% (-8.3% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 596 resolved cases
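The per-statute percentages read as this examiner's share of rejections by statute, and each delta appears to be that share minus the Tech Center average. A minimal sketch of the delta formatting; the 40.0% baseline here is back-computed from the displayed §103 figures and is an assumption, not a published USPTO number:

```python
# Hypothetical sketch of the "vs TC avg" delta shown in the statute table.
# The baseline value is an assumption inferred from the card, not real data.

def delta_vs_tc(examiner_pct: float, tc_avg_pct: float) -> str:
    """Format an examiner-vs-Tech-Center delta like the card above."""
    return f"{examiner_pct - tc_avg_pct:+.1f}% vs TC avg"

# Example: a 47.9% §103 share against an assumed 40.0% TC average
print(delta_vs_tc(47.9, 40.0))  # +7.9% vs TC avg
```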

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 10 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 10 recites the limitation "a first ADC" in line 4. There is insufficient antecedent basis for this limitation in the claim. The Examiner believes the Applicant meant "the second signal includes a digital signal obtained by passing an analog signal input through the bone conduction VPU sensor through a second ADC," and has interpreted claim 10 as such.

Claim 18 is a substantial duplicate of claim 16.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pucci et al. (KR 2015-0080645), herein D1, in view of (KR 2012-0125986), herein D2.

Regarding claim 1, D1 discloses an earbud for supporting voice activity detection (VAD) (earbud of D1: Figs. 7a-8b), the earbud comprising: a first filter unit configured to filter a first signal input through a microphone (filter unit P10a configured to filter a first signal input through microphone ML10, D1: Fig. 1b); a first VAD unit configured to perform VAD on a signal passing through the first filter unit (VAD unit 16 configured to perform VAD on a signal passing through first filter unit P10a, D1: Fig. 1b, [0031], [0035]-[0038]); a second filter unit configured to filter a second signal input through a voice pick up (VPU) sensor (second filter unit P10b via second microphone MS10 and second microphone signal MS10, interpreted to meet the VPU sensor, D1: Fig. 1b); a second VAD unit configured to perform VAD on a signal passing through the second filter unit (second VAD unit 20 performs speech recognition on a signal that has passed through the second stage P10b, D1: [0031], [0035]-[0038]); and a determination unit configured to compare a detection result of the first VAD unit and a detection result of the second VAD unit to determine whether there is utterance (speech estimator SE10 estimates speech (utterance) by comparing the result of detection by second VAD unit 20 with the result of detection by first VAD unit 16, D1: [0071]-[0074], Fig. 5b), but lacks wherein the second signal input is through a bone conduction voice pick up sensor.

Nevertheless, it is well known in the voice activity detection art to utilize a bone conduction voice pick up sensor for the type of microphone used, as demonstrated by D2 (external VAD 14 using a bone conduction type microphone, D2: [0031], Fig. 1). Therefore, it would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the second pickup sensor of D1 to be a bone conduction voice pick up sensor, as demonstrated by D2, in order to more accurately gather sound from a user.

Regarding claim 2, in the combination of D1 and D2, D1 discloses wherein the first VAD unit and the second VAD unit simultaneously detect VAD (speech estimator SE10 estimates speech by combining the result of detection by the second VAD unit with the result of detection by the first VAD unit, interpreted to meet simultaneous detection of VAD by the first VAD unit and the second VAD unit, D1: Figs. 5b, 7a-8b, [0071]-[0074]).
Regarding claim 3, in the combination of D1 and D2, D1 discloses wherein the detection result of the first VAD unit and the detection result of the second VAD unit are either detection of utterance or non-detection of utterance (the detection results of the first VAD unit and the second VAD unit identify either voice activity or a lack of voice activity, D1: [0071]-[0074]), and the determination unit determines that there is utterance when both the detection result of the first VAD unit and the detection result of the second VAD unit are detection of utterance (determination unit SE10 determines utterance by combining the result of detection by the second VAD unit with the result of detection by the first VAD unit, D1: Figs. 5b, 7a-8b, [0071]-[0074]).

Regarding claim 4, in the combination of D1 and D2, D1 discloses wherein the first filter unit and the second filter unit include a high pass filter (HPF) (at stages P10a and P10b, high pass filtering is conducted, D1: Fig. 1b, [0031]).

Regarding claim 6, while the combination of D1 and D2 does not specifically teach wherein, based on that the determination unit determines that there is utterance, content being played on the earbud is stopped, it is well known in the earbud art to stop content being played when utterance is detected. Therefore, it would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the earbud of D1 and D2 to stop content being played when utterance is detected in order to allow a user to hear a person speaking to them. The Examiner takes Official Notice.

Regarding claim 7, while the combination of D1 and D2 does not specifically teach wherein a volume of the earbud is lowered to a preset level based on that the determination unit determines that there is utterance, it is well known in the art to lower the volume of content being played by an earbud when utterance is detected.
Therefore, it would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the earbud of D1 and D2 to lower the volume of content being played to a preset level when utterance is detected in order to allow a user to hear a person speaking to them. The Examiner takes Official Notice.

Regarding claim 8, in the combination of D1 and D2, D1 discloses wherein the first filter unit, the second filter unit, the first VAD unit, the second VAD unit, and the determination unit are provided in a digital signal processor (DSP) unit (the first filter unit, second filter unit, first VAD unit, second VAD unit, and determination unit SE10 are provided in a DSP, D1: [0136]), and the DSP is provided in the earbud (processing electronics are provided in the earbud, D1: Figs. 7a-8b).

Regarding claim 9, the combination of D1 and D2 discloses wherein the microphone and the bone conduction VPU sensor are provided in the earbud (the microphones of D1 are provided in an earbud, D1: Fig. 1a; thus the bone conduction VPU sensor taught by D2 would be incorporated in the earbud as well).

Regarding claim 10, while D1 and D2 do not specifically teach wherein the first signal includes a digital signal obtained by passing an analog signal input through the microphone through a first ADC, and the second signal includes a digital signal obtained by passing an analog signal input through the bone conduction VPU sensor through a first ADC, D1 does teach pre-processing the first and second signals through first and second corresponding ADCs (first signal passed through ADC at C10a and second signal passed through ADC at C10b, Fig. 1b, [0031], [0032]).
Therefore, it would have been obvious to a person having ordinary skill in the art to modify the first and second signals of D1 and D2 to execute the pre-processing with corresponding ADCs, so that the first signal includes a digital signal obtained by passing an analog signal input through the microphone through a first ADC, and the second signal includes a digital signal obtained by passing an analog signal input through the bone conduction VPU sensor through a second ADC, in order to execute digital signal processing sooner.

Regarding claim 20, D1 discloses a method of determining voice activity detection (VAD) (method of determining voice activity detection, Figs. 1-21), the method comprising: filtering a first signal input through a microphone (filter unit P10a configured to filter a first signal input through microphone ML10, D1: Fig. 1b); performing VAD on the filtered first signal (VAD unit 16 configured to perform VAD on a signal passing through first filter unit P10a, D1: Fig. 1b, [0031], [0035]-[0038]); filtering a second signal input through a voice pick up (VPU) sensor (second filter unit P10b via second microphone MS10 and second microphone signal MS10, interpreted to meet the VPU sensor, D1: Fig. 1b); performing VAD on the filtered second signal (second VAD unit 20 performs speech recognition on a signal that has passed through the second stage P10b, D1: [0031], [0035]-[0038]); and comparing a VAD detection result related to the first signal and a VAD detection result related to the second signal to determine whether there is utterance (speech estimator SE10 estimates speech (utterance) by comparing the result of detection by second VAD unit 20 with the result of detection by first VAD unit 16, D1: [0071]-[0074], Fig. 5b), but lacks wherein the VPU sensor is a bone conduction VPU sensor.
Nevertheless, it is well known in the voice activity detection art to utilize a bone conduction voice pick up sensor for the type of microphone used, as demonstrated by D2 (external VAD 14 using a bone conduction type microphone, D2: [0031], Fig. 1). Therefore, it would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the second pickup sensor of D1 to be a bone conduction voice pick up sensor, as demonstrated by D2, in order to more accurately gather sound from a user.

Claims 11-16 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Border et al. (US 2013/0278631), herein D3, in view of Pucci et al. (KR 2015-0080645), herein D1, and (KR 2012-0125986), herein D2.

Regarding claim 11, D3 discloses a head mounted display (HMD) for supporting voice activity detection (VAD) (head mounted display of D3: Figs. 1-206; configured to receive voice commands and thus to support voice activity detection, D3: [0538]), the HMD comprising: a display unit configured to provide an image to a user (eyepiece displays images, D3: [0028], [0279]); a wearing unit configured to allow the display unit to be worn on a head of a user (the HMD has a wearing unit that allows the display unit in the eyepiece to be worn on a head of a user via temple pieces 122, D3: Fig. 1); an earbud configured to provide a sound related to the image to the user (earbud 120 for providing sound related to the image to the user, D3: Fig. 1, [0463], [0589]); but lacks a first filter unit configured to filter a first signal input through a microphone; a first VAD unit configured to perform VAD on a signal passing through the first filter unit; a second filter unit configured to filter a second signal input through a bone conduction voice pick up (VPU) sensor; a second VAD unit configured to perform VAD on a signal passing through the second filter unit; and a determination unit configured to compare a detection result of the first VAD unit and a detection result of the second VAD unit to determine whether there is utterance.

Nevertheless, D1 teaches a first filter unit configured to filter a first signal input through a microphone (filter unit P10a configured to filter a first signal input through microphone ML10, D1: Fig. 1b); a first VAD unit configured to perform VAD on a signal passing through the first filter unit (VAD unit 16 configured to perform VAD on a signal passing through first filter unit P10a, D1: Fig. 1b, [0031], [0035]-[0038]); a second filter unit configured to filter a second signal input through a voice pick up (VPU) sensor (second filter unit P10b via second microphone MS10 and second microphone signal MS10, interpreted to meet the VPU sensor, D1: Fig. 1b); a second VAD unit configured to perform VAD on a signal passing through the second filter unit (second VAD unit 20 performs speech recognition on a signal that has passed through the second stage P10b, D1: [0031], [0035]-[0038]); and a determination unit configured to compare a detection result of the first VAD unit and a detection result of the second VAD unit to determine whether there is utterance (speech estimator SE10 estimates speech (utterance) by comparing the result of detection by second VAD unit 20 with the result of detection by first VAD unit 16, D1: [0071]-[0074], Fig. 5b).
Therefore, it would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the HMD of D3 to have a first filter unit configured to filter a first signal input through a microphone; a first VAD unit configured to perform VAD on a signal passing through the first filter unit; a second filter unit configured to filter a second signal input through a voice pick up (VPU) sensor; a second VAD unit configured to perform VAD on a signal passing through the second filter unit; and a determination unit configured to compare a detection result of the first VAD unit and a detection result of the second VAD unit to determine whether there is utterance, as taught by D1, in order to improve a user's communication experience by reducing background noise (D1: [0004]).

Furthermore, D2 teaches the use of a bone conduction voice pick up sensor for the type of microphone used (external VAD 14 using a bone conduction type microphone, D2: [0031], Fig. 1). Therefore, it would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the second pickup sensor of D3 and D1 to be a bone conduction voice pick up sensor, as demonstrated by D2, in order to more accurately gather sound from a user.

Regarding claim 12, in the combination of D3, D1 and D2, D1 discloses wherein the first filter unit, the second filter unit, the first VAD unit, the second VAD unit, and the determination unit are provided in a DSP unit (the first filter unit, second filter unit, first VAD unit, second VAD unit, and determination unit SE10 are provided in a DSP, D1: [0136]), and the DSP unit is provided in either the HMD or the earbud (processing electronics are provided in the earbud, D1: Figs. 7a-8b).
Regarding claim 13, in the combination of D3, D1 and D2, D1 and D2 disclose wherein the microphone and the bone conduction VPU sensor are provided in the earbud (the microphones of D1 are provided in an earbud, D1: Fig. 1a; thus the bone conduction VPU sensor taught by D2 would be incorporated in the earbud as well).

Regarding claim 14, in the combination of D3, D1 and D2, D1 discloses wherein the first VAD unit and the second VAD unit simultaneously detect VAD (speech estimator SE10 estimates speech by combining the result of detection by the second VAD unit with the result of detection by the first VAD unit, interpreted to meet simultaneous detection of VAD by the first VAD unit and the second VAD unit, D1: Figs. 5b, 7a-8b, [0071]-[0074]).

Regarding claim 15, in the combination of D3, D1 and D2, D1 discloses wherein the detection result of the first VAD unit and the detection result of the second VAD unit are either detection of utterance or non-detection of utterance (the detection results of the first VAD unit and the second VAD unit identify either voice activity or a lack of voice activity, D1: [0071]-[0074]), and the determination unit determines that there is utterance when both the detection result of the first VAD unit and the detection result of the second VAD unit are detection of utterance (determination unit SE10 determines utterance by combining the result of detection by the second VAD unit with the result of detection by the first VAD unit, D1: Figs. 5b, 7a-8b, [0071]-[0074]).

Regarding claims 16 and 18, while the combination of D3, D1 and D2 does not specifically teach wherein, based on that the determination unit determines that there is utterance, content being played on the earbud is stopped, it is well known in the earbud art to stop content being played when utterance is detected.
Therefore, it would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the earbud of D3, D1 and D2 to stop content being played when utterance is detected in order to allow a user to hear a person speaking to them. The Examiner takes Official Notice.

Regarding claim 19, while the combination of D3, D1 and D2 does not specifically teach wherein a volume of the earbud is lowered to a preset level based on that the determination unit determines that there is utterance, it is well known in the art to lower the volume of content being played by an earbud when utterance is detected. Therefore, it would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the earbud of D3, D1 and D2 to lower the volume of content being played to a preset level when utterance is detected in order to allow a user to hear a person speaking to them. The Examiner takes Official Notice.

Allowable Subject Matter

Claims 5 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN H NGUYEN, whose telephone number is (571) 270-5728. The examiner can normally be reached M-F, 10 AM-6 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at (571) 272-7503.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEAN H NGUYEN/
Primary Examiner, Art Unit 2691

Prosecution Timeline

Jul 16, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599260: AUDIO APPARATUS IN DRINKWARE (2y 5m to grant; granted Apr 14, 2026)
Patent 12604153: VIRTUAL CONTENT (2y 5m to grant; granted Apr 14, 2026)
Patent 12598417: WAVEGUIDES FOR SIDE-FIRING AUDIO TRANSDUCERS (2y 5m to grant; granted Apr 07, 2026)
Patent 12598416: VIBRATION EXCITER WITH ELASTIC BRACKETS (2y 5m to grant; granted Apr 07, 2026)
Patent 12593175: ELECTRO-ACOUSTIC TRANSDUCER (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 91% (+4.9%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 596 resolved cases by this examiner. Grant probability derived from career allow rate.
