Prosecution Insights
Last updated: April 19, 2026
Application No. 18/735,853

MICROPHONE ASSEMBLY AND METHOD FOR PROVIDING HEARING ASSISTANCE

Non-Final OA §103
Filed: Jun 06, 2024
Examiner: YU, NORMAN
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Sonova AG
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (above average; 525 granted / 598 resolved; +25.8% vs TC avg)
Interview Lift: +13.5% (moderate, roughly +14%) on resolved cases with interview
Avg Prosecution: 2y 1m (fast prosecutor); 35 currently pending
Total Applications: 633 across all art units (career history)
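The headline figures in this panel are mutually consistent and can be reproduced with simple arithmetic. The inputs below are the numbers shown above; the rounding behavior is an assumption about how the dashboard derives its displayed percentages.

```python
# Cross-check of the examiner-statistics panel (inputs taken from the panel above).

granted = 525        # career grants
resolved = 598       # career resolved cases
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")    # ~87.8%, displayed as 88%

total_apps = 633     # total applications across all art units
pending = total_apps - resolved
print(f"Currently pending: {pending}")           # 35, matching the panel
```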

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 51.8% (+11.8% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 598 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: MICROPHONE ASSEMBLY AND METHOD FOR PROVIDING HEARING ASSISTANCE WITH OPTIMIZED DIRECTIVITY

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 10-12, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 2013/0272097) in view of Kubota (US 2015/0373452).

Regarding claim 1, Kim teaches a microphone assembly comprising: at least three spaced apart microphones defining a microphone plane (Kim figures 17 and 19, MC10, MC20, and MC30.
See also figure 11B), each configured to capture an input audio signal from an audio source (Kim figures 17, 19 and ¶0214, “source directions d1, d2, and d3”); an audio signal processing unit for processing the input audio signals in a manner so as to generate an output audio signal with a directivity (Kim ¶0276, “apply a spatially directive filter (e.g., a beamformer and/or null beamformer) to the multichannel signal to concentrate the energies of sound components arriving from different directions into different corresponding output channels of the spatially directive filter and/or to attenuate energy of a sound component arriving from a particular direction”); an audio source direction estimation unit (Kim figures 32A and 24A, apparatus MF320) including: a first direction of arrival estimation module (Kim figure 32A, means FB100a and figure 24B, TB100a) for estimating a first angle of incidence between the audio source and a first axis (Kim figures 20A-22E, 22A-22E and ¶0226, “DOA estimates (θx, θy)”) defined by a first pair of the microphones (Kim figures 20A-22E, 22A-22E, any pair formed by MC10, MC20 and MC30) with regard to a base point centered between the microphones of the first pair of microphones (Kim figures 20A-22E, 22A-22E. The midpoint between MC10 and MC20 is on an axis. The midpoint between MC20 and MC30 is also on an axis); a second direction of arrival estimation module (Kim figure 32A, means FB100b and figure 24B, TB100b) for estimating a second angle of incidence between the audio source and a second axis (Kim figure 19A, θx, θy) defined by a second pair of the microphones with regard to a base point centered between the microphones of the second pair of microphones (Kim figures 20A-22E, 22A-22E. The midpoint between MC10 and MC20 is on an axis. The midpoint between MC20 and MC30 is also on an axis), the second pair of microphones being different from the first pair of microphones and the first axis and the second axis having different directions (Kim figure 19A, θx, θy); and an elevation estimation module for estimating an angle of elevation of the audio source with regard to the microphone plane based on the estimated first angle of incidence and the estimated second angle of incidence (Kim figure 32A, means FB400 and figure 24B, TB400); and a control unit for using the estimated angle of elevation to control a directivity parameter of the audio signal processing unit on which the directivity of the output audio signal depends (Kim figure 49A and ¶0280-0282, 0295, applying and updating a spatially directive filter based on a (change in) angle of arrival, wherein the set of filter (i.e. beamformer) coefficients corresponds to the “directivity parameter”); however, Kim does not explicitly teach a control unit for using the estimated angle of elevation to control a directivity parameter of the audio signal processing unit on which the directivity of the output audio signal depends.
Kubota further teaches a control unit for using the estimated angle of elevation (Kubota figure 3 and ¶0068, “a vertical position coordinate and a horizontal position coordinate expressed on a pixel basis are respectively converted into a vertical deviation angle δx and a horizontal deviation angle δy.” See also ¶0148, which discloses determining the angles with a microphone) to control a directivity parameter of the audio signal processing unit on which the directivity of the output audio signal depends (Kubota figures 3-4 and ¶0073, “Then, the directional control angle θ in the elevation/depression angle direction is obtained on the basis of the deviation angle δ, the relative positional relationship between the camera 15 and the speaker set 11, and the emission characteristics of the sound wave emitted from the speaker apparatus 10”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Kubota to improve the known microphone assembly of Kim to achieve the predictable result of improved directionality of the audio output.

Regarding claim 2, Kim in view of Kubota teaches wherein the control unit is configured to control said directivity parameter such that the directivity of the output audio signal is reduced during times when the estimated angle of elevation is found to be above a threshold (Kim ¶0196, ¶0198).
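The elevation-from-two-DOAs geometry that the rejection maps against claim 1 can be illustrated with a short sketch. This assumes the two microphone-pair axes are orthogonal and lie in the microphone plane (an illustrative simplification; the cited figures do not require orthogonal axes), so the direction cosines toward the two axes determine the elevation via cos²θ1 + cos²θ2 + sin²φ = 1:

```python
import math

def elevation_from_doa(theta1_deg, theta2_deg):
    """Estimate the elevation angle of a source above the microphone
    plane from two angle-of-incidence estimates, one per microphone pair.

    Assumes the two pair axes are orthogonal and in-plane, so the unit
    direction vector d satisfies (d.x)^2 + (d.y)^2 + (d.z)^2 = 1 with
    d.x = cos(theta1), d.y = cos(theta2), d.z = sin(elevation).
    """
    c1 = math.cos(math.radians(theta1_deg))
    c2 = math.cos(math.radians(theta2_deg))
    s2 = 1.0 - c1 * c1 - c2 * c2
    if s2 < 0:           # inconsistent estimates (e.g. measurement noise)
        s2 = 0.0
    return math.degrees(math.asin(math.sqrt(s2)))

# A source broadside to both axes lies directly above the plane:
print(elevation_from_doa(90, 90))   # 90.0
# A source in the plane along the first axis:
print(elevation_from_doa(0, 90))    # 0.0
```

The two degenerate cases printed at the end show why at least two differently oriented pairs are needed: a single pair's estimate only constrains the source to a cone around its axis.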
Regarding claim 10, Kim teaches wherein: the audio source direction estimation unit includes a third direction of arrival estimation module for estimating a third angle of incidence between the audio source and a third axis defined by a third pair of the microphones with regard to a base point centered between the microphones of the third pair of microphones (Kim ¶0244 and figures 24B, 32B, TB100c, FB100c); and the elevation estimation module is configured to estimate the angle of elevation of the audio source based on the estimated first angle of incidence, the estimated second angle of incidence and the estimated third angle of incidence (Kim ¶0265, “such case elevation calculator B400 may be configured to estimate the angle of elevation based on information from the three DOA estimates”).

Regarding claim 11, Kim teaches wherein the elevation estimation module comprises: a first submodule, a second submodule and a third submodule, each of the submodules configured to provide for a pre-estimate of the angle of elevation of the audio source based on a different pair of the estimated first angle of incidence, the estimated second angle of incidence and the estimated third angle of incidence (Kim figure 32B, means FB100a, FB100b and FB100c), and an elevation fusion submodule configured to generate the estimate of the angle of elevation of the audio source from the pre-estimates of the angle of elevation of the audio source (Kim figure 32C, “such case elevation calculator B400 may be configured to estimate the angle of elevation based on information from the three DOA estimates”).
Regarding claim 12, Kim teaches wherein: each direction of arrival estimation module is configured to provide its estimated angle of incidence between the audio source and its axis as an uncertainty cone around its axis (Kim figures 17A, 19A and ¶0215, ¶0222, “cone of confusion”); and the elevation estimation module or submodule, respectively, is configured to provide the estimate or pre-estimate of the angle of elevation, respectively, from an intersection line of the uncertainty cones of the estimated first angle of incidence and the estimated second angle or of the respective pair of the estimated first angle of incidence, the estimated second angle of incidence and the estimated third angle of incidence, respectively (Kim figure 32C, “such case elevation calculator B400 may be configured to estimate the angle of elevation based on information from the three DOA estimates”).

Regarding claim 14, Kim teaches a hearing assistance system comprising: the microphone assembly of claim 1, further comprising a wireless interface for transmitting the output audio signal as an audio stream (Kim ¶0332, “a radio transmitter, which is configured to transmit an encoded audio signal which is based on audio information received via microphone MC10, MC20, and/or MC30 (e.g., based on an output signal produced by a spatially directive filter of apparatus AC100, AD100, MFC100, or MFD100) into a transmission channel as an RF communications signal that describes the encoded audio signal. Such a device may be configured to transmit and receive voice communications data wirelessly”); and a hearing device comprising a wireless interface for receiving the audio stream from the microphone assembly and an output transducer for stimulating a user's hearing according to the audio stream (Kim ¶0332, “device may be configured to transmit and receive voice communications data wirelessly via any one or more of the codecs referenced herein”).
Regarding claim 15, Kim teaches a method of providing an output audio signal from an audio source, comprising: providing a microphone assembly comprising at least three spaced apart microphones defining a microphone plane (Kim figures 17 and 19, MC10, MC20, and MC30. See also figure 11B); capturing, by each of the microphones, an input audio signal from the audio source (Kim figures 17, 19 and ¶0214, “source directions d1, d2, and d3”); estimating a first angle of incidence between the audio source and a first axis (Kim figures 20A-22E, 22A-22E and ¶0226, “DOA estimates (θx, θy)”) defined by a first pair of the microphones (Kim figures 20A-22E, 22A-22E, any pair formed by MC10, MC20 and MC30) with regard to a base point centered between the microphones of the first pair of microphones (Kim figures 20A-22E, 22A-22E. The midpoint between MC10 and MC20 is on an axis. The midpoint between MC20 and MC30 is also on an axis); estimating a second angle of incidence between the audio source and a second axis (Kim figure 19A, θx, θy) defined by a second pair of the microphones with regard to a base point centered between the microphones of the second pair of microphones (Kim figures 20A-22E, 22A-22E. The midpoint between MC10 and MC20 is on an axis. The midpoint between MC20 and MC30 is also on an axis), the second pair of microphones being different from the first pair of microphones and the first axis and the second axis having different directions (Kim figure 19A, θx, θy); estimating an angle of elevation of the audio source with regard to the microphone plane based on the estimated first angle of incidence and the estimated second angle of incidence (Kim figure 32A, means FB400 and figure 24B, TB400); and processing, by an audio signal processing unit, the input audio signals (Kim figures 17, 19 and ¶0214, “source directions d1, d2, and d3”) in a manner so as to generate an output audio signal with a directivity (Kim ¶0276, “apply a spatially directive filter (e.g., a beamformer and/or null beamformer) to the multichannel signal to concentrate the energies of sound components arriving from different directions into different corresponding output channels of the spatially directive filter and/or to attenuate energy of a sound component arriving from a particular direction”); wherein the estimated angle of elevation is used to control a directivity parameter of the signal processing on which the directivity of the output audio signal depends (Kim figure 49A and ¶0280-0282, 0295, applying and updating a spatially directive filter based on a (change in) angle of arrival, wherein the set of filter (i.e. beamformer) coefficients corresponds to the “directivity parameter”); however, Kim does not explicitly teach to control a directivity parameter of the signal processing on which the directivity of the output audio signal depends.
Kubota further teaches to control a directivity parameter of the audio signal processing unit on which the directivity of the output audio signal depends (Kubota figures 3-4 and ¶0073, “Then, the directional control angle θ in the elevation/depression angle direction is obtained on the basis of the deviation angle δ, the relative positional relationship between the camera 15 and the speaker set 11, and the emission characteristics of the sound wave emitted from the speaker apparatus 10”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Kubota to improve the known microphone assembly of Kim to achieve the predictable result of improved directionality of the audio output.

Allowable Subject Matter

Claim 3 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the closest prior art, either alone or in combination, fails to anticipate or render obvious the claimed limitation of “wherein the control unit is configured to switch the audio signal processing unit to an omnidirectional mode during times when the estimated angle of elevation is found to be above a threshold” in combination with all other limitations in the claim(s) as defined by the applicant.
Claims 4-6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the closest prior art, either alone or in combination, fails to anticipate or render obvious the claimed limitation of “the audio signal processing unit includes an adaptive beamformer unit using a beamformer adaptation parameter varying between a lower limit and an upper limit; the beamformer adaptation parameter determines a denoising performance of the beamformer unit and an attenuation of the audio source by the beamformer unit; and the control unit is configured to adapt the lower limit and the upper limit of the beamformer adaptation parameter based on the estimated angle of elevation” in combination with all other limitations in the claim(s) as defined by the applicant.

Claims 7-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the closest prior art, either alone or in combination, fails to anticipate or render obvious the claimed limitation of “wherein: the audio signal processing unit includes a postfilter unit using a postfilter adaptation parameter which determines the activity of the postfilter unit; the activity of the postfilter unit determines a denoising performance of the postfilter unit and an attenuation of the audio source; and the control unit is configured to change a weight of the postfilter adaptation parameter based on the estimated angle of elevation” in combination with all other limitations in the claim(s) as defined by the applicant.
Claim 13 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the closest prior art, either alone or in combination, fails to anticipate or render obvious the claimed limitation of “wherein the microphone assembly is configured to update the estimated angle of elevation used by the control unit for controlling said directivity parameter of the audio signal processing unit only during times when voice activity is detected” in combination with all other limitations in the claim(s) as defined by the applicant.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NORMAN YU, whose telephone number is (571) 270-7436. The examiner can normally be reached Mon - Fri, 11am-7pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Any response to this action should be mailed to: Commissioner of Patents and Trademarks, P.O. Box 1450, Alexandria, VA 22313-1450, or faxed to (571) 273-8300. For formal communications intended for entry and for informal or draft communications, please label “PROPOSED” or “DRAFT”. Hand-delivered responses should be brought to: Customer Service Window, Randolph Building, 401 Dulany Street, Alexandria, VA 22314.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NORMAN YU/
Primary Examiner, Art Unit 2693
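The control behavior recited in the objected claims 2-6 (reduce directivity, or switch to an omnidirectional mode, when the source is elevated; adapt the limits of the beamformer adaptation parameter based on elevation) can be sketched in a few lines. The threshold value, the limit values, and the function names below are all illustrative assumptions, not taken from the application or the cited art.

```python
def directivity_mode(elevation_deg, threshold_deg=45.0):
    """Toy rule in the spirit of claims 2-3: switch toward an
    omnidirectional mode when the estimated elevation exceeds a
    threshold. Threshold and mode names are illustrative."""
    return "omnidirectional" if elevation_deg > threshold_deg else "directional"

def clamp_adaptation(beta, elevation_deg, threshold_deg=45.0):
    """Toy version of the claims 4-6 limitation: the beamformer
    adaptation parameter beta varies between a lower and an upper
    limit, and the limits themselves are adapted based on the
    estimated angle of elevation."""
    # Illustrative limits: permit less aggressive adaptation for
    # elevated sources so they are not attenuated as strongly.
    lo, hi = (0.0, 1.0) if elevation_deg <= threshold_deg else (0.0, 0.3)
    return max(lo, min(hi, beta))

print(directivity_mode(60))        # omnidirectional
print(clamp_adaptation(0.8, 60))   # 0.3
```

The point of the sketch is the claim structure: elevation feeds the control unit, which adjusts either the processing mode or the bounds within which the beamformer is allowed to adapt.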

Prosecution Timeline

Jun 06, 2024: Application Filed
Jan 22, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604123: APPARATUS AND VEHICULAR APPARATUS INCLUDING THE SAME (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598409: IN-EAR WEARABLE DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594882: AUTOMOTIVE SOUND AMPLIFICATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593165: ACOUSTIC INPUT-OUTPUT DEVICES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581238: BINDING BAND ASSEMBLY FOR HEADSET AND HEADSET (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview (+13.5%): 99%
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 598 resolved cases by this examiner. Grant probability derived from career allow rate.
