Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Introduction
This action responds to the preliminary amendment filed on 10/12/2023. Claims 1-14, 18, 20-22, and 25 have been amended; claims 15-17, 19, 23, and 24 have been cancelled. Claims 1-14, 18, 20-22, and 25 are pending.
Claim Rejections - 35 USC § 103
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
6. Claims 1-14, 18, 20, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (US 2015/0156598) in view of Goldstein et al. (US 2021/0281945).
Consider Claim 1, Sun teaches a method for notifying a user of a mute state of a primary microphone (paragraphs [0042]-[0047] with Fig. 2: one of the "near-end microphones" corresponds to such primary microphone) arranged to capture the user's speech during a call with one or more other participants, in case the user speaks while the primary microphone of the microphone system is muted (paragraph [0001]: "This application relates generally to mute/unmute notifications, including notifications in video conferencing devices/systems with multiple microphones."), the method comprising:
1) performing a noise cancellation algorithm by processing output signals from the primary microphone and output signals from an additional microphone located to capture sound from the user's surroundings to suppress surrounding noise (paragraphs [0042]-[0047]: "In Fig. 2, the acoustic echo cancellation and noise reduction circuit receives a far-end audio signal from a far-end signal source and a near-end audio signal from one or more near-end microphones." N.B.: when more near-end microphones are used, one of these corresponds to the claimed additional microphone),
2) processing output signals from the primary microphone according to a Voice Activity Detection algorithm by means of a processor system while the primary microphone is muted,
3) determining if speech is present in accordance with an output of the Voice Activity Detection algorithm (paragraphs [0095]-[0097] with Fig. 6: "Silence and voice activity events are detected at S606."; see also paragraphs [0042]-[0047] with Fig. 2: "In Fig. 2, the acoustic echo cancellation and noise reduction circuit receives [...] a near-end audio signal from one or more near-end microphones. Based on these signals, the circuit generates an enhanced signal that is input to a silence event and voice activities detector circuit."),
4) determining if an additional condition is fulfilled (e.g., paragraphs [0095]-[0097] with Fig. 6: "At S608, which can be executed at the same time as S602-S606, acoustic sources are localized. At S610, faces and/or motion are detected."; see also paragraphs [0060]-[0065]: "If SAD = true and VAD = true, then get the sound source position from the acoustic source localization circuit, and if sound source position is outside of the region of interest, then it is an interference event."), and
5) providing a mute state notification to the user only if it is determined that speech is present and the additional condition is fulfilled (paragraphs [0097]-[0100] with Fig. 6: "Based on detected faces/motion, localized acoustic sources, and detected silence events and voice activities, a speaker or interference event (or silence event) is determined at S612. A notification of mute/unmute, based on the determination in S612, is displayed at S614."; see also paragraphs [0055]-[0060], and in particular paragraphs [0060]-[0066]: "If SAD = true and VAD = true and sound source position is inside the region of interest and the sound source position is consistent with the face position, then it is a speaker event."), but Sun does not explicitly teach generating a noise cancelled version of the output signal from the primary microphone by applying an adaptive noise cancellation algorithm involving an adaptive filter.
However, Goldstein teaches generating a noise cancelled version of the output signal from the primary microphone (see Fig. 6, microphones 111 and 123; the echo canceller of Fig. 6 reads on the claimed noise cancellation) by applying an adaptive noise cancellation algorithm involving an adaptive filter (see Fig. 6, element 610; see also Figs. 3-9 and paragraphs [0050]-[0082]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Goldstein into the teaching of Sun to provide an earpiece (100) and acoustic management module (300) for in-ear canal echo suppression control. The earpiece can include an Ambient Sound Microphone (111) to capture ambient sound, an Ear Canal Receiver (125) to deliver audio content to an ear canal, an Ear Canal Microphone (123) configured to capture internal sound, and a processor (121) to generate a voice activity level (622), suppress an echo of spoken voice in the electronic internal signal, and mix an electronic ambient signal with an electronic internal signal in a ratio dependent on the voice activity level and a background noise level to produce a mixed signal (323) that is delivered to the ear canal (131).
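For context, the "adaptive noise cancellation algorithm involving an adaptive filter" recited in claim 1 is commonly realized with a least-mean-squares (LMS) filter that estimates the noise in the primary-microphone signal from the additional-microphone signal. The sketch below is an illustrative, generic LMS implementation only; it is not taken from Sun or Goldstein, and all names and parameters are hypothetical.

```python
import numpy as np

def lms_noise_cancel(primary, reference, n_taps=16, mu=0.01):
    """Adaptive noise cancellation with an LMS adaptive filter.

    primary:   primary-microphone signal (speech + correlated noise).
    reference: additional-microphone signal (noise only).
    Returns the error signal e = primary - filtered reference, i.e. the
    noise cancelled version of the primary-microphone output.
    """
    w = np.zeros(n_taps)                      # adaptive filter weights
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]     # most recent reference samples first
        y = w @ x                             # filter output = noise estimate
        e = primary[n] - y                    # error = cleaned signal
        w += mu * e * x                       # LMS weight update
        out[n] = e
    return out
```

Because the user's speech is uncorrelated with the noise reference, the filter converges toward cancelling only the noise component, leaving the speech in the error signal.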
Consider Claims 2 and 3, Sun teaches the method wherein determining said additional condition comprises determining a likelihood that determined speech comes from a speech source in the user's surroundings, and providing the mute state notification to the user based on the determined likelihood (see Figs. 1-6 and paragraphs [0038]-[0047]); and the method comprising processing output signals from a plurality of microphones so as to allow discrimination between speech from the user and speech from the user's surroundings (see Figs. 1-3 and paragraphs [0038]-[0047]).
Consider Claims 4 and 5, Sun teaches the method further comprising processing the output signals from the plurality of microphones to provide a beamforming sensitivity pattern so as to allow discrimination between speech from the user and speech from the user's surroundings (see Figs. 1-3 and paragraphs [0038]-[0047]); and the method wherein determining said additional condition comprises determining a likelihood that the user has a physical conversation, and providing the mute state notification to the user based on the determined likelihood (see Figs. 1-6 and paragraphs [0055]-[0068]).
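The "beamforming sensitivity pattern" of claim 4 can be illustrated with a minimal delay-and-sum beamformer: signals from the microphone array are time-aligned toward the user's mouth so that the user's speech adds coherently while off-axis surrounding speech is attenuated. This is a generic textbook sketch under assumed integer-sample delays, not an implementation from the cited references.

```python
import numpy as np

def delay_and_sum(mic_signals, steering_delays):
    """Delay-and-sum beamformer.

    mic_signals:     list of equal-length 1-D microphone signals.
    steering_delays: per-microphone arrival delay (in samples) for the
                     look direction; each signal is advanced by its delay
                     and the aligned signals are averaged.
    """
    aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, steering_delays)]
    return np.mean(aligned, axis=0)
```

Steering toward the source preserves it at full amplitude; steering elsewhere lets the misaligned copies partially (or, for this test tone, fully) cancel.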
Consider Claims 6 and 7, Sun teaches the method comprising performing a first Voice Activity Detection algorithm on output signals from the primary microphone, and performing a second Voice Activity Detection algorithm on output signals from the additional microphone to determine speech from another source (see Figs. 1-6 and paragraphs [0055]-[0068]); and the method further comprising determining a timing between speech from the user and speech from another source so as to determine a likelihood that the user has a physical conversation (see Figs. 1-6 and paragraphs [0055]-[0068]).
Consider Claims 8 and 9, Sun teaches the method comprising performing a Voice Activity Detection algorithm on a signal indicative of sound from the at least one other participant in the call, so as to detect speech from the at least one other participant in the call (see Figs. 1-3 and paragraphs [0028]-[0047]); and the method comprising providing a mute state notification to the user based on a detection that the user speaks, while at the same time there is no speech detected from the at least one other participant in the call (see Figs. 1-3 and paragraphs [0028]-[0047]).
Consider Claims 10 and 11, Sun teaches the method wherein steps 1)-4) are performed by a first processor, while step 5) is performed by a second processor (see Figs. 2-6, paragraphs [0055]-[0097], and the discussion of claim 1 above); and the method according to claim 1 wherein steps 1)-4) are followed by a step of determining to mute audio from the primary microphone if it is determined that speech is present and that the additional condition is fulfilled, so as to avoid transmission of a mute state notification (see Figs. 2-6, paragraphs [0055]-[0097], and the discussion of claim 1 above).
Consider Claims 12 and 13, Sun teaches the method comprising performing a noise cancellation algorithm on the output signals from the primary microphone and from the additional microphone involving a Voice Activity Detector algorithm providing an output indicative of presence of speech, and generating a noise cancelled version of the output signal from the primary microphone based on said output indicative of presence of speech (see Figs. 2-6 and paragraphs [0055]-[0097]); and the method further comprising applying said output indicative of presence of speech to a noise estimator which estimates noise in the output signal from the primary microphone in periods without speech present (see Figs. 2-6 and paragraphs [0055]-[0097]).
Consider Claims 18 and 20, Sun teaches a device comprising a microphone system comprising a primary microphone and an additional microphone, and a processor system arranged to perform at least steps 1)-4) of the method according to claim 1 (see Figs. 2-6 and paragraphs [0055]-[0097]); and the device wherein said processor system is arranged to determine to mute the primary microphone in response to said additional condition, so as to provide an audio output from the primary microphone based on a likelihood that the user intends to speak in the call (see Figs. 1-3 and paragraphs [0027]-[0057]).
Consider Claim 25, Sun teaches the method implemented for performing one or more of: a telephone call, an on-line call, and a teleconference call (see Figs. 1-3 and paragraphs [0038]-[0047]).
Consider Claim 14, Sun does not explicitly teach the method further comprising multiplying a gain vector with a frequency domain representation, having a set of frequency bins, of the primary microphone signal, wherein the gain vector has been generated with low gain values for frequency bins not containing speech; and generating the gain vector in response to an input from the noise estimator.
However, Goldstein teaches the method further comprising multiplying a gain vector with a frequency domain representation, having a set of frequency bins, of the primary microphone signal, wherein the gain vector has been generated with low gain values for frequency bins not containing speech; and generating the gain vector in response to an input from the noise estimator (see Figs. 3-9 and paragraphs [0050]-[0082]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Goldstein into the teaching of Sun to provide an earpiece (100) and acoustic management module (300) for in-ear canal echo suppression control. The earpiece can include an Ambient Sound Microphone (111) to capture ambient sound, an Ear Canal Receiver (125) to deliver audio content to an ear canal, an Ear Canal Microphone (123) configured to capture internal sound, and a processor (121) to generate a voice activity level (622), suppress an echo of spoken voice in the electronic internal signal, and mix an electronic ambient signal with an electronic internal signal in a ratio dependent on the voice activity level and a background noise level to produce a mixed signal (323) that is delivered to the ear canal (131).
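The VAD-gated noise estimator of claims 12-13 and the per-bin gain vector of claim 14 together form a standard spectral-enhancement pattern: the noise spectrum is updated only while no speech is present, and each FFT bin of the primary-microphone frame is scaled down where the estimated noise dominates. The sketch below is a generic, hypothetical illustration of that pattern, not code from the cited references.

```python
import numpy as np

def update_noise_psd(noise_psd, frame_psd, speech_present, alpha=0.9):
    """Noise estimator: recursively smooth the per-bin noise power
    spectrum, but only in periods the VAD reports no speech."""
    if speech_present:
        return noise_psd                      # freeze estimate during speech
    return alpha * noise_psd + (1.0 - alpha) * frame_psd

def apply_spectral_gain(frame, noise_psd, floor=0.05):
    """Multiply a gain vector with the frequency-domain representation
    of the frame: bins dominated by the estimated noise (i.e. bins not
    containing speech) receive a low gain near the floor."""
    spec = np.fft.rfft(frame)
    psd = np.abs(spec) ** 2
    gain = np.maximum(1.0 - noise_psd / (psd + 1e-12), floor)
    enhanced = np.fft.irfft(gain * spec, n=len(frame))
    return gain, enhanced
```

Feeding the output of `update_noise_psd` into `apply_spectral_gain` corresponds to "generating the gain vector in response to an input from the noise estimator."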
7. Claims 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (US 2015/0156598) as modified by Goldstein et al. (US 2021/0281945) as applied to claim 1 above, and further in view of An et al. (US 2018/0225082).
Consider Claim 21, Sun does not explicitly teach the device further comprising a headset system arranged for two-way audio communication, such as in a wireless format, the headset system comprising: a headset arranged to be worn by the user, the headset comprising a microphone system comprising a mouth microphone, an additional microphone positioned separate from the mouth microphone, and at least one ear cup with a loudspeaker; a mute activation function which can be activated by the user to mute sound from the mouth microphone in a mute state during the call; and a processor system arranged to perform at least steps 1)-4) of the method according to claim 1, so as to determine to notify the user of a mute state when the user speaks while the mouth microphone is in the mute state, or so as to determine whether to mute the mouth microphone when the user speaks while the mouth microphone is in the mute state.
However, An teaches the device further comprising a headset system arranged for two-way audio communication (see Fig. 1), such as in a wireless format, the headset system comprising: a headset (see Fig. 1) arranged to be worn by the user, the headset comprising a microphone system comprising a mouth microphone (see Fig. 1, element 120), an additional microphone (see Fig. 1, element FFA) positioned separate from the mouth microphone (120), and at least one ear cup with a loudspeaker (see Fig. 1, element DA, and paragraphs [0021]-[0027]); a mute activation function (see Fig. 2) which can be activated by the user to mute sound from the mouth microphone (120) in a mute state during the call; and a processor system (see Fig. 1, element 130) arranged to perform at least steps 1)-4) of the method according to claim 1, so as to determine to notify the user of a mute state (see Figs. 1-5 and paragraphs [0025]-[0032]) when the user speaks while the mouth microphone is in the mute state, or so as to determine whether to mute the mouth microphone when the user speaks while the mouth microphone is in the mute state (see Figs. 1-5 and paragraphs [0055]-[0064]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of An into the teaching of Sun, because headsets that include automatic noise cancellation (ANC) dramatically reduce perceived background noise and improve the user listening experience. Unfortunately, the voice microphones in these devices often capture ambient noise that the headsets output during phone calls or other communication sessions to other users. In response, many headsets and communication devices provide manual muting circuitry, but users frequently forget to turn the muting on and/or off, creating further problems as they communicate. To address this, An provides a headset that detects the absence or presence of user speech, automatically muting and unmuting the voice microphone without user intervention. Some embodiments leverage relationships between feedback and feedforward signals in ANC circuitry to detect user speech, avoiding the addition of extra hardware to the headset. Other embodiments also leverage the speech detection function to activate and deactivate keyword detectors and/or sidetone circuits, thus extending battery life.
Consider Claim 22, Sun as modified by An teaches the device wherein the processor system is arranged to determine whether it is likely that the user intends to speak, and to transmit audio accordingly from the mouth microphone based on a likelihood that the user intends to speak, so as to avoid any mute state notification being sent by an entity facilitating the call (in An, see Figs. 1-5 and paragraphs [0025]-[0032]).
Response to Arguments
8. Applicant's arguments with respect to claims 1-14, 18, 20-22, and 25 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant argued that "[t]here must be some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness." KSR International Co. v. Teleflex Inc., 550 U.S. at 418, 82 USPQ2d at 1396 (2007). Here, applicant contends, features of the claims are missing from the references (see the remarks, page 7, third paragraph).
In response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Sun et al. (US 2015/0156598) and Goldstein et al. (US 2021/0281945) both teach audio apparatus. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Goldstein into the teaching of Sun to provide an earpiece (100) and acoustic management module (300) for in-ear canal echo suppression control, as set forth in the rejection of claim 1 above.
Conclusion
9. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Goldstein et al. (US 2014/0341388) is cited to show other art related to microphone mute notification with voice activity detection.
11. Any response to this action should be mailed to:
Mail Stop ____ (explanation, e.g., Amendment or After-final, etc.)
Commissioner for Patents
P.O. Box 1450
Alexandria, VA 22313-1450
Facsimile responses should be faxed to:
(571) 273-8300
Hand-delivered responses should be brought to:
Customer Service Window
Randolph Building
401 Dulany Street
Alexandria, VA 22314
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lun-See Lao, whose telephone number is (571) 272-7501. The examiner can normally be reached Monday through Friday from 8:00 to 5:30.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc M. Nguyen (SPE), can be reached at (571) 272-7503.
Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to Technology Center 2600, whose telephone number is (571) 272-2600.
/LUN-SEE LAO/
Primary Examiner, Art Unit 2691
United States Patent and Trademark Office
571-272-7501
Date: 02-23-2026