DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the Applicant’s communication filed on 11/20/2023. Claims 1–20 are pending in this application.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 13 and 16–18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20090196433 (Tanaka).
Regarding claim 13, Tanaka teaches “A method comprising:
processing, at an audio processor (paragraph 0015 and FIG 1: a howling suppression apparatus 20), audio received via a receiver (paragraph 0015 and FIG 1: a sound collection device 12) to generate processed audio (paragraph 0016: The sound collection device 12 generates an acoustic signal X1(z) according to ambient acoustics and supplies the acoustic signal to the howling suppression apparatus 20. Paragraph 0021: The calculation part 221 generates an acoustic signal X2(z) representing “processed audio”. Alternatively, spectrum subtraction part 34 also generates “processed audio”);
searching, via a microphonic detection engine (paragraph 0027: implemented as the frequency identification part 421, which identifies a frequency (howling frequency) F at which a howling is caused), for microphonic noise in the processed audio according to one or more predetermined microphonic parameters (paragraph 0027: The frequency identification part 421 identifies a frequency (howling frequency) F at which a howling is caused. (As the Applicant’s specification in the background section states, microphonic noise generally manifests as a howling sound output by the speaker.) For example, means for identifying the howling frequency F by detecting (i.e., “searching”) the peak of a frequency spectrum of the acoustic signal X2(z) or means for identifying the howling frequency F (the frequency being “one or more predetermined microphonic parameters”) from intensity of each component in which the acoustic signal X2(z) is separated into plural frequency bands is suitable as the frequency identification part 421.);
when the microphonic noise is detected: outputting, via the microphonic detection engine, a microphonic indicator to a microphonic compensation engine (implicit, shown as an arrow F pointing from the frequency identification part 421 towards the filter 423 in FIG 1 (“a microphonic compensation engine”) that causes the filter to suppress this component);
receiving, at the microphonic compensation engine, the microphonic indicator (shown as the arrow F entering the filter 423 in FIG 1); and
responsively compensating, via the microphonic compensation engine, for the microphonic noise in the audio received via the receiver (paragraph 0028: a notch filter for variably controlling frequency characteristics so as to attenuate a narrow band component centering on the howling frequency F among the acoustic signal X3(z) is suitable as the filter 423. In other words, the filter 423 removes the frequencies indicated to it by the frequency identification part 421 through the communication line shown as the arrow F between the frequency identification part 421 and the filter 423), prior to processing of the audio by the audio processor (further processing is disclosed in paragraph 0029 and includes amplification and digital to analog conversion).”
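By way of a non-limiting illustration of the operations mapped above (peak detection of the howling frequency F followed by notch-filter suppression of a narrow band centered on F), a short Python sketch is provided. The function names, the biquad notch design, and the test signal are illustrative assumptions and do not appear in Tanaka or the record:

```python
import numpy as np

def identify_howling_frequency(x, fs):
    """Identify a candidate howling frequency F as the peak of the
    magnitude spectrum of the signal x (cf. Tanaka, paragraph 0027)."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def notch_coefficients(f0, fs, r=0.98):
    """Biquad notch centered on f0: zeros on the unit circle at +/- f0,
    poles just inside at radius r (the notch narrows as r approaches 1)."""
    w0 = 2.0 * np.pi * f0 / fs
    b = np.array([1.0, -2.0 * np.cos(w0), 1.0])
    a = np.array([1.0, -2.0 * r * np.cos(w0), r * r])
    return b, a

def apply_filter(b, a, x):
    """Direct-form IIR filtering: y[n] = sum(b*x[n-k]) - sum(a[1:]*y[n-k])."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return y

fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
# broadband content plus a strong 1 kHz howl
x = 0.1 * rng.standard_normal(fs) + np.sin(2.0 * np.pi * 1000.0 * t)
f_howl = identify_howling_frequency(x, fs)   # detect the howling frequency F
b, a = notch_coefficients(f_howl, fs)
y = apply_filter(b, a, x)                    # suppress the band around F
```

With r = 0.98 the notch attenuates only a narrow band centered on F, consistent with the notch-filter behavior Tanaka attributes to the filter 423.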
Regarding claim 16, Tanaka teaches “further comprising: searching for microphonic noise in the processed audio according to the one or more predetermined microphonic parameters that are at least partially range based (paragraph 0027: means for identifying the howling frequency F by detecting the peak of a frequency spectrum of the acoustic signal X2(z) or means for identifying the howling frequency F from intensity of each component in which the acoustic signal X2(z) is separated into plural frequency bands is suitable as the frequency identification part 421. The disclosed plural frequency bands mean that the howling frequency is concentrated within the “range” of these bands);
determining one or more of: a level of the microphonic noise (detecting the peak of a frequency spectrum of the acoustic signal X2(z), which represents the “level”; the intensity of each component also represents the “level”); and a range, of a plurality of ranges, in which the level of the microphonic noise is located (intensity of each component in which the acoustic signal X2(z) is separated into plural frequency bands); and generating the microphonic indicator to indicate one or more of the level and the range of the microphonic noise (paragraph 0028: The filter 423 generates an acoustic signal X4(z) by suppressing a component of a frequency band including the howling frequency F identified by the frequency identification part 421 among the acoustic signal X3(z). For example, a notch filter for variably controlling frequency characteristics so as to attenuate a narrow band component centering on the howling frequency F among the acoustic signal X3(z) is suitable as the filter 423. Since the filter 423 knows which frequencies to suppress and by how much, it means that the frequencies and the intensity have been indicated from the frequency identification part 421 to the filter 423).”
Regarding claim 17, Tanaka teaches “wherein the microphonic indicator is indicative of a level of the microphonic noise, and the method further comprises: compensating for the microphonic noise in the audio received via the receiver according to the level (paragraph 0028: The filter 423 generates an acoustic signal X4(z) by suppressing a component of a frequency band including the howling frequency F identified by the frequency identification part 421 among the acoustic signal X3(z). For example, a notch filter for variably controlling frequency characteristics so as to attenuate a narrow band component centering on the howling frequency F among the acoustic signal X3(z) is suitable as the filter 423. Since the filter 423 knows which frequencies to suppress and by how much, it means that the frequencies and the intensity (“indicative of a level of the microphonic noise”) have been indicated from the frequency identification part 421 to the filter 423).”
Regarding claim 18, Tanaka teaches “wherein the microphonic indicator is indicative of a range, of a plurality of ranges, in which a level of the microphonic noise is located, and the method further comprises: compensating for the microphonic noise in the audio received via the receiver according to the range (paragraph 0028: The filter 423 generates an acoustic signal X4(z) by suppressing a component of a frequency band including the howling frequency F identified by the frequency identification part 421 among the acoustic signal X3(z). For example, a notch filter for variably controlling frequency characteristics so as to attenuate a narrow band component (“according to the range”) centering on the howling frequency F among the acoustic signal X3(z) is suitable as the filter 423. Since the filter 423 knows which frequencies to suppress and by how much, it means that the frequencies (“a range, of a plurality of ranges”) and the intensity have been indicated from the frequency identification part 421 to the filter 423).”
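As a non-limiting illustration of the “level” and “range” determinations mapped in the rejections of claims 16–18 above (an intensity per frequency band, with the indicator reporting both the level and the band in which it is located), the following Python sketch may be considered. The function names, the number of bands, and the test signal are illustrative assumptions and do not appear in Tanaka:

```python
import numpy as np

def band_intensities(x, n_bands=8):
    """Separate the spectrum of x into n_bands equal-width frequency
    bands and return the intensity (mean squared magnitude) of each."""
    power = np.abs(np.fft.rfft(x)) ** 2
    return np.array([band.mean() for band in np.array_split(power, n_bands)])

def microphonic_indicator(x, n_bands=8):
    """Return (level, band_index): the intensity of the loudest band
    (the "level") and the band (the "range") in which it is located."""
    intensities = band_intensities(x, n_bands)
    band = int(np.argmax(intensities))
    return float(intensities[band]), band

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2.0 * np.pi * 1100.0 * t)      # a howl at 1.1 kHz
level, band = microphonic_indicator(x)    # the 0-4 kHz spectrum in 8 bands
```

Here a 1.1 kHz howl lands in the third of eight 500 Hz-wide bands, so the indicator reports both its level and that range.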
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4–7, 11–13 and 16–18 are rejected under 35 U.S.C. 103 as being unpatentable over US 20130136276 (Chacko) in view of US 20090196433 (Tanaka).
Regarding claims 1 and 13, Chacko teaches “A device (shown in FIG 2 with corresponding description) comprising:
a speaker (speaker 220 in FIG 2); a receiver (paragraph 0022: a receiver (RX) 202 that can receive a communication signal); an audio processor (paragraph 0025: comprising baseband processor 214 cooperatively connected to an audio processor 218) configured to: process audio received via the receiver; and output processed audio to the speaker (paragraph 0025: The audio processor 218 can be substantially equivalent to the audio processor 110 of FIG. 1. Paragraph 0020: An audio processor 110 facilitates the reception and playing acoustic signals. The audio processor receives an audio signal from the baseband processor 104 in digital form. The audio processor converts the digital audio signal to an analog signal and applies amplification to the analog audio signal and applies the amplified audio analog signal to an audio transducer, such as a low audio transducer 122 or a high audio transducer 124.)…”
While Chacko teaches suppressing microphonic feedback in radio receivers (see abstract, paragraphs 0001, 0017, 0018, 0020 and 0025), Chacko does not disclose such details of the structure as “a microphonic detection engine; and a microphonic compensation engine, the microphonic detection engine configured to: search for microphonic noise in the processed audio according to one or more predetermined microphonic parameters; and when the microphonic noise is detected: output a microphonic indicator to the microphonic compensation engine to cause the microphonic compensation engine to compensate for the microphonic noise in the audio; the microphonic compensation engine configured to: receive the microphonic indicator; and responsively compensate for the microphonic noise in the audio received via the receiver, prior to processing of the audio by the audio processor.”
In paragraph 0005, Chacko states that as devices become smaller, the microphonics problem can continue to increase. Accordingly, a smaller device can go unstable at high volumes which causes a howling effect in the audio signal as a result of receiver audio regeneration.
In similar art, Tanaka teaches a howling suppression apparatus that suppresses a howling caused in an acoustic system (see at least abstract).
In particular, Tanaka teaches in FIG 1 with corresponding description “a microphonic detection engine (paragraph 0027: implemented as the frequency identification part 421, which identifies a frequency (howling frequency) F at which a howling is caused); and a microphonic compensation engine (paragraph 0028: implemented as the filter 423, which generates an acoustic signal X4(z) by suppressing a component of a frequency band including the howling frequency F identified by the frequency identification part 421),
the microphonic detection engine configured to:
search for microphonic noise in the processed audio according to one or more predetermined microphonic parameters (paragraph 0027: The frequency identification part 421 identifies a frequency (howling frequency) F at which a howling is caused in the audio processed by at least parts 221 and 34. For example, means for identifying the howling frequency F by detecting (i.e. “searching”) the peak of a frequency spectrum of the acoustic signal X2(z) or means for identifying the howling frequency F (the frequency being “one or more predetermined microphonic parameters”) from intensity of each component in which the acoustic signal X2(z) is separated into plural frequency bands is suitable as the frequency identification part 421.); and
when the microphonic noise is detected: output a microphonic indicator to the microphonic compensation engine to cause the microphonic compensation engine to compensate for the microphonic noise in the audio (implicit, shown as an arrow F pointing from the frequency identification part 421 towards the filter 423 in FIG 1 that causes the filter to suppress this component);
the microphonic compensation engine configured to:
receive the microphonic indicator (shown as the arrow F entering the filter 423); and responsively compensate for the microphonic noise in the audio received via the receiver (paragraph 0028: a notch filter for variably controlling frequency characteristics so as to attenuate a narrow band component centering on the howling frequency F among the acoustic signal X3(z) is suitable as the filter 423. In other words, the filter 423 removes the frequencies indicated to it by the frequency identification part 421 through the communication line shown as the arrow F between the frequency identification part 421 and the filter 423), prior to processing of the audio by the audio processor (further processing is disclosed in paragraph 0029 and includes amplification and digital to analog conversion).”
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date of the application to utilize the device and method of howling suppression disclosed by Tanaka in the device of Chacko. Doing so would have been merely the substitution of one type of processing circuitry for another with predictable results, and the Court stated in KSR, "when a patent claims a structure already known in the prior art that is altered by the mere substitution of one element for another known in the field, the combination must do more than yield a predictable result." KSR Int'l Co. v. Teleflex Inc., 127 S.Ct. 1727, 1740 (2007) (citing United States v. Adams, 383 U.S. 39, 50-51 (1966)). Doing so would also have allowed suppression of the component of the frequency band including the frequency at which the howling is actually caused among the acoustic signal, so that the howling can be suppressed effectively (see Tanaka, paragraph 0009).
Regarding claims 4 and 16, Chacko in combination with Tanaka teaches “wherein the microphonic detection engine is further configured to: search for microphonic noise in the processed audio according to the one or more predetermined microphonic parameters that are at least partially range based (Tanaka, paragraph 0027: means for identifying the howling frequency F by detecting the peak of a frequency spectrum of the acoustic signal X2(z) (“the processed audio”) or means for identifying the howling frequency F from intensity of each component in which the acoustic signal X2(z) is separated into plural frequency bands is suitable as the frequency identification part 421. The disclosed plural frequency bands mean that the howling frequency is concentrated within the “range” of these bands); determine one or more of: a level of the microphonic noise (detecting the peak of a frequency spectrum of the acoustic signal X2(z), which represents the “level”; the intensity of each component also represents the “level”); and a range, of a plurality of ranges, in which the level of the microphonic noise is located (intensity of each component in which the acoustic signal X2(z) is separated into plural frequency bands); and generate the microphonic indicator to indicate one or more of the level and the range of the microphonic noise (paragraph 0028: The filter 423 generates an acoustic signal X4(z) by suppressing a component of a frequency band including the howling frequency F identified by the frequency identification part 421 among the acoustic signal X3(z). For example, a notch filter for variably controlling frequency characteristics so as to attenuate a narrow band component centering on the howling frequency F among the acoustic signal X3(z) is suitable as the filter 423. Since the filter 423 knows which frequencies to suppress and by how much, it means that the frequencies and the intensity have been indicated from the frequency identification part 421 to the filter 423).”
Regarding claims 5 and 17, Chacko in combination with Tanaka teaches “wherein the microphonic indicator is indicative of a level of the microphonic noise, and the microphonic compensation engine is further configured to: compensate for the microphonic noise in the audio received via the receiver according to the level (Tanaka, paragraph 0028: The filter 423 generates an acoustic signal X4(z) by suppressing a component of a frequency band including the howling frequency F identified by the frequency identification part 421 among the acoustic signal X3(z). For example, a notch filter for variably controlling frequency characteristics so as to attenuate a narrow band component centering on the howling frequency F among the acoustic signal X3(z) is suitable as the filter 423. Since the filter 423 knows which frequencies to suppress and by how much, it means that the frequencies and the intensity have been indicated from the frequency identification part 421 to the filter 423).”
Regarding claims 6 and 18, Chacko in combination with Tanaka teaches “wherein the microphonic indicator is indicative of a range, of a plurality of ranges, in which a level of the microphonic noise is located, and the microphonic compensation engine is further configured to: compensate for the microphonic noise in the audio received via the receiver according to the range (Tanaka, paragraph 0028: The filter 423 generates an acoustic signal X4(z) by suppressing a component of a frequency band including the howling frequency F identified by the frequency identification part 421 among the acoustic signal X3(z). For example, a notch filter for variably controlling frequency characteristics so as to attenuate a narrow band component (“according to the range”) centering on the howling frequency F among the acoustic signal X3(z) is suitable as the filter 423. Since the filter 423 knows which frequencies to suppress and by how much, it means that the frequencies (“a range, of a plurality of ranges”) and the intensity have been indicated from the frequency identification part 421 to the filter 423).”
Regarding claim 7, Chacko in combination with Tanaka teaches “wherein the microphonic compensation engine is further configured to compensate for the microphonic noise in the audio at least partially based on the one or more predetermined microphonic parameters (Tanaka, paragraph 0028: The filter 423 generates an acoustic signal X4(z) by suppressing a component of a frequency band including the howling frequency F identified by the frequency identification part 421 among the acoustic signal X3(z). For example, a notch filter for variably controlling frequency characteristics so as to attenuate a narrow band component centering on the howling frequency F among the acoustic signal X3(z) is suitable as the filter 423. Specific howling frequency F represents “one or more predetermined microphonic parameters” and suppression is thus based on that).”
Regarding claim 11, Chacko in combination with Tanaka teaches “further comprising a digital signal processor (DSP) (Chacko, paragraph 0034: one or more generic or specialized processors (or "processing devices") such as microprocessors, digital signal processors; Tanaka, paragraph 0018: the howling suppression apparatus 20 is a digital signal processor (DSP)) and an applications processor (Chacko, paragraph 0019: The communication device 100 can further include a main or application processor 106), wherein the audio processor is implemented at the DSP (Tanaka, amplifier 50 in FIG 1 shown as part of the apparatus 20. Paragraph 0029: the acoustic signal Y(z) outputted by the amplifier 50 is converted into an analog signal through a D/A converter, which would also be part of the digital processing), and wherein the microphonic detection engine and the microphonic compensation engine are implemented at one or more of the DSP (Tanaka, paragraph 0018: the howling suppression apparatus 20 (which includes both “the microphonic detection engine and the microphonic compensation engine” as explained in the rejection of claim 1 above) is a digital signal processor (DSP)) and the applications processor.”
Regarding claim 12, Chacko teaches “further comprising one or more of a land mobile radio (LMR), a digital mobile radio (DMR), a two-way radio (Chacko, paragraph 0003: a two-way handheld radio unit), and first responder radio.”
Claims 2, 8–10, 14, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20130136276 (Chacko) in view of US 20090196433 (Tanaka) as applied to claims 1 and 13 above, and further in view of US 20170201836 (Hui).
Regarding claims 2 and 14, Chacko teaches “further comprising a transmitter (Chacko, paragraph 0018: The RF section 102 receives baseband signals from a baseband processor 104, and transmits them at radio frequencies.), wherein the microphonic detection engine is further configured to: continue to search for the microphonic noise in compensated processed audio (In the system of Tanaka, the detection and suppression of the howling component F is performed continuously, which corresponds to the limitation “continue to search for the microphonic noise in compensated processed audio” since the audio is processed first by the calculation part 221 and second by the spectrum subtraction part 34. The audio is also at least partially “compensated” since, as stated in paragraph 0026, a persistent component by which a howling is caused among the feedback sound is suppressed by action of the spectrum subtraction part 34 and the calculation part 221.)…”
Tanaka and Chacko do not disclose “when the microphonic noise in the compensated processed audio continues to be detected or is no longer detected, provide, via the transmitter, to an external communication device, a respective notification thereof.”
Hui also teaches mitigation of acoustic feedback by leveraging a dynamic range controller and a howling detector, and is thus in the same field of art.
Additionally, Hui teaches “wherein the microphonic detection engine (paragraph 0046: The howling detector 208) is further configured to: continue to search for the microphonic noise in compensated processed audio (paragraph 0046: The howling detector 208 can be employed after the DRC 206, and the DRC 206 can be employed after the amplifier 204. When the microphone 202 receives a sound signal, the amplifier 204 can amplify the sound signal. However, the DRC 206 can constrain the output signal of the amplifier 204 to a certain amplitude to restrict the howling sound to a certain decibel level to protect a user's hearing from damage. In other words, it is the “compensated processed audio” which is fed into the howling detector 208, which means that the howling detector 208 searches for the “microphonic noise in compensated processed audio”); and, when the microphonic noise in the compensated processed audio continues to be detected or is no longer detected, provide, via the transmitter, to an external communication device, a respective notification thereof (paragraph 0047: the howling detector 208 can provide a warning signal via the status indicator 212 to inform the user that the howling protection mode has been activated. For example, the status indicator 212 could send a message to a mobile device (“an external communication device”), thereby alerting the user.).”
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date of the application to utilize the amplifier volume control disclosed by Hui to reduce the amount of howling, as well as Hui’s howling indication transmitted to an external device, in the combined system of Chacko and Tanaka by applying Hui’s volume control, based on the amount of howling, to Tanaka’s amplifier 50. Doing so would have allowed implementation of an additional method of howling control, thus increasing the effectiveness of the system.
Regarding claims 8 and 19, Chacko in combination with Tanaka teaches “wherein the microphonic compensation engine is further configured to, when the microphonic detection engine continues to detect microphonic noise in the processed audio after the microphonic compensation engine compensates for the microphonic noise in the audio…” “…continue to compensate for the microphonic noise in the audio received via the receiver, prior to processing of the audio by the audio processor (please see explanation of operation of Tanaka’s system in the rejection of claim 1 above, the explanation being incorporated herein by reference. In the system of Tanaka, the detection and suppression of the howling component F is performed continuously, which corresponds to the limitations “when the microphonic detection engine continues to detect microphonic noise in the processed audio after the microphonic compensation engine compensates for the microphonic noise in the audio” and “continue to compensate for the microphonic noise in the audio received via the receiver”).”
Tanaka and Chacko do not disclose “reduce volume of sound emitted by the speaker”.
Hui also teaches mitigation of acoustic feedback by leveraging a dynamic range controller and a howling detector, and is thus in the same field of art.
Additionally, Hui teaches “reduce volume of sound emitted by the speaker (paragraph 0046: the DRC (dynamic range controller) 206 can constrain the output signal of the amplifier 204 to a certain amplitude (“reduces volume of sound emitted by the speaker”) to restrict the howling sound to a certain decibel level to protect a user's hearing from damage. Paragraph 0040: a speaker of the apparatus can output a second acoustic signal in accordance with the constrained amplitude.)”.
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date of the application to utilize the amplifier volume control disclosed by Hui to reduce the amount of howling in the combined system of Chacko and Tanaka by applying Hui’s volume control, based on the amount of howling, to Tanaka’s amplifier 50. Doing so would have allowed implementation of an additional method of howling control, thus increasing the effectiveness of the system.
Regarding claims 9 and 20, Chacko in combination with Tanaka teaches “wherein the microphonic detection engine and the microphonic compensation engine continue to respectively detect and compensate for the microphonic noise in a feedback loop (please see explanation of operation of Tanaka’s system in the rejection of claim 1 above, the explanation being incorporated herein by reference.)…”
Tanaka and Chacko do not disclose “with each instance of the feedback loop where the microphonic noise continues to be detected, the microphonic compensation engine reduces volume of sound emitted by the speaker until the microphonic noise is no longer detected.”
Hui also teaches mitigation of acoustic feedback by leveraging a dynamic range controller and a howling detector, and is thus in the same field of art.
Additionally, Hui teaches “with each instance of the feedback loop where the microphonic noise continues to be detected, the microphonic compensation engine reduces volume of sound emitted by the speaker until the microphonic noise is no longer detected (paragraph 0046: the DRC (dynamic range controller) 206 can constrain the output signal of the amplifier 204 to a certain amplitude (“reduces volume of sound emitted by the speaker”) to restrict the howling sound to a certain decibel level to protect a user's hearing from damage. When the howling sound occurs and is detected by the howling detector 208, the howling detector 208 can mute the speaker 210 by setting an amplification gain to zero. It is implicit that when the gain is zero, there is no output from the amplifier, which would mean that “the microphonic noise is no longer detected”).”
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date of the application to utilize the amplifier volume control disclosed by Hui to reduce the amount of howling in the combined system of Chacko and Tanaka by applying Hui’s volume control, based on the amount of howling, to Tanaka’s amplifier 50. Doing so would have allowed implementation of an additional method of howling control, thus increasing the effectiveness of the system.
Regarding claims 10 and 20, Chacko in combination with Tanaka teaches “wherein the microphonic detection engine and the microphonic compensation engine continue to respectively detect and compensate for the microphonic noise in a feedback loop (please see explanation of operation of Tanaka’s system in the rejection of claim 1 above, the explanation being incorporated herein by reference.)…”
Tanaka and Chacko do not disclose “with each instance of the feedback loop where the microphonic noise continues to be detected, the microphonic compensation engine reduces volume of sound emitted by the speaker until a predetermined minimum volume is reached.”
Hui also teaches mitigation of acoustic feedback by leveraging a dynamic range controller and a howling detector, and is thus in the same field of art.
Additionally, Hui teaches “with each instance of the feedback loop where the microphonic noise continues to be detected, the microphonic compensation engine reduces volume of sound emitted by the speaker until a predetermined minimum volume is reached (paragraph 0040: a speaker of the apparatus can output a second acoustic signal in accordance with the constrained amplitude. Paragraph 0046: the DRC (dynamic range controller) 206 can constrain the output signal of the amplifier 204 to a certain amplitude (“reduces volume of sound emitted by the speaker” to “a predetermined minimum volume”) to restrict the howling sound to a certain decibel level to protect a user's hearing from damage. When the howling sound occurs and is detected by the howling detector 208, the howling detector 208 can mute the speaker 210 by setting an amplification gain to zero. It is implicit that when the gain is zero, there is no output from the amplifier. In this case, “a predetermined minimum volume” equals zero).”
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date of the application to utilize the amplifier volume control disclosed by Hui to reduce the amount of howling in the combined system of Chacko and Tanaka by applying Hui’s volume control, based on the amount of howling, to Tanaka’s amplifier 50. Doing so would have allowed implementation of an additional method of howling control, thus increasing the effectiveness of the system.
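The iterative volume reduction mapped in the rejections of claims 8–10, 19 and 20 above (reduce the speaker volume on each instance of the feedback loop until the howl is no longer detected or a predetermined minimum volume is reached) can be pictured with the following non-limiting Python sketch. The function, the step size, and the toy detector are illustrative assumptions and do not appear in Hui:

```python
def reduce_volume_until_clear(detect_howl, gain=1.0, step=0.8, min_gain=0.1):
    """Feedback loop: while howling continues to be detected, reduce the
    speaker volume (gain) on each iteration, stopping when the howl is
    no longer detected or a predetermined minimum volume is reached.
    Setting min_gain=0.0 corresponds to Hui muting the speaker."""
    while detect_howl(gain) and gain > min_gain:
        gain = max(gain * step, min_gain)
    return gain

# toy detector: the acoustic loop howls while the gain exceeds 0.5
cleared_gain = reduce_volume_until_clear(lambda g: g > 0.5)
# persistent howl: the loop bottoms out at the predetermined minimum
floor_gain = reduce_volume_until_clear(lambda g: True)
```

The two calls exercise both claimed outcomes: the loop exits when the howl disappears, or, if the howl persists, when the predetermined minimum volume is reached.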
Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over US 20130136276 (Chacko) in view of US 20090196433 (Tanaka) as applied to claims 1 and 13 above, and further in view of JP 2006166375 (Yamakawa) (references are given according to English translation).
Regarding claims 3 and 15, Chacko in combination with Tanaka does not teach “wherein the one or more predetermined microphonic parameters are indicative of one or more predetermined microphonic audio data sets combined with one or more clean audio samples.”
Yamakawa also teaches a howling suppression system. Particularly, Yamakawa teaches “search for microphonic noise in the processed audio according to one or more predetermined microphonic parameters (as in claim 1), wherein the one or more predetermined microphonic parameters are indicative of one or more predetermined microphonic audio data sets (paragraphs 0031 and 0034: The waveform analysis unit 8 performs FFT on the signal y(k) input from the microphone 1 to obtain a signal Y(f). Furthermore, the peak frequency is detected from the FFTed signal Y(f). Paragraph 0035: When a peak frequency is detected and continues for a predetermined time or longer, sine waveform data corresponding to the detected peak frequency is read out from the waveform storage unit 9, which is a storage device (s5). Thereafter, the cross-correlation function is calculated (s6). Therefore, if the value of the cross-correlation function with a sine waveform is large, it can be determined that the signal is due to howling, and if the value of the cross-correlation function is small, it can be determined that the signal is not due to howling. In other words, the waveform storage unit 9 contains the “one or more predetermined microphonic audio data sets” indicated by the claimed parameters, with which the actual waveform from the microphone is compared).”
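The detection scheme attributed to Yamakawa above (peak-frequency identification followed by cross-correlation against a stored sine waveform) can be sketched as below. This is a hedged illustration, not Yamakawa's actual implementation; the threshold value, function names, and NumPy-based structure are assumptions made for clarity.

```python
import numpy as np

def is_howling(y, fs, threshold=0.9):
    """Illustrative detector: a tonal (howling) signal correlates strongly
    with a sine at its own peak frequency; ordinary broadband sound does not."""
    # FFT the microphone signal and locate the peak frequency (cf. s4/s5).
    spectrum = np.abs(np.fft.rfft(y))
    peak_hz = np.fft.rfftfreq(len(y), 1.0 / fs)[np.argmax(spectrum)]
    # "Read out" a sine waveform at the peak frequency (standing in for the
    # stored sine waveform data of the waveform storage unit 9).
    t = np.arange(len(y)) / fs
    ref = np.sin(2.0 * np.pi * peak_hz * t)
    # Normalized cross-correlation over all lags (cf. s6): large -> howling.
    y_n = y / (np.linalg.norm(y) + 1e-12)
    ref_n = ref / (np.linalg.norm(ref) + 1e-12)
    corr = np.abs(np.correlate(y_n, ref_n, mode="same")).max()
    return bool(corr >= threshold)
```

Under these assumptions a pure 1 kHz tone sampled at 8 kHz is flagged as howling, while broadband noise is not, mirroring Yamakawa's large/small cross-correlation distinction.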
Therefore, since Tanaka does not limit the method of howling determination in the frequency identification part 421 to any particular one, it would have been obvious to a person of ordinary skill in the art at the effective filing date of the application to utilize the method and system disclosed by Yamakawa, based on comparison with a predetermined microphonic audio data set, in the system of Tanaka, simply as a design choice with predictable results, the results being identification of a howling condition based on the value of the cross-correlation function. Doing so would have been an obvious matter since, according to the Supreme Court, “[t]he combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.” KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 416 (2007).
With respect to the requirement that the predetermined microphonic audio data sets are “combined with one or more clean audio samples”, the Applicant’s own specification in paragraph 0043 states that clean audio samples represent audio output by audio processor 206 when no audio signal is being received at the receiver. In other words, “clean audio samples” represent complete silence.
It would have been obvious to a person of ordinary skill in the art at the effective filing date of the application that complete silence would likely result in audio samples at or close to the zero level. It would therefore make no meaningful difference (and carry no patentable significance) whether such samples are combined with the audio samples of howling, since the howling sound represents the result of positive feedback and the amplitude of its audio samples is incomparably larger than that of audio samples representing complete silence.
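The relative-amplitude point above can be illustrated numerically. The values below are assumed for illustration only and come from no cited reference: near-zero "silence" samples combined with near-full-scale howling samples leave the combined waveform essentially unchanged.

```python
# Toy illustration with assumed values: howling samples (a positive-feedback
# oscillation near full scale) vs. "clean" samples at an idle noise floor.
howling = [0.9, -0.9, 0.9, -0.9]
silence = [1e-6, -2e-6, 0.0, 1e-6]
combined = [h + s for h, s in zip(howling, silence)]
# The largest change introduced by the combination is on the order of 1e-6,
# i.e., about a millionth of the howling amplitude.
max_change = max(abs(c - h) for c, h in zip(combined, howling))
```

In this sketch the combination perturbs the howling waveform by less than one part in a hundred thousand, consistent with the reasoning that combining with silence samples makes no meaningful difference.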
Claims 14, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20090196433 (Tanaka) as applied to claim 13 in section 5 above, and further in view of US 20170201836 (Hui).
Regarding claims 14, 19 and 20, these claims are rejected in view of Hui for the same reasons as explained in the rejection of the same claims in section 9 above, that explanation being incorporated herein by reference.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over US 20090196433 (Tanaka) as applied to claim 13 in section 5 above, and further in view of JP 2006166375 (Yamakawa).
Regarding claim 15, this claim is rejected in view of Yamakawa for the same reasons as explained in the rejection of the same claim in section 10 above, that explanation being incorporated herein by reference.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GENNADIY TSVEY whose telephone number is (571)270-3198. The examiner can normally be reached Mon-Fri 9-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wesley Kim can be reached at 571-272-7867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GENNADIY TSVEY/ Primary Examiner, Art Unit 2648