Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 15, 16, 27, and 28 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yamada et al. (US 2013/0144622 A1), hereinafter "Yamada".
As to Claim 15, Yamada teaches a method for directional signal processing for a hearing system having at least a first hearing instrument (hearing aid 100, Figure 2B, capable of determining speech in a noisy environment using directional microphone array 120; [0035]-[0038] and Figure 6), the method comprising:
generating a first input signal from an ambient sound by an electroacoustic first input transducer of the first hearing instrument, and generating a second input signal by an electroacoustic second input transducer of the hearing system ([0043] teaches signals picked up by the microphones of microphone array 120; Figure 6, S1100, teaches that microphone array 120 receives acoustic signals within one frame; S1300 of Figure 6 teaches determining the presence of other speech, where other speech is speech from persons other than user 200, [0048]-[0049], [0070]; and S1200 determines the presence of self-speech, [0068]-[0069]);
determining a number of interlocutors of a wearer of the hearing system based on the first input signal and based on the second input signal (direction-specific speech detector 430 detects, from a sound signal, uttered speech from the speakers, and extracts a front, a left, and a right speech from the four-channel A/D-converted digital acoustic signals from microphone array 120. Specifically, direction-specific speech detector 430 applies a known directivity control technique to the four-channel digital acoustic signals to determine the directivity for each of the front, the left, and the right of user 200, and then detects a front, a left, and a right speech. Direction-specific speech detector 430 determines the presence or absence of speech at short time intervals using the power information on the extracted direction-specific speeches, and determines the presence or absence of other speech from each direction for every frame on the basis of the results of that determination. Direction-specific speech detector 430 then outputs speech or non-speech information, indicating the presence or absence of other speech for every frame and each direction, to total-amount-of-speech calculator 440 and established-conversation calculator 450; see at least [0047]); and
modifying at least one parameter selected from the group consisting of a compression, a directional microphony, and a noise suppression in processing at least one of the first input signal or the second input signal, depending on the determined number of interlocutors, for generating a first output signal ([0065] teaches that speech processing device 400 in FIG. 4 determines only the left speaker to be a conversational partner of user 200 and narrows the directivity of microphone array 120 to the left, whereas speech processing device 400 in FIG. 5 determines the front, left, and right speakers to be conversational partners of user 200 and widens the directivity of microphone array 120 to a wide range over the left and the right, thus teaching that the directional microphony is modified. Further, [0100]-[0104], at [0103], teach that conversational-partner determining unit 470 determines that four persons (i.e., user 200, a left speaker, a facing speaker, and a right speaker) are in conversation in step S2202, and the process returns to FIG. 6. That is, conversational-partner determining unit 470 determines the left, the facing, and the right speakers to be conversational partners of user 200 and outputs directional information indicating the left, the front, and the right to output sound controller 480. As a result, microphone array 120 is directed toward a wide range covering the front (see FIG. 7A). Speech processing device 400 may also successively determine whether conversation is held, and gradually release the directivity of microphone array 120 if the conversation comes to an end, [0099]).
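As an illustration of the frame-based, power-driven speech/non-speech decision described at [0047], the following minimal Python sketch flags speech per direction and per frame. The frame length, threshold, and all function names are illustrative assumptions, not values or structures disclosed by Yamada:

```python
import numpy as np

def detect_speech_per_direction(direction_signals, frame_len=256, power_threshold=1e-3):
    """Illustrative per-direction, per-frame speech/non-speech decision based
    on short-time power, in the spirit of direction-specific speech detector
    430. All numeric values are assumptions, not taken from the reference.

    direction_signals: dict mapping a direction label ('front', 'left',
    'right') to a 1-D numpy array of beamformed samples for that direction.
    Returns: dict mapping direction -> boolean array, one flag per frame.
    """
    decisions = {}
    for direction, signal in direction_signals.items():
        n_frames = len(signal) // frame_len
        flags = np.empty(n_frames, dtype=bool)
        for i in range(n_frames):
            frame = signal[i * frame_len:(i + 1) * frame_len]
            # Short-time power of the frame; speech is flagged when the power
            # exceeds a fixed threshold (a deliberately crude stand-in for
            # the reference's power-based determination).
            flags[i] = np.mean(frame ** 2) > power_threshold
        decisions[direction] = flags
    return decisions
```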
As to Claim 16, Yamada teaches the limitations of Claim 15, and further teaches wherein: the hearing system is a binaural hearing system having the first hearing instrument (110L) and a second hearing instrument (110R; Figure 1, hearing aid 100 is a binaural hearing aid, [0025]); and the second input signal is generated by a second input transducer (the microphones of the right hearing aid 110R) of the second hearing instrument (110R; [0026]-[0027]).
As to Claim 27, Yamada teaches a hearing system (hearing aid 100, Figure 1) comprising at least a first hearing instrument (110L), wherein the hearing system (100) is configured to carry out the method according to claim 15 (Figure 1 and Figure 6, abstract).
As to Claim 28, Yamada teaches a binaural hearing system (hearing aid 100 is a binaural hearing aid, [0025]) comprising a first hearing instrument (110L, [0026]) with a first input transducer for generating a first input signal from an ambient sound, and a second hearing instrument with a second input transducer for generating a second input signal from the ambient sound (the two right and the two left microphones define microphone array 120; the four microphones are located at predetermined positions with respect to the user wearing hearing aid 100; see at least abstract, [0026], Figure 1), said first and second hearing instruments (binaural hearing aid 100) being configured to carry out the method according to claim 15 (abstract, Figure 1, and Figure 6).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 17 and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US 2013/0144622 A1) in view of Jensen (US 2020/0053486 A1).
As to Claim 17, Yamada teaches the limitations of Claim 15, but does not explicitly teach the method which comprises: generating at least one of a first half-space signal or a second half-space signal, in each case by the directional microphony, based on the first input signal and the second input signal; and, depending on the determined number of interlocutors, applying at least one of the compression, the directional microphony, or the noise suppression to the first half-space signal or the second half-space signal, or separately to each of the first half-space signal and the second half-space signal. However, Jensen, in the related field of hearing aids, teaches a hearing aid that processes a multitude of input signals in a sound environment and provides beamformer filtering units that produce beamformed signals (the claimed first half-space signal and second half-space signal) from the left and right microphone signals M1 and M2 (Figures 1A, 1B, and Figure 3) to extract a signal originating from a particular one of a multitude of spatial segments; see at least abstract, [0018], Figure 3. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Yamada to further provide beamforming signal processing on the left and right microphone signals in order to target speech that is sufficiently spatially separated from noise sources.
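For orientation only, half-space signals of the kind recited can be formed from two microphone signals with a textbook first-order delay-and-subtract beamformer. The sketch below is a generic illustration under an assumed microphone spacing and sample rate; it is not Jensen's disclosed beamformer filtering units:

```python
import numpy as np

def half_space_signals(m1, m2, fs=16000, spacing=0.15, c=343.0):
    """Illustrative first-order delay-and-subtract beamformer deriving a
    left-facing and a right-facing (half-space) signal from two microphone
    signals m1 (left) and m2 (right). Generic textbook technique; the
    spacing, sample rate, and speed of sound are assumptions.
    """
    # Propagation delay between the microphones, rounded to whole samples.
    delay = max(1, int(round(spacing / c * fs)))
    d1 = np.concatenate([np.zeros(delay), m1[:-delay]])  # m1 delayed
    d2 = np.concatenate([np.zeros(delay), m2[:-delay]])  # m2 delayed
    left_half = m1 - d2   # null toward the right: emphasizes the left half-space
    right_half = m2 - d1  # null toward the left: emphasizes the right half-space
    return left_half, right_half
```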
As to Claim 21, Yamada teaches the limitations of Claim 16, and further teaches which comprises: monitoring a speech component in the ambient sound on a basis of the first and second input signals (CPU 160 receives speech picked up by microphone array 120 and executes a control program pre-stored in memory 170; thereby, CPU 160 performs directivity control processing and hearing-assistance processing on the four-channel acoustic signals input via microphone array 120; see at least [0029]). Yamada does not explicitly teach determining an angular direction of a sound source of the ambient sound on a basis of the first and second input signals, and determining a presence and a position of an individual speaker based on the speech component and the angular direction. However, Jensen, in the related field of hearing devices, teaches on Figure 6 and Figure 2, [0204], that the user (U) wears an exemplary binaural hearing system comprising first and second hearing devices located at the left and right ears of the user, as e.g. illustrated in FIG. 1C. Values S(k,l,θi) and S(k,l,θi′) of a signal S in a specific frequency band (k) at a specific time (l) are indicated for two different spatial segments corresponding to angular parameters θi and θi′, respectively. In an embodiment, specific values of the signal S are determined for a multitude of, such as all, segments of the space around the user. The number of segments is preferably larger than or equal to three, such as larger than or equal to four. The segments may represent a uniform angular division of the space around the user, but may alternatively represent different angular ranges, e.g. a predetermined configuration comprising a left and a right quarter-plane in front of the user and a half-plane to the rear of the user. The segments (or cells of FIG. 2, [0165]) may be dynamically determined, e.g. in dependence on a current distribution of sound sources (target and/or noise sound sources). It would have been obvious to modify the signal processing of the hearing aid taught by Yamada to further determine an angular direction of a sound source of the ambient sound on a basis of the first and second input signals, and to determine a presence and a position of an individual speaker based on the speech component and the angular direction, in order to use the input signals from both microphones so that the resulting beamformer is more advanced, with more angular sensitivity; see at least [0151].
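As a generic illustration of determining an angular direction from two input signals and mapping it to spatial segments, the following sketch uses a cross-correlation time-difference-of-arrival estimate. This is a textbook technique under assumed geometry, not Jensen's disclosed processing, and the three-segment mapping is an assumption chosen only to echo the front/left/right division discussed above:

```python
import numpy as np

def estimate_angle(m1, m2, fs=16000, spacing=0.15, c=343.0):
    """Generic time-difference-of-arrival (TDOA) estimate of a source angle
    from two microphone signals, taken at the peak of their cross-correlation.
    Illustration only; geometry parameters are assumptions. Returns the angle
    in degrees relative to broadside (0 deg = straight ahead).
    """
    max_lag = int(np.ceil(spacing / c * fs))        # largest physical delay
    corr = np.correlate(m1, m2, mode='full')
    mid = len(m2) - 1                               # index of zero lag
    window = corr[mid - max_lag:mid + max_lag + 1]  # physically valid lags
    lag = np.argmax(window) - max_lag               # best lag in samples
    sin_theta = np.clip(lag * c / (fs * spacing), -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

def angular_segment(angle_deg):
    """Map an angle to one of three coarse segments (left/front/right);
    the +/-30 degree boundaries are assumptions for illustration."""
    if angle_deg < -30.0:
        return 'left'
    if angle_deg > 30.0:
        return 'right'
    return 'front'
```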
As to Claim 22, Yamada in view of Jensen teaches the limitations of Claim 21, and Yamada further teaches which comprises: based on the first and second input signals, determining for the individual speaker at least one of: a length of a conversation contribution (the amount of speech representing the total time of speech given by the user, [0051]) or an overlap of a conversation contribution with a conversation contribution of the wearer; and identifying the individual speaker therefrom as an interlocutor of the wearer. [0071] and [0072] teach that the four sound sources are a sound source of self-speech and a front sound source, a left sound source, and a right sound source of the other speeches, where the self-speech sound source is S0, the front sound source is S1, the left sound source is S2, and the right sound source is S3. This case involves the processing of the following six combinations: S0,1, S0,2, S0,3, S1,2, S1,3, and S2,3.
Further, [0078] teaches that total-amount-of-speech calculator 440 calculates the total amount of speech Hi,j(p) in a present segment Seg(p) using sound-source-specific speech or non-speech information on the pair (i,j) of sound sources Si,j in a previous one segment, in step S1600. The total amount of speech Hi,j(p) is the sum of the number of frames in which speech from the sound source Si is detected and the number of frames in which speech from the sound source Sj is detected.
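The total amount of speech Hi,j(p) described at [0078] can be written out directly. The sketch below follows Yamada's stated definition (the frame count with speech from Si plus the frame count with speech from Sj); the example flag arrays are illustrative:

```python
import numpy as np

def total_amount_of_speech(speech_flags_i, speech_flags_j):
    """Total amount of speech H_ij(p) for a pair of sound sources (Si, Sj)
    over one segment, per Yamada's description: the number of frames in which
    speech from Si is detected plus the number of frames in which speech from
    Sj is detected. Inputs are boolean per-frame speech/non-speech flags.
    """
    return int(np.sum(speech_flags_i)) + int(np.sum(speech_flags_j))

# Example: over a 10-frame segment, Si speaks in 4 frames and Sj in 3,
# so H_ij(p) = 4 + 3 = 7.
flags_i = np.array([1, 1, 0, 0, 1, 1, 0, 0, 0, 0], dtype=bool)
flags_j = np.array([0, 0, 1, 1, 0, 0, 1, 0, 0, 0], dtype=bool)
print(total_amount_of_speech(flags_i, flags_j))  # prints 7
```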
As to Claim 23, Yamada in view of Jensen teaches the limitations of Claim 21, and the combination teaches which comprises determining the presence and the position of the individual speaker in only one half-space which corresponds to one of two half-space signals: Jensen teaches on [0164] that the beamformer preserves signal components from position (r, θ) perfectly, while maximally suppressing signal components from other directions (this reduces "leakage" of unwanted signal components into X(k, l, θ, r) and ensures an optimal estimate of the noisy signal component originating from position (r, θ)).
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US 2013/0144622 A1) in view of Jensen (US 2020/0053486 A1), and further in view of Sporer et al. (US 2022/0159403 A1), hereinafter "Sporer".
As to Claim 24, Yamada in view of Jensen teaches the limitations of Claim 21. Jensen further teaches on [0204] that specific values of the signal S are determined for a multitude of, such as all, segments of the space around the user; that the number of segments is preferably larger than or equal to three, such as larger than or equal to four; that the segments may represent a uniform angular division of the space around the user, but may alternatively represent different angular ranges, e.g. a predetermined configuration comprising a left and a right quarter-plane in front of the user and a half-plane to the rear of the user; and that the segments (or cells of FIG. 2, [0165]) may be dynamically determined, e.g. in dependence on a current distribution of sound sources (target and/or noise sound sources). Yamada in view of Jensen, however, does not explicitly teach which comprises tracking a change in a position of the individual speaker. Sporer, in the related field of hearing devices, teaches acquiring tracking data concerning a position and/or orientation of the user and determining one or more room acoustic parameters depending on the microphone data and the tracking data ([0031], [0032]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Yamada in view of Jensen to further acquire the orientation or position of the user to determine room acoustic parameters.
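As a generic illustration of tracking a change in a speaker's position (the limitation at issue), the following sketch smooths noisy per-frame angle estimates and flags significant movement. It is an assumed, simplified stand-in, not Sporer's tracking method; the smoothing factor and change threshold are assumptions:

```python
def track_speaker_angle(angle_estimates, alpha=0.1, change_threshold=15.0):
    """Illustrative tracker for a change in a speaker's position:
    exponentially smooths noisy per-frame angle estimates (in degrees) and
    flags frames where the smoothed angle has moved more than
    change_threshold degrees since the last reported position.
    """
    smoothed = angle_estimates[0]
    reported = smoothed
    changes = []  # list of (frame index, new reported angle)
    for t, est in enumerate(angle_estimates[1:], start=1):
        # Exponential moving average suppresses frame-to-frame jitter.
        smoothed = (1.0 - alpha) * smoothed + alpha * est
        if abs(smoothed - reported) > change_threshold:
            reported = smoothed
            changes.append((t, reported))
    return changes
```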
Allowable Subject Matter
Claims 18 -20, 25 and 26 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUNITA JOSHI whose telephone number is (571) 270-7227. The examiner can normally be reached 8:00 am-3:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SUNITA JOSHI/Primary Examiner, Art Unit 2691