DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/24/2025 has been entered. Claims 1-20 are pending in the application and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The response filed on 11/24/2025 has been entered and considered in this Office Action. Claims 1-20 have been examined.
Response to Arguments
Applicant's arguments filed 11/24/2025 have been fully considered. Examiner responds as follows:
Applicant’s arguments with respect to claim 1 (also representative of claims 8 and 15) state that “The rejections to these claims are traversed by appropriate amendments to these claims…. Applicant respectfully submits that the cited references are silent on a second sound that is different from a plurality of voices and also silent on frequency. As such, the cited references are also silent on attenuating the frequency of the second sound, as claimed.”
The examiner respectfully disagrees. Lord teaches identifying engine noise (see Lord, col 129 lines 46-58). Under the broadest reasonable interpretation of a frequency of the second sound as noise (see Specification, [0038]), Lord teaches using a noise model to improve speaker-specific processing in a noisy environment (see Lord, col 22, lines 57-64); noise models in speech processing characterize how unwanted acoustic signals mix with speech, and attenuation approaches that reduce undesired sounds to improve quality or intelligibility include traditional statistical models, signal-processing techniques, and modern deep-learning approaches. This teaching is mapped to identifying a second sound at the hearing aid, wherein the second sound is different from the plurality of voices, and attenuating a frequency of the second sound. Additionally, prior art Visser et al. (US PgPub. 2007/0021958) teaches separation of speech signals in a noisy environment, and specifically teaches attenuation of other frequencies to present clean speech (see Visser, [0014, 0064, 0066]). Therefore, the rejections of claims 1, 8, and 15 under 35 U.S.C. 103 are sustained and updated accordingly.
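For illustrative context only, the following minimal Python sketch shows one conventional way a known noise band may be attenuated in a mixed signal (a band-stop filter). It is offered solely as background on the concept of attenuating a frequency of a second sound; it is not a reconstruction of Lord's or Visser's implementations, and the sample rate, noise band, and SciPy usage are all assumptions.

    # Illustrative sketch only: attenuating an identified noise band (e.g.,
    # engine noise) in a mixed signal. Assumes the noise band is already
    # known; none of this code is taken from Lord or Visser.
    import numpy as np
    from scipy import signal

    FS = 16_000  # sample rate (Hz), assumed

    def attenuate_band(x: np.ndarray, low_hz: float, high_hz: float,
                       order: int = 4) -> np.ndarray:
        """Apply a Butterworth band-stop filter suppressing [low_hz, high_hz]."""
        sos = signal.butter(order, [low_hz, high_hz], btype="bandstop",
                            fs=FS, output="sos")
        return signal.sosfiltfilt(sos, x)

    # Example: a voice-band tone plus low-frequency "engine-like" noise at 90 Hz.
    t = np.arange(FS) / FS
    voice_like = np.sin(2 * np.pi * 440 * t)
    engine_like = 0.8 * np.sin(2 * np.pi * 90 * t)
    cleaned = attenuate_band(voice_like + engine_like, 60.0, 120.0)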
Regarding the art rejections of the remaining dependent claims under 35 U.S.C. 103, to the extent those claims are argued for substantially the same reasons presented in the Remarks filed 11/24/2025 with respect to independent claims 1, 8, and 15, Examiner respectfully directs Applicant to the responses provided above for claims 1, 8, and 15. For at least the same reasons, Examiner respectfully disagrees; Applicant's arguments have been fully considered but are not persuasive.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lord et al., US Patent 10,875,525 (cited in IDS), in view of Bhowmik et al., US PgPub. 2020/0219515, and further in view of Tanaka, US PgPub. 2020/0184845.
Regarding claim 1, Lord teaches a system comprising: one or more processors; and logic encoded in one or more non-transitory computer-readable storage media for execution by the one or more processors and when executed operable to cause the one or more processors to perform operations (see Lord, Fig. 4) comprising: receiving sound at a hearing aid that is worn by a user (see Lord, col 11 lines 34-37, "At block 3.101, the process performs receiving data representing a speech signal obtained at a hearing device associated with a user, the speech signal representing an utterance of a speaker"; see Lord, col 9 lines 24-31, describing earpiece devices worn by a user as shown in Fig. 1B); detecting a voice from the sound, wherein the voice is one voice of a plurality of voices detected from the sound (see Lord, col 11 lines 34-37, describing detecting a speaker utterance (voice)); identifying the voice (see Lord, col 11 lines 38-39, "At block 3.102, the process performs identifying the speaker based on the data representing the speech signal"; see Lord, col 96 lines 36-48, describing voice identification when multiple speakers are speaking in steps 15.1100 and 15.1801 in Fig. 15.18); and providing identity information associated with the voice, wherein the identity information is provided to the user based on a predetermined announcement policy, wherein the predetermined announcement policy determines a timing of the providing of the identity information, and wherein the timing involves delivering the identity information at a delayed time (see Lord, col 11 lines 40-44, "At block 3.103, the process performs determining speaker-related information associated with the identified speaker. At block 3.104, the process performs informing the user of the speaker-related information via the hearing device"; Lord, col 11 lines 55-60, "At block 3.201, the process performs informing the user of an identifier of the speaker. In some embodiments, the identifier of the speaker may be or include a given name, surname (e.g., last name, family name), nickname, title, job description, or other type of identifier of or associated with the speaker" (predetermined policy); Lord, col 22 lines 22-29, in multiple-speaker conversations, when a speaker takes a turn during the ongoing conversation, informing the user of a name or other speaker-related information associated with the speaker, such that the process may, in substantially real time, provide the user with indications of a current speaker (timing of the announcement); Lord, col 28, lines 55-58, "At block 3.7801, the process waits for a time period before jumping in to provide the speaker-related information."); identifying a second sound at the hearing aid, wherein the second sound is different from the plurality of voices (see Lord, col 129 lines 46-58, identifying engine noise, which is different from the plurality of voices); attenuating a frequency of the second sound (see Lord, col 22, lines 57-64, further teaching use of a noise model to improve speaker-specific processing in a noisy environment; as noted above, noise models characterize how unwanted acoustic signals mix with speech, and attenuation approaches include statistical models, signal-processing techniques, and deep-learning approaches); determining a direction of the voice relative to a microphone of the hearing aid (see Lord, col 122 lines 33-46, teaching an Ability Enhancement Facilitator System ("AEFS") 17.100 enhancing the ability of the user 17.104 to operate his vehicle 17.110b via the wearable device 17.120a (hearing aid), where the microphone in the AEFS device provides directional information of the audio source; a POSITA would understand the directional information of the audio source to encompass directional information of a voice); determining a position of the voice relative to the user (see Lord, col 122 lines 46-56, teaching determining a position of an audio source; a POSITA would understand the position of the audio source to encompass the position of a voice).
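As illustrative background on determining a direction of a voice relative to a microphone (this is not Lord's AEFS implementation), a direction of arrival is conventionally estimated from the time difference of arrival between two microphones; the microphone spacing and sample rate below are assumed values.

    # Illustrative sketch only: estimating direction of arrival (DOA) from
    # the time difference of arrival (TDOA) between two microphones via
    # cross-correlation. Mic spacing and sample rate are assumed.
    import numpy as np

    FS = 16_000             # sample rate (Hz), assumed
    MIC_SPACING = 0.15      # distance between microphones (m), assumed
    SPEED_OF_SOUND = 343.0  # m/s

    def estimate_doa_deg(left: np.ndarray, right: np.ndarray) -> float:
        """Return the estimated source angle (degrees) from broadside."""
        corr = np.correlate(left, right, mode="full")
        lag = int(np.argmax(corr)) - (len(right) - 1)  # lag in samples
        tdoa = lag / FS                                 # seconds
        # Clamp so arcsin stays in its domain under noisy estimates.
        ratio = np.clip(tdoa * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
        return float(np.degrees(np.arcsin(ratio)))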
However, Lord fails to teach identifying the voice as a primary voice based on the position of the voice relative to the user.
However, Bhowmik teaches receiving sound at a hearing aid that is worn by a user (see Bhowmik, [0036], "The sound may be relayed to the ear of a user (e.g., via a hearing aid receiver)"); detecting a voice from the sound, wherein the voice is one voice of a plurality of voices detected from the sound (see Bhowmik, [0081], teaching identifying voice signals); identifying the voice (see Bhowmik, [0081], discussing identification of the particular voice or speaker); providing identity information associated with the voice (see Bhowmik, [0082-0083], displaying representations of identified voices); determining a direction of the voice relative to a microphone of the hearing aid (see Bhowmik, [0040], "the user may select a person, select a direction, or both"); determining a position of the voice relative to the user (see Bhowmik, [0037], discussing determining the position of a voice from the user's reference point, i.e., the relative position of the speaker with respect to the user); and identifying the voice as a primary voice based on the position of the voice relative to the user (see Bhowmik, [0040], "the user may select a person, select a direction, or both. The system may identify where Gilbert [primary voice] is, as they move relative to the ear-wearable devices"; see Bhowmik, [0049], the system identifies different speakers based on direction and speaker recognition; see Bhowmik, [0084], discussing priority/primary voice identification).
Lord and Bhowmik are considered to be analogous to the claimed invention because they relate to ear-wearable devices for processing input audio signals. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Lord, which provide cognitive help to a user by identifying voices and providing identity information associated with such voices, with the audio processing teachings of Bhowmik to identify distinct voice signals (see Bhowmik, [0002]).
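As an illustrative sketch of identifying a primary voice based on position relative to the user, one plausible approach scores each detected voice by proximity and how directly in front of the user it is; the data structure and scoring below are assumptions, not Bhowmik's method.

    # Illustrative sketch only: choosing a "primary" voice from several
    # detected voices based on position relative to the user. The fields
    # and weighting are assumed, not taken from Bhowmik.
    import math
    from dataclasses import dataclass

    @dataclass
    class DetectedVoice:
        speaker_id: str
        angle_deg: float   # bearing relative to the user's facing direction
        distance_m: float  # estimated distance from the user

    def pick_primary(voices: list[DetectedVoice]) -> DetectedVoice:
        """Score each voice: closer and more frontal ranks higher."""
        def score(v: DetectedVoice) -> float:
            frontalness = math.cos(math.radians(v.angle_deg))  # 1.0 = straight ahead
            return frontalness / max(v.distance_m, 0.1)
        return max(voices, key=score)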
However, Lord in view of Bhowmik fails to teach determining a break in a conversation; delivering the identity information based on the delayed time and during the break in the conversation.
However, Tanaka teaches determining a break in a conversation (see Tanaka, [0045], detecting a pause of voice from the conversation partner or a blank period of conversation); and delivering the identity information based on the delayed time and during the break in the conversation (see Tanaka, [0045], "Further, in place of audibly outputting the name from one of the pair of channels of stereo earphone 20, the name can be started to audibly output from both the pair of channels of stereo earphone 20 during a blank period of conversation by detecting a beginning of pause of voice from the conversation partner. Or, both the output of audibly informed name from only one of the pair of channels of stereo earphone 20 and the output of audibly informed name during a blank period of conversation by detecting a beginning of pause of voice from the conversation partner can be adopted in parallel for the purpose differentiating the audibly informed name from the voice from the conversation partner").
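As illustrative background on detecting a blank period in conversation (this is not Tanaka's implementation; the frame size and thresholds are assumed), a simple energy-based pause detector can be sketched as follows, after which a queued announcement could be delivered.

    # Illustrative sketch only: a simple energy-based detector for a pause
    # ("blank period") in conversation. Thresholds and frame sizes are
    # assumed values.
    import numpy as np

    FS = 16_000
    FRAME = 320            # 20 ms frames at 16 kHz, assumed
    ENERGY_THRESH = 1e-4   # silence threshold, assumed
    MIN_PAUSE_FRAMES = 25  # ~0.5 s of continuous silence, assumed

    def find_pause(x: np.ndarray) -> int | None:
        """Return the sample index where a qualifying pause begins, or None."""
        silent_run = 0
        for i in range(0, len(x) - FRAME, FRAME):
            frame = x[i:i + FRAME]
            if float(np.mean(frame ** 2)) < ENERGY_THRESH:
                silent_run += 1
                if silent_run >= MIN_PAUSE_FRAMES:
                    return i - (MIN_PAUSE_FRAMES - 1) * FRAME  # pause start
            else:
                silent_run = 0
        return None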
Lord, Bhowmik, and Tanaka are considered to be analogous to the claimed invention because they relate to ear-wearable devices for processing input audio signals. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Lord in view of Bhowmik, which provide cognitive help to a user by identifying voices and providing identity information associated with such voices, with the audio processing teachings of Tanaka to improve personal identification for assisting elderly people or patients (see Tanaka, [0002]).
Regarding claim 2, Lord in view of Bhowmik further in view of Tanaka teach the system of claim 1. Lord further teaches determining characterization information from the voice (see Lord, col 10 lines 35-37, "[t]he speaker recognizer 214 identifies the speaker based on acoustic properties of the speaker's voice, as reflected by the speech data received from the hearing device 120"); matching the characterization information from the voice to characterization information in a database (see Lord, col 10 lines 38-40, "The speaker recognizer 214 may compare a speaker voice print to previously generated and recorded voice prints stored in the data store 240 in order to find a best or likely match"); and identifying a person based on the matching (see Lord, col 11 lines 10-14, "the speaker recognizer 214 may provide to the agent logic 220 indications of multiple candidate speakers, each having a corresponding likelihood. The agent logic 220 may then select the most likely candidate based on the likelihoods alone").
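As an illustrative sketch of matching a voice print against stored prints and selecting the most likely candidate (the embedding representation and threshold are assumptions; this is not Lord's speaker recognizer 214):

    # Illustrative sketch only: matching a speaker "voice print" (embedding)
    # against enrolled prints by cosine similarity and picking the most
    # likely candidate. Embeddings and threshold are assumed.
    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def best_match(query: np.ndarray, enrolled: dict[str, np.ndarray],
                   threshold: float = 0.75) -> str | None:
        """Return the name of the best-matching enrolled speaker, or None."""
        if not enrolled:
            return None
        scores = {name: cosine(query, emb) for name, emb in enrolled.items()}
        name, score = max(scores.items(), key=lambda kv: kv[1])
        return name if score >= threshold else None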
Regarding claim 3, Lord in view of Bhowmik further in view of Tanaka teach the system of claim 1. Lord further teaches detecting a plurality of voices from the sound (see Lord, col 11 lines 10-12, "the speaker recognizer 214 may provide to the agent logic 220 indications of multiple candidate speakers"); identifying a primary voice from the plurality of voices (see Lord, col 11 lines 12-17, "The agent logic 220 may then select the most likely candidate based on the likelihoods alone or in combination with other information"); and providing the identity information, wherein the identity information is associated with the primary voice (see Lord, col 10 line 63 - col 11 line 5, providing the identity of the current speaker; see Lord, col 11 lines 40-44, providing the identity information associated with the speaker).
Regarding claim 4, Lord in view of Bhowmik further in view of Tanaka teach the system of claim 1. Lord further teaches generating a notification that identifies a person associated with the voice, wherein the notification comprises the identity information (see Lord, col 11 lines 55-56, informing of the identifier of the speaker); and providing the identity information in the notification (see Lord, col 11 lines 43-44, informing the user of the speaker-related information via the hearing device).
Regarding claim 5, Lord in view of Bhowmik further in view of Tanaka teach the system of claim 1. Lord further teaches wherein the sound comprises one or more non-voice sounds (see Lord, col 22, lines 58-64, discussing processing in noisy environments, which include non-voice sounds), wherein the logic when executed is further operable to cause the one or more processors to perform operations comprising: detecting one or more non-voice sounds (see Lord, col 22, lines 58-64, discussing use of a noise model to improve operation in a noisy environment; Lord, col 129 lines 46-57, discussing determining engine noise for different vehicles); and filtering at least one non-voice sound of the one or more non-voice sounds (see Lord, col 22, lines 58-64, discussing use of a noise model to improve operation in a noisy environment, i.e., filtering non-voice sounds).
Regarding claim 6, Lord in view of Bhowmik further in view of Tanaka teach the system of claim 1. Lord further teaches establishing communication between the hearing aid and a mobile device (see Lord, col 9 lines 46-56, discussing a smartphone hearing device); and accessing an Internet via the mobile device, wherein the hearing aid sends and receives data to and from the Internet via the mobile device (see Lord, col 9 lines 55-64, discussing access to remote devices/the Internet).
Regarding claim 7, Lord in view of Bhowmik further in view of Tanaka teach the system of claim 1. Lord further teaches identifying one or more voices from the sound in real time based on artificial intelligence (see Lord, col 18 lines 5-10, teaching that techniques for speech recognition may include neural networks, stochastic modeling, or the like).
Regarding claim 8, the claim is a computer-readable medium claim corresponding to the system claim presented in claim 1 and is rejected under the same grounds stated above regarding claim 1.
Regarding claim 9, the claim is a computer-readable medium claim corresponding to the system claim presented in claim 2 and is rejected under the same grounds stated above regarding claim 2.
Regarding claim 10, the claim is a computer-readable medium claim corresponding to the system claim presented in claim 3 and is rejected under the same grounds stated above regarding claim 3.
Regarding claim 11, the claim is a computer-readable medium claim corresponding to the system claim presented in claim 4 and is rejected under the same grounds stated above regarding claim 4.
Regarding claim 12, Lord in view of Bhowmik further in view of Tanaka teach the computer-readable storage medium of claim 8. Lord further teaches wherein the identity information is provided in an in-ear notification, wherein the in-ear notification is audible to the user of the hearing aid (see Lord, col 11 lines 43-44, informing the user of the speaker-related information via the hearing device).
Regarding claim 13, the claim is a computer-readable medium claim corresponding to the system claim presented in claim 6 and is rejected under the same grounds stated above regarding claim 6.
Regarding claim 14, the claim is a computer-readable medium claim corresponding to the system claim presented in claim 7 and is rejected under the same grounds stated above regarding claim 7.
Regarding claim 15, the claim is a method claim corresponding to the system claim presented in claim 1 and is rejected under the same grounds stated above regarding claim 1.
Regarding claim 16, the claim is a method claim corresponding to the system claim presented in claim 2 and is rejected under the same grounds stated above regarding claim 2.
Regarding claim 17, the claim is a method claim corresponding to the system claim presented in claim 3 and is rejected under the same grounds stated above regarding claim 3.
Regarding claim 18, the claim is a method claim corresponding to the system claim presented in claim 4 and is rejected under the same grounds stated above regarding claim 4.
Regarding claim 19, the claim is a method claim corresponding to the computer-readable medium claim presented in claim 12 and is rejected under the same grounds stated above regarding claim 12.
Regarding claim 20, the claim is a method claim corresponding to the system claim presented in claim 6 and is rejected under the same grounds stated above regarding claim 6.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kwasiborski et al., US Patent 11,594,228, teaches a method to identify the user of a hearing aid; it further notes that, in hearing devices, a microphone array beamformer is often used for spatially attenuating background noise sources (see Kwasiborski, Fig. 1, [0070]).
Carter et al., US PgPub. 2022/0076663, teaches a method to identify one or more words in speech data and modify them as needed for the hearing aid (see Carter, Fig. 4).
Wexler et al., US PgPub. 2021/0258703, teaches processing of an audio signal in an individual's hearing aid (see Wexler, Fig. 19).
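Regarding the microphone-array beamforming noted for Kwasiborski above, the following minimal Python sketch illustrates a two-microphone delay-and-sum beamformer that spatially attenuates off-axis sources. All parameters are assumed values, and this is not Kwasiborski's implementation.

    # Illustrative sketch only: a two-microphone delay-and-sum beamformer
    # steered toward a chosen angle, so off-axis noise combines out of
    # phase and is attenuated. Spacing and sample rate are assumed.
    import numpy as np

    FS = 16_000
    MIC_SPACING = 0.15      # m, assumed
    SPEED_OF_SOUND = 343.0  # m/s

    def delay_and_sum(left: np.ndarray, right: np.ndarray,
                      steer_deg: float) -> np.ndarray:
        """Align the two channels for a source at steer_deg and average."""
        tdoa = MIC_SPACING * np.sin(np.radians(steer_deg)) / SPEED_OF_SOUND
        shift = int(round(tdoa * FS))  # integer-sample alignment (simplified)
        aligned = np.roll(right, shift)
        return 0.5 * (left + aligned)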
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANDINI SUBRAMANI whose telephone number is (571)272-3916. The examiner can normally be reached Monday - Friday, 12:00 pm - 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh M Mehta can be reached at (571)272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NANDINI SUBRAMANI/ Examiner, Art Unit 2656
/BHAVESH M MEHTA/ Supervisory Patent Examiner, Art Unit 2656