Prosecution Insights
Last updated: April 19, 2026
Application No. 18/097,955

Systems and Methods for Optimizing Voice Notifications Provided by Way of a Hearing Device

Latest Office Action: Final Rejection (§103)
Filed: Jan 17, 2023
Examiner: VOGT, JACOB BUI
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: Sonova AG
OA Round: 4 (Final)

Grant Probability: 57% (Moderate)
Expected OA Rounds: 5-6
Projected Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career allow rate: 57% (4 granted of 7 resolved cases; -4.9% vs TC average)
Interview lift: strong, +100.0% (allowance rate in resolved cases with an interview vs. without)
Typical timeline: 2y 10m average prosecution; 33 applications currently pending
Career history: 40 total applications across all art units
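
The "+100.0% interview lift" reads as a relative comparison of this examiner's allowance rate in resolved cases with an interview against those without. Here is a minimal Python sketch of that arithmetic, assuming a hypothetical per-case split (the page shows only the aggregates) that happens to reproduce the dashboard figures:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

# Hypothetical split consistent with "4 granted / 7 resolved" and a
# +100.0% interview lift; the real per-case data is not shown on this page.
cases = [
    ResolvedCase(granted=True, had_interview=True),    # 1/1 with interview
    ResolvedCase(granted=True, had_interview=False),
    ResolvedCase(granted=True, had_interview=False),
    ResolvedCase(granted=True, had_interview=False),
    ResolvedCase(granted=False, had_interview=False),
    ResolvedCase(granted=False, had_interview=False),
    ResolvedCase(granted=False, had_interview=False),  # 3/6 without
]

def allow_rate(group):
    return sum(c.granted for c in group) / len(group)

overall = allow_rate(cases)                                         # 4/7 ≈ 57%
with_iv = allow_rate([c for c in cases if c.had_interview])         # 1/1 = 100%
without_iv = allow_rate([c for c in cases if not c.had_interview])  # 3/6 = 50%
lift = (with_iv - without_iv) / without_iv                          # +100.0%

print(f"career allow rate: {overall:.0%}, interview lift: {lift:+.1%}")
```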

Statute-Specific Performance

§101: 35.1% (-4.9% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 7 resolved cases.
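
Each delta above is a percentage-point difference between the examiner's statute-specific rate and the Tech Center average estimate. The baseline itself is not printed, but it can be recovered as rate minus delta; notably, all four statutes imply the same ~40% estimate. A small sketch using only the figures shown above:

```python
# Examiner's statute-specific rates and their deltas vs the Tech Center
# average estimate, both taken from the table above (percentage points).
examiner_rate = {"§101": 35.1, "§103": 43.8, "§102": 8.7, "§112": 10.6}
delta_vs_tc   = {"§101": -4.9, "§103": 3.8, "§102": -31.3, "§112": -29.4}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # implied TC average estimate
    print(f"{statute}: examiner {rate}% vs TC avg ~{tc_avg:.1f}% "
          f"({delta_vs_tc[statute]:+.1f} pts)")
# Every statute implies the same ~40.0% Tech Center baseline.
```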

Office Action

§103
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on 01/12/2026. Claims 1-8, 10-13, and 15-20 are pending and have been examined. This action has been made FINAL.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The reply filed on 01/12/2026 has been entered. Applicant's arguments with respect to claims 1-8, 10-13, and 15-20 have been considered but are moot in view of the new ground(s) of rejection necessitated by the amendments.

Claim Interpretation

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification. The following terms in the claims have been given the following interpretations in light of the specification:

Hearing loss profile: paragraph [0029], "The hearing loss profile 304 may include any suitable information associated with hearing capabilities of user 204, fitting parameters used to fit hearing device 202 to user 204, current settings of hearing device 202, and/or hearing preferences of user 204. For example, hearing loss profile 304 may include information associated with a hearing impairment type of user 204, an amount of hearing impairment of user 204, an audiogram of user 204, and/or any other suitable information." Thus, a hearing loss profile is a per-user profile comprising any suitable information associated with hearing capabilities of the user, fitting parameters used to fit hearing devices to the user, current settings of hearing devices, or hearing preferences of the user. This definition is used for purposes of searching for prior art, but cannot be incorporated into the claims.

Voice notification: paragraph [0028], "Hearing device 202 may be configured to provide one or more voice notifications to user 204 during use of hearing device 202. Such voice notifications may be used to inform user 204 regarding a status of hearing device 202, a status of another device communicatively coupled to hearing device 202, and/or for any other suitable purpose. To illustrate an example, an exemplary voice notification may include hearing device 202 playing a message indicating that 'your hearing device battery is low' to inform user 204 that a battery of hearing device 202 needs to be charged." Thus, a voice notification is any message that uses language to communicate information to a user via a hearing device. This definition is used for purposes of searching for prior art, but cannot be incorporated into the claims.

Acoustic parameters: paragraph [0030], "Acoustic parameters 308 may correspond to any suitable attribute of a voice notification that may affect the perceptibility of the voice notification for user 204. For example, acoustic parameters 308 may include one or more of a pitch, a speed, a frequency band gain, and/or a tone associated with a voice notification." Thus, an acoustic parameter is any attribute of a voice notification that may affect the perceptibility of the voice notification. This definition is used for purposes of searching for prior art, but cannot be incorporated into the claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8, 10-13, and 16-20 are rejected under 35 U.S.C. 103 as obvious over European Patent Publication 3772735 A1 (Heckmann et al.) in view of US Patent Publication 2022/0322015 A1 (Song), and further in view of US Patent 8,650,035 B1 (Conway).

Claim 1

Regarding claim 1, Heckmann et al. teach a system (Heckmann et al. ¶ [0001], "The present invention regards an assistance system and a corresponding method for assisting a user, wherein the system and method use speech output for providing information to a user.") comprising: a memory that stores instructions (Heckmann et al. ¶ [0028], "The processor 4 is further connected to a memory. In the memory 7, the obtained information on a hearing capacity of the assisted person 2 may be stored. Further, all executable programs that are needed for the analysis of the acoustic environment, generation of a speech presentation, a database for storing vocabulary for the speech presentation, a table for determining a modality for the speech presentation based on the analysis result of the acoustic environment, and the like, are stored in this memory 7. The processor 4 is able to retrieve information from the memory 7 and store back information to the memory 7."); and a processor communicatively coupled to the memory and configured to execute the instructions to perform a process (Heckmann et al. ¶ [0028]) comprising:

accessing a hearing loss profile of a user of a hearing device (Heckmann et al. ¶ [0010], "This information on the assisted person's hearing capacity can be stored in a memory a priori or it can be (continuously) analyzed from an interaction between the assisted person and the assistance system."), the hearing loss profile including information associated with a hearing impairment type of the user (Heckmann et al. ¶ [0049], "the information on the assisted person's hearing capacity does not have to be limited to an audiogram but might also contain the results of other assessments (e.g. hearing in noise test, modified rhyme test ...)." Information regarding the hearing capacity of a user based on various hearing diagnostic tests is considered analogous to information associated with a hearing impairment type) and an audiogram of the user (Heckmann et al. ¶ [0049], "Information on the assisted person's hearing capacity might be represented in the form of an audiogram.");

determining, based on the hearing loss profile including the information associated with the hearing impairment type of the user and the audiogram of the user, one or more acoustic parameters (Heckmann et al. ¶ [0010], "Based on the knowledge about the assisted person's hearing capacity and the estimated interference, the modality of speech presentation is determined such that an expected intelligibility of the speech output is optimized (improved)"; ¶ [0015], "It is particularly preferred that the determined modality defines parameters of the speech presentation including at least one of: voice, frequency, timing, combination of speech output and gestures, intensity, prosody, speech output complexity level, and position of the speech output origin as perceived by the user.") [for voice notifications]; and

directing the hearing device to apply the one or more acoustic parameters (Heckmann et al. ¶ [0010], "The determined modality is then used for the speech presentation. The modality defines the parameters to be used for the speech presentation including a position of a perceived origin of the speech output. Using these defined parameters, a speech presentation signal is generated.") [to the voice notification] to be presented to the user by way of the hearing device (Heckmann et al. ¶ [0010], "This speech presentation signal is then supplied at least to a loudspeaker for outputting the intended speech output presentation and to other actuators of the assistance system to provide the additional multimodal information of the speech presentation to the assisted person."), [the voice notification transmitted, during use of the hearing device, to the hearing device from an external device,] the voice notification having a first voice [accent] type (Heckmann et al. ¶ [0043], "Before the modality can be applied in step S8 at first, information to be provided to the user is generated in step S6. The generated information is then converted into an intended speech output in step S7." Generating speech output (S7) from textual information (S6) implies an initial voice type associated with the speech output), wherein the directing of the hearing device to apply the one or more acoustic parameters to the voice notification includes directing the hearing device to modify the voice notification from having the first voice [accent] type to having a second voice [accent] type different than the first voice [accent] type (Heckmann et al. ¶ [0043], "The parameters defined in the determined modality are then applied on this intended speech output to generate the speech presentation."; ¶ [0015], "modality defines parameters of the speech presentation including at least one of: voice". Applying a modality (S8) to speech output (generated in S7) is considered analogous to modifying the voice notification from having a first voice type to having a second voice type) based on the hearing loss profile of the user (Heckmann et al. ¶ [0015], "having knowledge about the individual hearing capacity, the system will select a voice that can easily be understood by the assisted person.").

Heckmann et al. do not explicitly teach all of voice notifications.

However, Song teaches determining, based on the hearing loss profile including the information associated with the hearing impairment type of the user (Song ¶ [0057], "the providing unit 140 may include hearing data of the user including volume and a frequency the user prefers, an amplification value of a degree to which the user does not feel a sense of difference, or a volume and frequency range.") [and the audiogram of the user], one or more acoustic parameters for voice notifications (Song ¶ [0059], "The providing unit 140 may set control parameters of at least one or more of a change in amplification value, volume adjustment, and frequency adjustment according to an environmental change based on hearing data of the user and a natural language or a non-natural language specified according to the matched similar data and may provide a sound of one side in a user-customized form."); and directing the hearing device to apply the one or more acoustic parameters to a voice notification (Song ¶ [0094], "According to the matching of the similar data, the smart hearing device according to an embodiment of the inventive concept may set control parameters of at least any one or more of a change in amplification value, volume adjustment, and frequency adjustment in the natural language, may extract feedback in response to the sound data of 'Hello', and may provide the user with the feedback with a sound. For example, the smart hearing device according to an embodiment of the inventive concept may provide the user with a voice message notification, such as 'Yes, Hello', 'Long time no see', or 'May I help you?', as feedback on the sound data of 'Hello'.") to be presented to the user by way of the hearing device (Song ¶ [0036], "The inventive concept is a technology about a smart hearing device for distinguishing a natural language or a non-natural language, an artificial intelligence hearing system, and a method thereof, which is the gist of analyzing sound data of a voice signal and a noise signal received from smart hearing devices worn on ears of a user and providing control parameter settings and feedback according to matching of similar data depending on the determined result."), [the voice notification transmitted, during use of the hearing device, to the hearing device from an external device,] the voice notification having a first voice [accent] type (Song ¶ [0094], "smart hearing device according to an embodiment of the inventive concept may provide the user with a voice message notification, such as 'Yes, Hello', 'Long time no see', or 'May I help you?', as feedback on the sound data of 'Hello'." A voice message implies having a first voice type).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Heckmann et al.'s determining and applying of acoustic parameters to voice information to include Song's voice notifications, because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Heckmann et al.'s voice information and Song's voice notification perform the same general and predictable function, the predictable function being providing voice information in the form of speech that has been modified using acoustic parameters based on a user's hearing capacity. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself - that is, in the substitution of Heckmann et al.'s voice information with Song's voice notification. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.

Heckmann et al. in view of Song do not explicitly disclose all of a voice accent type.

However, Conway discloses applying one or more acoustic parameters to a voice [notification] to be presented to a user by way of a hearing device (Conway ¶ (36), "prior to beginning a conference call with the second parties 14b...14n, a first party 14a selects the at least one conversion heuristic 25a from speech converter library 24a by using a converter selection interface 32a for converting a speech signal 20b ... After the conversion heuristic or heuristics 25 have been selected and a conference call is initiated, the parties 14 will receive converted speech signal 22 in accordance with the particular conversion heuristic or heuristics 25 selected for the respective parties 14" See Fig. 2. Parties 14b-14n (the receiving parties) are implied to use a hearing device in order to receive converted speech signals during a conference call. Converting a speech signal is considered analogous to applying acoustic parameters to a voice), the voice [notification] transmitted, during use of the hearing device, to the hearing device from an external device (Conway Fig. 2 illustrates an embodiment of the voice conversion system during a conference call. Either first party 14a or the speech converter 18a associated with first party 14a is considered analogous to an external device with respect to parties 14b-14n), the voice notification having a first accent voice type (Conway ¶ (13), "Speech converter 18 may be any speech converting device known to those skilled in the art capable of receiving an original voice signal 20 and converting the received original signal 20 to a different voice signal 22. For example, speech converter 18 may be configured to perform speech conversions including ... accent translations"), wherein the directing of the hearing device to apply the one or more acoustic parameters to the voice notification includes directing the hearing device to modify the voice notification from having the first accent voice type to having a second accent voice type different than the first accent voice type (Conway ¶ (36), "In one embodiment, prior to beginning a conference call with the second parties 14b . . . 14n, a first party 14a selects the at least one conversion heuristic 25a from speech converter library 24a.... For example, the party 14a may choose at least one conversion heuristic 25a that converts a speech signal 20b from speech spoken with a Texas accent to speech spoken with a British accent for transmitting to first party 14b, and selects at least one conversion heuristic 25b that converts speech spoken with a Texas accent to speech spoken with a New York accent for transmitting to a second party 14c.") [based on the hearing loss profile of the user].

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Heckmann et al. in view of Song to incorporate Conway's voice accent conversion. The suggestion/motivation for doing so would have been that, "In different regions of the United States, for example, people speak with widely varying accents, some of which may sound quite strong and be quite difficult to understand for a person from another region of the country," as noted by the Conway disclosure in paragraph (7).

Claim 2

Regarding claim 2, the rejection of claim 1 is incorporated. Heckmann et al. further teach wherein the process further comprises: detecting a change in the hearing loss profile of the user (Heckmann et al. ¶ [0045], "The reaction of the person 2 is monitored in step S10 by the camera 10 and microphone 5. In Step S11, a deviation from an expected reaction of a person 2 is determined and from such deviation, a hearing capacity model is generated or updated in step S12. This updated hearing capacity model is then stored in step S13 in the memory 7 and is available for future application"; ¶ [0046], "Apart from a deviation of the assisted person's reaction from an expected reaction, it is also possible that the assisted person 2 explicitly gives feedback when he did not understand the assistance system 1. Such a direct feedback could either be a sentence like 'I could not understand you' or 'please repeat'. Additionally, from images recorded by the camera 10, the assistance system 1 may interpret facial expressions and other expressive gestures allowing to conclude that the assisted person 2 has difficulties understanding the assistance system 1."); determining, based on the change in the hearing loss profile of the user, one or more additional acoustic parameters (Heckmann et al. ¶ [0047], "From these reactions on speech presentation, the assisted person's hearing capacity is inferred"; ¶ [0048], "Such information on hearing capacity of the assisted person 2 may be used to update the information that was initially obtained"; ¶ [0010], "This information on the assisted person's hearing capacity can be stored in a memory a priori or it can be (continuously) analyzed from an interaction between the assisted person and the assistance system. Based on the knowledge about the assisted person's hearing capacity and the estimated interference, the modality of speech presentation is determined such that an expected intelligibility of the speech output is optimized (improved)."; ¶ [0015], "It is particularly preferred that the determined modality defines parameters of the speech presentation including at least one of: voice, frequency, timing, combination of speech output and gestures, intensity, prosody, speech output complexity level, and position of the speech output origin as perceived by the user.") [for voice notifications]; and directing the hearing device to apply the one or more additional acoustic parameters [to the voice notification] to be presented to the user by way of the hearing device (Heckmann et al. ¶ [0010], "The determined modality is then used for the speech presentation. The modality defines the parameters to be used for the speech presentation including a position of a perceived origin of the speech output. Using these defined parameters, a speech presentation signal is generated. … This speech presentation signal is then supplied at least to a loudspeaker for outputting the intended speech output presentation and to other actuators of the assistance system to provide the additional multimodal information of the speech presentation to the assisted person.").

Song further teaches directing the hearing device to apply the one or more additional acoustic parameters to the voice notification to be presented to the user by way of the hearing device (Song ¶ [0094], "According to the matching of the similar data, the smart hearing device according to an embodiment of the inventive concept may set control parameters of at least any one or more of a change in amplification value, volume adjustment, and frequency adjustment in the natural language, may extract feedback in response to the sound data of 'Hello', and may provide the user with the feedback with a sound. For example, the smart hearing device according to an embodiment of the inventive concept may provide the user with a voice message notification, such as 'Yes, Hello', 'Long time no see', or 'May I help you?', as feedback on the sound data of 'Hello'.").

Claim 3

Regarding claim 3, the rejection of claim 1 is incorporated. Heckmann et al. further teach wherein the directing of the hearing device to apply the one or more acoustic parameters further includes modifying [the voice notification] based on the one or more acoustic parameters (Heckmann et al. ¶ [0010], "The determined modality is then used for the speech presentation. The modality defines the parameters to be used for the speech presentation including a position of a perceived origin of the speech output. Using these defined parameters, a speech presentation signal is generated."). Song further teaches wherein the directing of the hearing device to apply the one or more acoustic parameters to the voice notification includes modifying the voice notification based on the one or more acoustic parameters (Song ¶ [0094], "According to the matching of the similar data, the smart hearing device according to an embodiment of the inventive concept may set control parameters of at least any one or more of a change in amplification value, volume adjustment, and frequency adjustment in the natural language, may extract feedback in response to the sound data of 'Hello', and may provide the user with the feedback with a sound.").

Claim 4

Regarding claim 4, the rejection of claim 1 is incorporated. Heckmann et al. further teach wherein the one or more acoustic parameters [for the voice notifications] include at least one of pitch, speed, frequency band gain, or tone (Heckmann et al. ¶ [0015], "It is particularly preferred that the determined modality defines parameters of the speech presentation including at least one of: voice, frequency, timing, combination of speech output and gestures, intensity, prosody, speech output complexity level, and position of the speech output origin as perceived by the user."). Song further teaches wherein the one or more acoustic parameters for the voice notifications include at least one of pitch, speed, frequency band gain, or tone (Song ¶ [0094], "According to the matching of the similar data, the smart hearing device according to an embodiment of the inventive concept may set control parameters of at least any one or more of a change in amplification value, volume adjustment, and frequency adjustment in the natural language, may extract feedback in response to the sound data of 'Hello', and may provide the user with the feedback with a sound.").

Claim 5

Regarding claim 5, the rejection of claim 1 is incorporated. Heckmann et al. further teach wherein: the one or more acoustic parameters include a first voice type option and a second voice type option (Heckmann et al. ¶ [0015]; parameters of voice, prosody, voice complexity, etc. are suitable attributes that serve a particular implementation, and thus are considered voice type options) [for the voice notifications]; and the determining of the one or more acoustic parameters includes selecting, based on the hearing loss profile of the user, either the first voice type option or the second voice type option (Heckmann et al. ¶ [0015], "Depending on the individual hearing loss with respect to the frequency range, it is for some people easier to understand a women's voice compared to a man's voice and vice versa. Thus, having knowledge about the individual hearing capacity, the system will select a voice that can easily be understood by the assisted person.") for use in presenting the voice [notifications] to the user by way of the hearing device (Heckmann et al. ¶ [0010], "This speech presentation signal is then supplied at least to a loudspeaker for outputting the intended speech output presentation and to other actuators of the assistance system to provide the additional multimodal information of the speech presentation to the assisted person."). Song further teaches presenting the voice notifications to the user by way of the hearing device (Song ¶ [0036], "The inventive concept is a technology about a smart hearing device for distinguishing a natural language or a non-natural language, an artificial intelligence hearing system, and a method thereof, which is the gist of analyzing sound data of a voice signal and a noise signal received from smart hearing devices worn on ears of a user and providing control parameter settings and feedback according to matching of similar data depending on the determined result.").

Claim 6

Regarding claim 6, the rejection of claim 1 is incorporated. Heckmann et al. further teach wherein: the first voice type option corresponds to a male type of voice (Heckmann et al. ¶ [0015], "Depending on the individual hearing loss with respect to the frequency range, it is for some people easier to understand a women's voice compared to a man's voice and vice versa. Thus, having knowledge about the individual hearing capacity, the system will select a voice that can easily be understood by the assisted person"; ¶ [0052], "Hence, the assistance system 1 is capable of predicting the intelligibility of a speech output it will produce for the person 2. This will allow the assistance system 1 to perform internal simulations on how the intelligibility will change when parameters of the sound production are changed. This includes changes of the voice (male, female, voice quality ...), sound level and spectral characteristics (e.g. Lombard speech)."); and the second voice type option corresponds to a female type of voice (Heckmann et al. ¶ [0015], [0052]).

Claim 7

Regarding claim 7, the rejection of claim 1 is incorporated. Heckmann et al. further teach wherein the process further comprises: [receiving a user input] selecting a voice type option (Heckmann et al. ¶ [0015], "Thus, having knowledge about the individual hearing capacity, the system will select a voice that can easily be understood by the assisted person.") for use in providing [the voice notifications] to the user by way of the hearing device (Heckmann et al. ¶ [0010], "Using these defined parameters, a speech presentation signal is generated. This speech presentation signal is then supplied at least to a loudspeaker for outputting the intended speech output presentation and to other actuators of the assistance system to provide the additional multimodal information of the speech presentation to the assisted person."); and directing the hearing device to use the voice type option (Heckmann et al. ¶ [0010]) [selected by the user for the voice notification].

Song further teaches receiving a user input selecting [a voice type option] (Song ¶ [0147], "Furthermore, the mobile device 500 may turn on or off a power supply of each of the first smart hearing device 300 and the second smart hearing device 400 depending on a selective input of the user and may manually control numerical values such as amplification values, volume, and frequencies of the first smart hearing device 300 and the second smart hearing device 400.") for use in providing the voice notifications to the user (Song ¶ [0094], "According to the matching of the similar data, the smart hearing device according to an embodiment of the inventive concept may set control parameters of at least any one or more of a change in amplification value, volume adjustment, and frequency adjustment in the natural language, may extract feedback in response to the sound data of 'Hello', and may provide the user with the feedback with a sound. For example, the smart hearing device according to an embodiment of the inventive concept may provide the user with a voice message notification, such as 'Yes, Hello', 'Long time no see', or 'May I help you?', as feedback on the sound data of 'Hello'.") by way of the hearing device (Song ¶ [0036], "The inventive concept is a technology about a smart hearing device for distinguishing a natural language or a non-natural language, an artificial intelligence hearing system, and a method thereof, which is the gist of analyzing sound data of a voice signal and a noise signal received from smart hearing devices worn on ears of a user and providing control parameter settings and feedback according to matching of similar data depending on the determined result."); and directing the hearing device to use the [voice type option] selected by the user for the voice notification (Song ¶ [0094], [0147]). Heckmann teaches voice type options as one of the acoustic parameters applied to voice messages.

Claim 8

Regarding claim 8, the rejection of claim 7 is incorporated. Heckmann et al. further teach wherein the directing of the hearing device to apply the one or more acoustic parameters [to the voice notification] further includes directing the hearing device to modify [the voice notification] based on the one or more acoustic parameters and the voice type option (Heckmann et al. ¶ [0015], "It is particularly preferred that the determined modality defines parameters of the speech presentation including at least one of: voice, frequency, timing, combination of speech output and gestures, intensity, prosody, speech output complexity level. ... Depending on the individual hearing loss with respect to the frequency range, it is for some people easier to understand a women's voice compared to a man's voice and vice versa. Thus, having knowledge about the individual hearing capacity, the system will select a voice that can easily be understood by the assisted person.") [selected by the user].

Song further teaches directing the hearing device to modify the voice notification based on the one or more acoustic parameters (Song ¶ [0094], "According to the matching of the similar data, the smart hearing device according to an embodiment of the inventive concept may set control parameters of at least any one or more of a change in amplification value, volume adjustment, and frequency adjustment in the natural language, may extract feedback in response to the sound data of 'Hello', and may provide the user with the feedback with a sound. For example, the smart hearing device according to an embodiment of the inventive concept may provide the user with a voice message notification, such as 'Yes, Hello', 'Long time no see', or 'May I help you?', as feedback on the sound data of 'Hello'.") [and the voice type option] selected by the user (Song ¶ [0147], "Furthermore, the mobile device 500 may turn on or off a power supply of each of the first smart hearing device 300 and the second smart hearing device 400 depending on a selective input of the user and may manually control numerical values such as amplification values, volume, and frequencies of the first smart hearing device 300 and the second smart hearing device 400."). Heckmann teaches voice type options as one of the acoustic parameters applied to voice messages.

Claim 10

Regarding claim 10, Heckmann et al. teach a hearing device (Heckmann et al. ¶ [0001], "The present invention regards an assistance system and a corresponding method for assisting a user, wherein the system and method use speech output for providing information to a user.") comprising: a memory that stores instructions (Heckmann et al. ¶ [0028], "The processor 4 is further connected to a memory. … all executable programs that are needed for the analysis of the acoustic environment, generation of a speech presentation, a database for storing vocabulary for the speech presentation, a table for determining a modality for the speech presentation based on the analysis result of the acoustic environment, and the like, are stored in this memory 7."); and a processor communicatively coupled to the memory and configured to execute the instructions to perform a process (Heckmann et al. ¶ [0028], "The processor 4 is able to retrieve information from the memory 7 and store back information to the memory 7."). The remaining limitations of claim 10 are similar in scope to those of claim 1 and are therefore rejected for similar reasons as described above.

Claim 11

Regarding claim 11, the rejection of claim 10 is incorporated. The limitations of claim 11 are similar in scope to those of claim 2 and are therefore rejected for similar reasons as described above.

Claim 12

Regarding claim 12, the rejection of claim 10 is incorporated. The limitations of claim 12 are similar in scope to those of claim 5 and are therefore rejected for similar reasons as described above.

Claim 13

Regarding claim 13, the rejection of claim 12 is incorporated. The limitations of claim 13 are similar in scope to those of claim 6 and are therefore rejected for similar reasons as described above.

Claim 16

Regarding claim 16, the limitations of claim 16 are similar in scope to those of claim 1 and are therefore rejected for similar reasons as described above.

Claim 17

Regarding claim 17, the rejection of claim 16 is incorporated. The limitations of claim 17 are similar in scope to those of claim 2 and are therefore rejected for similar reasons as described above.

Claim 18

Regarding claim 18, the rejection of claim 16 is incorporated. The limitations of claim 18 are similar in scope to those of claim 5 and are therefore rejected for similar reasons as described above.

Claim 19

Regarding claim 19, the rejection of claim 18 is incorporated. The limitations of claim 19 are similar in scope to those of claim 6 and are therefore rejected for similar reasons as described above.

Claim 20

Regarding claim 20, the rejection of claim 16 is incorporated. The limitations of claim 20 are similar in scope to those of claim 4 and are therefore rejected for similar reasons as described above.

Claim 15 is rejected under 35 U.S.C. 103 as obvious over Heckmann et al. in view of Song in view of Conway as applied to claim 10, and further in view of US Patent Publication 2014/0369536 A1 (Kirkwood et al.).

Claim 15

Regarding claim 15, the rejection of claim 10 is incorporated. Heckmann et al. in view of Song in view of Conway disclose all the elements of the claimed invention as stated above, but do not explicitly disclose wherein the voice notifications are stored in the memory of the hearing device. However, Kirkwood et al. teach a hearing device (Kirkwood et al. ¶ [0011]-[0012], "A new hearing instrument system is also provided, having a hearing instrument and a device, wherein the device has a central processor configured for controlling") wherein the voice notifications are stored in the memory of the hearing device (Kirkwood et al. ¶ [0115], "The audio samples of the speech message are stored in an audio file in the memory 48 together with the time and date, at which the audio file, i.e. the speech message, has to be played back to the user."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Heckmann et al. in view of Song in view of Conway to include Kirkwood et al.'s memory storage. The suggestion/motivation for doing so would have been that "a text-to-speech processor is not required in the hearing instrument," as noted by the Kirkwood et al. disclosure in paragraph [0031], or that "the user is relieved from the task of consulting other equipment for updates on upcoming events and incoming communication," as noted by the Kirkwood et al. disclosure in paragraph [0172].

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB B VOGT, whose telephone number is (571) 272-7028. The examiner can normally be reached Monday-Friday, 9:30am-7pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D Shah, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACOB B VOGT/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

02/20/2026
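
For orientation, the claim 1 process the examiner maps above (access a hearing loss profile, determine acoustic parameters from it, and direct the hearing device to re-voice a notification) can be pictured in code. The following is a minimal illustrative Python sketch of the claim structure only: the class names, the 40 dB cutoff, and the parameter-selection rule are invented for illustration and do not come from the application or the cited references.

```python
from dataclasses import dataclass, field

@dataclass
class HearingLossProfile:
    # Spec ¶[0029]: impairment type, audiogram, fitting parameters, preferences.
    impairment_type: str   # e.g. "high-frequency" (hypothetical label)
    audiogram: dict        # frequency in Hz -> threshold in dB HL

@dataclass
class AcousticParameters:
    # Spec ¶[0030]: pitch, speed, frequency band gain, and/or tone.
    pitch_shift_semitones: float = 0.0
    speed: float = 1.0
    band_gain_db: dict = field(default_factory=dict)

def determine_parameters(profile: HearingLossProfile) -> AcousticParameters:
    # Hypothetical selection rule (invented for illustration): boost bands
    # where the audiogram shows elevated thresholds, and shift pitch down
    # for high-frequency loss, i.e. move to a "second voice type".
    params = AcousticParameters()
    for freq_hz, threshold_db in profile.audiogram.items():
        if threshold_db > 40:  # illustrative cutoff, not from the spec
            params.band_gain_db[freq_hz] = min(threshold_db - 40, 20)
    if profile.impairment_type == "high-frequency":
        params.pitch_shift_semitones = -2.0
    return params

def direct_hearing_device(notification_text: str,
                          params: AcousticParameters) -> dict:
    # Stand-in for directing the hearing device to re-synthesize the
    # notification with the determined parameters applied.
    return {"text": notification_text, "params": params}

profile = HearingLossProfile("high-frequency", {1000: 25, 2000: 45, 4000: 60})
result = direct_hearing_device("your hearing device battery is low",
                               determine_parameters(profile))
```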

Prosecution Timeline

Jan 17, 2023: Application Filed
Apr 14, 2025: Non-Final Rejection — §103
Jul 22, 2025: Response Filed
Aug 07, 2025: Final Rejection — §103
Sep 25, 2025: Interview Requested
Oct 01, 2025: Examiner Interview Summary
Oct 01, 2025: Applicant Interview (Telephonic)
Oct 07, 2025: Request for Continued Examination
Oct 10, 2025: Response after Non-Final Action
Oct 20, 2025: Non-Final Rejection — §103
Jan 07, 2026: Applicant Interview (Telephonic)
Jan 07, 2026: Examiner Interview Summary
Jan 12, 2026: Response Filed
Feb 20, 2026: Final Rejection — §103
Apr 13, 2026: Applicant Interview (Telephonic)
Apr 14, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12505279: METHOD AND SYSTEM FOR DOMAIN ADAPTATION OF SOCIAL MEDIA TEXT USING LEXICAL DATA TRANSFORMATIONS
Granted Dec 23, 2025 (2y 5m to grant)

Study what changed to get past this examiner. Based on the single most recent grant.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 57%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 10m
PTA Risk: High

Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
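
The 99% with-interview figure is consistent with applying the +100.0% relative lift to the 57% base probability and capping the result, though the tool's actual model is not disclosed. A sketch under that assumption:

```python
base_probability = 0.57  # grant probability, derived from career allow rate
interview_lift = 1.00    # +100.0% relative lift observed with interviews

# Assumed model (not disclosed by the tool): scale by (1 + lift), cap at 99%.
with_interview = min(base_probability * (1 + interview_lift), 0.99)
print(f"grant probability with interview: {with_interview:.0%}")  # 99%
```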
