DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to a reply communication filed on January 20, 2026, to the Restriction/Election Requirement mailed on November 20, 2025, wherein claims 1 and 20 were amended, claims 8-19 and 26-32 were canceled, and claims 33-40 were newly added, as detailed under the heading Response to Applicant’s Reply below.
In view of this communication, claims 1-7, 20-25, and 33-40 are currently pending in this Office Action.
In response to this Office Action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.
Response to Applicant’s Reply
In the reply, applicant elected Invention I, claims 1-7 and 20-25, for prosecution without traverse. Claims 8-19 and 26-32 are withdrawn from further consideration on the merits pursuant to 37 CFR 1.142(b) as being drawn to a non-elected invention, and have since been canceled. Claims 33-40 were added and presented as part of Invention I; see paragraph 2 of the Remarks filed on January 20, 2026.
Specification
The specification fails to disclose the claimed feature “wherein the processor is further configured to adjust the first gain based at least in part on a second user input indicating a change to the first gain, wherein the second user input is received via a second visual control interface of the second electronic device for adjusting the first gain applied to the ambient sound” as recited in claim 38. That is, “the user” and “the second user” share use of the same “processor” and the same “second electronic device,” with “the user” using “the visual control interface” and “the second user” using “the second visual control interface” of the same “second electronic device.” Instead, the application specification broadly discloses: “By offering this level of control and customization, the fine-tuning slider empowers individuals to tailor the amplification of their own voice to their specific comfort and communication needs. It facilitates that the hearing device or audio processing system delivers a personalized and optimized listening experience based on the user's preferred voice gain settings. This feature provides a user-friendly interface and enhances user engagement by allowing them to actively participate in the hearing profile enrollment process. By fine-tuning the voice gain according to their own preferences, individuals can achieve a more personalized and satisfactory listening experience.” In other words, the disclosed features are applied by users to their own hearing devices, customized according to their own needs, and do not necessarily disclose sharing of the same claimed “device” and the same “second electronic device.” In addition, the application specification never discloses that the specified “hearing aid” embodiment would be shared by multiple users with the same “second electronic device” for the profiling process.
Appropriate correction is required.
Drawings
The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, claimed “wherein the processor is further configured to adjust the first gain based at least in part on a second user input indicating a change to the first gain, wherein the second user input is received via a second visual control interface of the second electronic device for adjusting the first gain applied to the ambient sound” as recited in claim 38 must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5, 20, 40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sabin et al. (US 20210345047 A1, hereinafter Sabin).
Claim 1: Sabin teaches a device (title and abstract, ln 1-11, a wearable assistant device in fig. 1) comprising:
a microphone (one or more microphones 114 in fig. 1); and
a processor (one or more programmable processors, and in wearable hearing assistant device 100 in fig. 1, para 65) configured to:
receive an audio signal corresponding to the microphone (audio signals outputted from microphone inputs 116 in fig. 1, and representing captured acoustic signals, para 40, abstract);
detect at least one or more of an ambient sound or a voice of a user of the device in the received audio signal (based on VAD 110 and the captured phase difference between two of the microphone signals, whether the acoustic signal being captured is the user’s voice or an external ambient acoustic signal is determined, or an averaged phase difference over frequencies is compared to a speech-versus-noise threshold to determine whether the signal is the user’s voice or an external ambient acoustic signal, para 35-36, and ambient sounds such as traffic sounds, music, etc., or the voice of another person, para 35); and
apply a first gain to the ambient sound when the ambient sound is detected in the received audio signal (a second set of ANR filters for reducing environmental noise in response to no voice signals being detected, i.e., ambient noise detected, para 41, or the gain level is returned from a second level to a first level when the user’s voice is no longer detected, para 38) and apply a second gain different than the first gain to the voice of the user of the device when the voice of the user of the device is detected in the received audio signal (a first set of ANR filters for reducing occlusion when the user’s voice is detected, para 41, e.g., the gain level is reduced to a second level by amplification of the audio signals in response to the detection of the voice of the user, para 38, e.g., to achieve a personalized gain reduction target, para 41-42).
Claim 20 recites a method that is implemented by the processor of claim 1; claim 20 has therefore been analyzed and rejected according to claim 1 above.
Claim 40 has been analyzed and rejected according to claims 1, 20 above and Sabin further teaches a non-transitory computer-readable medium storing instructions that when executed by the processor, cause the processor to perform operations of the method of claim 20 above (one or more non-transitory machine-readable media with computer program product, para 63).
Claim 2: Sabin further teaches, according to claim 1 above, wherein the one or more of the ambient sound or the voice of the user of the device are detected based at least in part on a classification indicating that the received audio signal contains the voice of the user of the device and does not contain the ambient sound (via analyzing the phase difference by the VAD to indicate, by the determination, that the captured acoustic signal is the user’s voice, i.e., only contains the user’s voice and does not contain the ambient sound, and vice versa, para 36).
Claim 3 has been analyzed and rejected according to claims 1-2 above and Sabin further teaches, according to claim 1 above, wherein the one or more of the ambient sound or the voice of the user of the device are detected based at least in part on a classification indicating that the received audio signal contains the ambient sound and does not contain the voice of the user of the device (via analyzing the phase difference by the VAD to indicate by the determination that the captured acoustic signal is the external ambient acoustic signal, i.e., only contains the ambient sound, does not contain the user’s voice, para 36).
Claim 5: Sabin further teaches, according to claim 1 above, the device further comprising an accelerometer (accelerometer 112 in fig. 1), wherein the processor configured to detect the one or more of the ambient sound or the voice of the user of the device is further configured to detect the voice of the user of the device in the received audio signal based at least in part on one or more measurements from the accelerometer (through VAD 110 and discussion in claim 1 above).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 6-7, 21-23, 33 are rejected under 35 U.S.C. 103 as being unpatentable over Sabin (above) and in view of reference Paniconi (US 8428946 B1).
Claim 4: Sabin further teaches, according to claim 1 above, wherein the one or more of the ambient sound or the voice of the user of the device are detected based at least in part on a classification (through the analyses of the captured phase difference of the signals, e.g., by comparing the averaged phase difference to a speech-versus-noise threshold, para 36), except explicitly teaching wherein the received audio signal contains a combination of the ambient sound and the voice of the user of the device.
Paniconi teaches an analogous field of endeavor by disclosing a device (title and abstract, ln 1-18, a multi-channel noise suppression system in fig. 1) comprising:
a microphone (microphone 105A, …, 105N in fig. 1); and
a processor (DSPs, FPGAs, or ASICS, col 22, ln 28-32 and for noise suppression module 160 in fig. 1) configured to:
receive an audio signal corresponding to the microphone (captured sound signal from microphones 105A, 105B, …, 105N in fig. 1, col 6, ln 52-55 or 200A, 200B, …, 200N in fig. 2, col 8, ln 34-39);
detect at least one or more of an ambient sound (from sources such as computers, fans, office equipment, col 1, ln 21-27) or a voice of a user of the device (voice in voice communication with participants, col 1, ln 21-27, i.e., inherently including the user’s voice) in the received audio signal (C=0 as noise is detected while C=1 as speech is detected, via a speech/noise probability function in a speech/noise classification module 140 in figs. 1-2, col 9, ln 49-58); and
apply a first gain to the ambient sound when the ambient sound is detected in the received audio signal (via a gain filter 145 for reducing or removing the estimated amount of noise from the input frame, col 7, ln 62-67, col 8, ln 1-3) and apply a second gain different than the first gain to the voice of the user of the device when the voice of the user of the device is detected in the received audio signal (via post-noise suppression processes on the input frame following a gain filter 145, increasing the power of speech present only in speech frames, and no change if the frame is found to be noise, col 8, ln 4-21), and wherein the one or more of the ambient sound or the voice of the user of the device are detected based at least in part on a classification indicating that the received audio signal contains a combination of the ambient sound and the voice of the user of the device (represented by speech/noise probability via the speech/noise probability function, e.g., Yi(k,t) is the observed noisy frequency spectrum for input channel i at time/frame index t for frequency k, col 9, ln 49-58, e.g., the speech/noise classification is based on a probabilistic classifier with thresholding of a conditional probability, col 10, ln 32-37, and noise is updated for segments where the speech probability is determined to be below a threshold, col 10, ln 39-47) for benefits of effectively detecting speech/noise in a variety of complex environment situations (e.g., the user is moving, or the room acoustic filter is hard to estimate, etc., col 1, ln 37-44).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied wherein the one or more of the ambient sound or the voice of the user of the device are detected based at least in part on the classification indicating that the received audio signal contains the combination of the ambient sound and the voice of the user of the device, as taught by Paniconi, to the one or more of the ambient sound or the voice of the user of the device being detected based at least in part on the classification in the device, as taught by Sabin, for the benefits discussed above.
Claim 6: the combination of Sabin and Paniconi further teaches, according to claim 1 above, wherein the processor configured to detect the one or more of the ambient sound or the voice of the user of the device is further configured to determine an own voice presence probability value for one or more frequency bins associated with the received audio signal (Sabin, the own voice is detected, see the discussion in claim 1 above, and Paniconi, by determining speech probability as discussed in claim 4 above, e.g., based on speech probability applied to a threshold, col 2, ln 64-67), the own voice presence probability value indicating a likelihood that the voice of the user of the device is present in a frequency bin of the one or more frequency bins (Sabin, averaging the phase difference values over a few different frequencies for own voice detection, para 36, and Paniconi, the speech probability, through the speech/noise probability or likelihood function, is determined based on the threshold and based on the signal classification feature that is the geometric average of a time-smoothened likelihood ratio LR, given by:

[equation reproduced as image: media_image1.png]

where i is the frequency bin and N is the number of frequency bins, col 13, ln 10-35).
Claim 7: the combination of Sabin and Paniconi further teaches, according to claim 6 above, wherein the processor configured to apply the second gain is further configured to adjust the second gain based at least in part on the own voice presence probability value (Sabin, the gain level is reduced to a second level by amplification of the audio signals in response to the detection of the voice of the user, para 38, for achieving a personalized gain reduction target, para 41-42, and based on own voice detection using the speech-versus-noise threshold, para 35-36, and Paniconi, increasing speech power by scaling an energy of the speech segments based on energy lost in the frame due to the noise estimation and filtering processes, col 8, ln 18-21, wherein the noise estimation and filtering processes are based upon the determined speech probability and the noise estimate via the noise estimation update unit 135, the formula in col 10, ln 49-67, e.g., the noise estimation is obtained by applying a speech/noise probability related weight, col 10, ln 49-67, and the weight is related to probability set {Fi}, etc., col 11, ln 9-17, col 13, ln 25-28, and the discussion in claim 6 above).
Claim 21: the combination of Sabin and Paniconi further teaches, according to claim 20 above, the method further comprising determining a dominant signal between the ambient sound and the voice of the user of the device for one or more frequency bins associated with the received audio signal (Sabin, discussed in claim 20 above, i.e., using the phase difference in different frequencies, para 36, and Paniconi, the noise is only updated for segments such as Y(k, t), of which the speech probability is determined to be below the threshold through a template learned noise spectrum a(k, t), i.e., the noise in Y(k, t) is the dominant signal between the ambient sound and the voice of the user of the device in frequency bin k, and the discussion in claim 6 above) based on at least partial overlap of the ambient sound with the voice of the user of the device in frequency (Y(k, t) as the input magnitude spectrum of the input noisy speech, col 13, ln 25-49, i.e., suppressing the noise is based on the estimated amount of noise from the input frame, col 7, ln 62-67 and col 8, ln 1-3, and may also cause the energy loss of speech components, col 8, ln 15-21, i.e., overlap in the frequency domain).
Claim 22: the combination of Sabin and Paniconi further teaches, according to claim 21 above, the method further comprising refraining from applying noise suppression to the dominant signal in a frequency bin of the one or more frequency bins based on a determination that the dominant signal corresponds to the voice of the user of the device (Sabin, the 2nd set of ANR filters is applied when no voice signals are detected, while the 1st set of ANR filters is applied to reduce an occlusion when voice signals are detected, para 17, 41, and Paniconi, energy scaling as the second gain is performed only upon input frames determined to be speech, and the frames found to be noise are left alone, col 8, ln 12-15, with scaling back of the speech energy based on the energy lost in the frame due to the noise estimation and filtering processes, col 8, ln 18-21).
Claim 23: the combination of Sabin and Paniconi further teaches, according to claim 21 above, the method further comprising applying noise suppression to the dominant signal by attenuating the dominant signal in a frequency bin of the one or more frequency bins based on a determination that the dominant signal corresponds to the ambient sound (Sabin, the 2nd set of ANR applied to reduce environmental noise if VAD is inactive, i.e., no voice is detected, or only noise is detected, para 41 and Paniconi, the noise suppression is performed in a multi-channel environment and in the frequency domain, col 5, ln 4-9).
Claim 33 has been analyzed and rejected according to claims 1, 6 above.
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Sabin (above) and in view of reference Gauger, JR. (US 20110235813 A1).
Claim 24: Sabin teaches, according to claim 20 above, wherein the first gain and the second gain are different (the 1st set of ANR filters and the 2nd set of ANR filters corresponding to detected own voice and ambient noise, as discussed in claim 20 above, and Paniconi, the noise suppression related to the noise estimation, with the speech enhancement in post-processing after the noise suppression applying the scaling or gain based on the energy lost in the frame due to the noise estimation and filtering processes, col 8, ln 15-21), and the first gain and the second gain corresponding to the different gains (Sabin and Paniconi, as discussed in claim 20 above), except explicitly teaching wherein the first gain and the second gain correspond to different psychoacoustic loudness growth functions.
Gauger, JR. teaches an analogous field of endeavor by disclosing a method (title and abstract, ln 1-6, and a method implemented on a system in fig. 1, e.g., method of claim 1) wherein a first gain and a second gain are disclosed (the input audio signal 131 including ambient noise and speech, and adjusting a desired signal such as speech to mask ambient noise, or adjusting the level of ambient noise to be less distracting, with the user selecting between a number of different settings, para 17) to correspond to different psychoacoustic loudness growth functions (based on psychoacoustic principles, para 17, and practiced by a module of psychoacoustic principles to determine the amount of gain relating to the degree of intelligibility of speech signals in the face of noise and reverberation, para 24, e.g., masking a desired audio signal by residual ambient noise or masking residual ambient noise by an audio signal, para 17) for benefits of improving the user’s listening experience (by effectively eliminating the distraction without requiring a loud level adjustment, para 21, and by adapting a speech signal for presentation in the presence of noise to achieve intelligibility for the speech by psychoacoustic compression, para 5).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the first gain and the second gain and wherein the first gain and the second gain correspond to different psychoacoustic loudness growth functions, as taught by Gauger, Jr., to the first gain and the second gain in the method, as taught by Sabin, for the benefits discussed above.
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Sabin (above) and in view of reference Bartunek (US 20110235813 A1).
Claim 25: Sabin further teaches, according to claim 20 above, the second gain (Sabin, the discussion in claim 20 above), except explicitly teaching wherein the second gain corresponds to a user defined gain setting that is configurable via user input during a hearing profile enrollment process of the device.
Bartunek teaches an analogous field of endeavor by disclosing a method (title and abstract, ln 1-8, and method steps in fig. 3) wherein a second gain corresponding to spoken words is disclosed (via an input transducer 105 in fig. 1, passed to gain control 151, noise reduction 152, and frequency transition wherein a high frequency component in impaired hearing is transitioned to a low frequency range of better hearing, para 13, i.e., gain control over frequency, in noise and in spoken words in fig. 1) and also corresponds to a user defined gain setting that is configurable via user input during a hearing profile enrollment process of the device (user input through a communications interface 110 in fig. 1, para 12, and user input data are processing parameters modifying device operation during the fitting process using feedback from the patient, para 2-3, with the parameters applied in filtering/amp 150, para 14, inherently including gain values, and the fitting process results shown in fig. 4) for benefits of improving device performance and configuration optimization (by making the human perceived sound clearer, para 13, and by adaptive parameter adjustment during the fitting process using the user’s feedback, para 3).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the second gain corresponding to the user defined gain setting that is configurable via the user input during the hearing profile enrollment process of the device, as taught by Bartunek, to the second gain in the method, as taught by Sabin, for the benefits discussed above.
Claims 34-36 are rejected under 35 U.S.C. 103 as being unpatentable over Sabin (above) and in view of reference Pedersen et al. (US 20160227332 A1, hereinafter Pedersen).
Claim 34: Sabin teaches, according to claim 1 above, wherein the processor is further configured to adjust the second gain (the first set of ANR filters for reducing the occlusion when the user’s voice is detected, and the gain is reduced to a second level by amplification of the audio signals in response to the detection of the voice of the user, para 38, e.g., to achieve a personalized gain reduction target, para 41-42), except explicitly teaching that it is based at least in part on a user input indicating a change to the second gain.
Pedersen teaches an analogous field of endeavor by disclosing a device (title and abstract, ln 1-29, and a binaural hearing system in figs. 4A-4B) wherein a processor is disclosed (signal processing unit SPU in fig. 4A/4B) configured to adjust the second gain (adjusting level/frequency dependent gain according to the needs of the user, para 94, including the user’s own voice detected by an own voice detector for detecting whether a given input sound or voice originates from the voice of the user of the system, para 45) based at least in part on a user input indicating a change to the second gain (through an auxiliary device having user interface UI in fig. 6B, providing user control of the volume of playback with clicking options for increasing and decreasing volume on the UI in fig. 6B, para 108, and including the user’s own voice detected by the own voice detector, para 45) for benefits of improving hearing performance (by enhancing a target acoustic source among a multitude of acoustic sources in the environment, para 31, thus strengthening the intelligibility of speech in the communication environment, para 87, and near-field sound sources, para 88, and providing flexibility in reducing unwanted sound by separating different voices, para 45).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the second gain and the processor and wherein the processor is configured to adjust the second gain based at least in part on the user input indicating the change to the second gain, as taught by Pedersen, to the second gain and the processor configured to adjust the second gain in the device, as taught by Sabin, for the benefits discussed above.
Claim 35: the combination of Sabin and Pedersen further teaches, according to claim 34, wherein the user input is received via a visual control interface of a second electronic device (Pedersen, the auxiliary device with UI in fig. 6B) for adjusting the second gain applied to the voice of the user of the device (discussed in claim 34 above, by touching “Increase” or “decrease” under the Volume title in fig. 6B, para 108), except explicitly teaching wherein the visual control interface comprises a fine-tuning slider.
Official Notice is taken that a visual control interface comprising a fine-tuning slider, displayed for operation by the user, is well-known in the art for providing smooth and continuous adjustment of signal gain to avoid uncomfortable sound perception.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the visual control interface comprising the fine-tuning slider, as well-known in the art as noted above, to the visual control interface in the device, as taught by the combination of Sabin and Pedersen, for the benefits discussed above.
Claim 36: the combination of Sabin and Pedersen further teaches, according to claim 35, wherein the user input is received based at least in part on one or more interactions between the user and the fine-tuning slider (Pedersen, interaction between the UI and the user, para 108, and the fine-tuning slider as well-known in the art, noted above), wherein the one or more interactions comprise movement of the fine-tuning slider along a continuum that indicates a desired level of amplification for the voice of the user of the device (Pedersen, the fine-tuning slider discussed above, and Sabin and Pedersen, adjusting the desired level of amplification for the voice of the user through the second gain, wherein the operation of the fine-tuning slider along a continuum that indicates a desired level of amplification is inherent).
Claims 37-39 are rejected under 35 U.S.C. 103 as being unpatentable over Sabin (above) and in view of references Pedersen (above) and Bartunek (above).
Claim 37: the combination of Sabin and Pedersen teaches, according to claim 35, the processor and the second electronic device (the discussion in claim 35 above; the processor by Sabin and Pedersen, the second electronic device by Pedersen in fig. 6A/6B) and wherein the fine-tuning slider is provided for display on the second electronic device (well-known in the art, as discussed in claim 35 above), except explicitly teaching a hearing profile enrollment process at the second electronic device, wherein the processor is further configured to initiate the hearing profile enrollment process at the second electronic device, including that the disclosed fine-tuning slider is provided for display on the second electronic device during the hearing profile enrollment process.
Bartunek teaches an analogous field of endeavor by disclosing a device (title and abstract, ln 1-8, and a hearing aid device in fig. 1) wherein a hearing profile enrollment process at the second electronic device is disclosed (the mapping processor 200 connected to the hearing aid 250 in fig. 2 through the communication interface 110 in fig. 1, retrieving the hearing response profile from user input at step S1 and, through the communications interface 110, performing mapping and display and programming the hearing aid, para 11) and wherein the processor is further configured to initiate a hearing profile enrollment process at the second electronic device (from step S1 to S7 in fig. 3) for the same benefits as discussed in claim 35 above.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the hearing profile enrollment process that is initiated by the processor at the second electronic device, as taught by Bartunek, to the processor, the second electronic device, and the display of the fine-tuning slider in the device, as taught by the combination of Sabin and Pedersen, for the benefits discussed above.
Claim 38: the combination of Sabin, Pedersen, and Bartunek further teaches, according to claim 35, wherein the processor is further configured to adjust the first gain based at least in part on a second user input (multiple patients, para 2) indicating a change to the first gain (Sabin and Pedersen, adjusting the gain corresponding to the noise, as discussed in claims 1, 34-35 above, and Bartunek, using noise reduction 152 applied to the noise signal picked up by the input transducer 105 in fig. 1), wherein the second user input is received via a second visual control interface of the second electronic device for adjusting the first gain applied to the ambient sound (discussed in claim 35, and Pedersen and Bartunek, displayed on the second electronic device, as discussed in claims 35-37).
Claim 39: the combination of Sabin, Pedersen, and Bartunek further teaches, according to claim 34, wherein the device is further configured to update a hearing profile associated with the user of the device using the change to the second gain (Sabin, the gain is modified, as discussed in claims 1, 34 above, and Pedersen, user input through the UI in figs. 6A-6B changing the settings of the hearing device, including processing parameters of the left and right hearing devices, para 107, and Bartunek, the programming and parameters are modified through the communication interface 110, para 11-12, including feedback from measurement of spoken words in fig. 4).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LESHUI ZHANG whose telephone number is (571)270-5589. The examiner can normally be reached Monday-Friday 6:30am-4:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LESHUI ZHANG/
Primary Examiner,
Art Unit 2695