DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see pp. 9-10 of Remarks filed 1/2/26, with respect to claims 10-16 have been fully considered and are persuasive. The 35 USC 103 rejection of claims 10-16 has been withdrawn.
Applicant’s arguments with respect to claims 1, 3-9 and 17-21 have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3 and 9 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Reinhart et al. (US 2023/0239634 A1), hereinafter “Reinhart.”
As to claim 1, Reinhart discloses an earbud (hearing device 100/700, Figs. 1 and 7), comprising:
one or more microphones (¶0041, Fig. 7. Microphones 730.); and
processing circuitry (¶0043, Fig. 7. Processor 720 and reverberation detection and mitigation 738.) configured to:
receive an audio input from the one or more microphones (¶0043, Fig. 7. Microphones 702 in communication with Reverberation detection and mitigation module 738.);
determine, based on the audio input, whether reverberation is present in an environment of the earbud (¶0043, Fig. 7. “During operation of the hearing device 700, the reverberation detection and mitigation module 738 can be used to detect reverberation.”);
operate at least one of a digital signal processor or a neural network to reduce the reverberation in the audio input responsive to a determination that the reverberation is present (¶0036 and ¶0043, Figs. 5 and 7. “In response to detecting 500 the reverberation condition, the method may involve enabling 503 the sound processing capability with a reverberation mitigation setting if the sound processing capability is currently disabled.”); and
deactivate, responsive to a determination that the reverberation is not present, the at least one of: the digital signal processor or the neural network for the audio input (¶0036 and ¶0043, Figs. 5 and 7. “The reverberation mitigation is stopped 505 when the reverberation condition is no longer detected.”).
As to claim 3, Reinhart discloses wherein the processing circuitry is further configured to determine an amount of the reverberation that is present in the environment of the earbud, and the at least one of the digital signal processor or the neural network is configured to reduce the reverberation in the audio input based on the amount of the reverberation (¶0035. “The reverberation condition is predicted to impact clarity of the amplified sound (e.g., satisfies a measured threshold… ). A sound processing capability is determined 501 that will affect the reverberation.”).
As to claim 9, Reinhart discloses wherein the processing circuitry is further configured to deactivate one or more additional digital signal processors or one or more additional neural networks based on a noise condition determined by the processing circuitry (¶0036 and ¶0043, Figs. 5-7. “The reverberation mitigation is stopped 505 when the reverberation condition is no longer detected. This may involve removing the reverberation mitigation setting, changing the sound processing capability to the default setting, or disabling the sound processing capability altogether.”).
Claims 17 and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhao et al. (US 2020/0236466 A1), hereinafter “Zhao.”
As to claim 17, Zhao discloses a method, comprising:
receiving, at processing circuitry of a media output device, an audio input from one or more microphones of the media output device (¶0110-0111 and ¶0015, Fig. 1. “The device may be a headphone, a loudspeaker box, or other electronic devices capable of playing an audio signal.” “The ambient sound acquisition microphone 13 is configured to pick up an ambient sound signal, and feed the picked-up ambient sound signal to the control module 21, the active noise cancellation module 23 and the ambient sound adjustment module 24 respectively.”);
determining, by the processing circuitry based on the audio input, whether the media output device is in an indoor environment or an outdoor environment (¶0119, ¶0133, ¶0157. “A sound pressure level of an ambient sound signal acquired by the ambient sound acquisition microphone 13 is calculated, and an energy distribution and an spectral distribution of the ambient sound signal is analyzed. Components of the ambient sound may be obtained by analyzing the energy distribution and the spectral distribution of the ambient sound signal, for example, whether the ambient sound contains a voice component, a warning sound component like an alarm honk, a wind noise component, and so on, and the energy of these components.” “The environment types in the embodiments of the disclosure may be divided into “indoor” and “outdoor”. “For example, it can be accurately determined that the user is in the outdoor environment in combination with the geographic location data after determining that there is a very strong wind noise signal included in the ambient sound signal according to the energy distribution and the spectral distribution of the ambient sound signal.” Indoor/outdoor determination based on microphone input.); and
modifying, based on whether the media output device is in the indoor environment or the outdoor environment, an operation of at least a portion of at least one of: a digital signal processor or a neural network for the audio input at the media output device (¶0119 and ¶0121. “when the user is in an outdoor environment, it can be determined, according to an energy distribution and an spectral distribution of the wind noises, whether it is needed to enable the wind noise suppression submodule 241. When the user is in an indoor environment, the wind noise suppression submodule 241 may be disabled.” “In another specific example, when the user is in the outdoor environment, the dynamic range control submodule 243 must be enabled; when the user is in the indoor environment, because there are a relatively few burst sounds in the indoor environment, the dynamic range control submodule 243 may be disabled.”).
As to claim 19, Zhao discloses wherein the at least one of the digital signal processor or the neural network comprises a multi-channel linear prediction block or a wind noise suppressor (¶0118-0119, Fig. 1. Wind noise suppression submodule 241.).
As to claim 20, Zhao discloses identifying, by the processing circuitry based on the audio input, a speaker presence condition and modifying a voice isolation block based on the speaker presence condition (¶0120. “In a specific example, when the user is in a talking state, the voice enhancement submodule 24[2] is enabled. In a specific example, when the user is in a state of being necessary to hear an outside prompt voice, the voice enhancement submodule 24[2] is enabled.”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4-8 are rejected under 35 U.S.C. 103 as being unpatentable over Reinhart, as applied to claim 1 above, in view of Zhao.
As to claim 4, Reinhart does not expressly disclose wherein the processing circuitry is further configured to deactivate one or more additional digital signal processors or one or more additional neural networks based on an indoor/outdoor condition determined by the processing circuitry.
Zhao discloses wherein the processing circuitry is further configured to deactivate one or more additional digital signal processors or one or more additional neural networks based on an indoor/outdoor condition determined by the processing circuitry (Zhao, ¶0119 and 0121. “When the user is in an outdoor environment, it can be determined, according to an energy distribution and an spectral distribution of the wind noises, whether it is needed to enable the wind noise suppression submodule 241. When the user is in an indoor environment, the wind noise suppression submodule 241 may be disabled.” “In another specific example, when the user is in the outdoor environment, the dynamic range control submodule 243 must be enabled; when the user is in the indoor environment, because there are a relatively few burst sounds in the indoor environment, the dynamic range control submodule 243 may be disabled.”).
Reinhart and Zhao are analogous art because they are from the same field of endeavor with respect to listening devices.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to deactivate certain signal processing based on an indoor/outdoor condition, as taught by Zhao. The motivation would have been to reduce unnecessary processing and save energy.
As to claim 5, Reinhart in view of Zhao discloses wherein the processing circuitry is further configured to determine an amount of the reverberation that is present in the environment of the earbud, and the at least one of the digital signal processor or the neural network is configured to reduce the reverberation in the audio input based on the amount of the reverberation (Reinhart, ¶0035. “The reverberation condition is predicted to impact clarity of the amplified sound (e.g., satisfies a measured threshold… ). A sound processing capability is determined 501 that will affect the reverberation.”).
As to claim 6, Reinhart in view of Zhao discloses wherein the one or more additional digital signal processors or one or more additional neural networks that are deactivated are configured to remove wind noise from the audio input when active (Zhao, ¶0119, Fig. 1. “When the user is in an indoor environment, the wind noise suppression submodule 241 may be disabled.”).
The motivation is the same as that set forth for claim 4 above.
As to claim 7, Reinhart in view of Zhao discloses wherein the processing circuitry is further configured to deactivate one or more additional digital signal processors or one or more additional neural networks based on a lack of a voice at a predetermined location (Zhao, ¶0120. “In a specific example, when the user is in a talking state, the voice enhancement submodule 24[2] is enabled. In a specific example, when the user is in a state of being necessary to hear an outside prompt voice, the voice enhancement submodule 24[2] is enabled.” Disabling/deactivating is implicit.).
As to claim 8, Reinhart in view of Zhao discloses wherein one or more additional digital signal processors or one or more additional neural networks that are deactivated are configured to enhance a voice component of the audio input (Zhao, ¶0120. “In a specific example, when the user is in a talking state, the voice enhancement submodule 24[2] is enabled. In a specific example, when the user is in a state of being necessary to hear an outside prompt voice, the voice enhancement submodule 24[2] is enabled.” Disabling/deactivating is implicit.).
Claims 18 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao, as applied to claim 17 above, in view of Reinhart.
As to claim 18, Zhao does not expressly disclose identifying, by the processing circuitry based on the audio input, a reverb condition.
Reinhart discloses identifying, by the processing circuitry based on the audio input, a reverb condition (Reinhart, ¶0043, Fig. 7. “During operation of the hearing device 700, the reverberation detection and mitigation module 738 can be used to detect reverberation.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to detect a reverb condition, as taught by Reinhart. The motivation would have been for improved playback effects beyond noise cancellation (Zhao, ¶0002).
Zhao in view of Reinhart does not expressly disclose modifying a multi-channel linear prediction block based on the reverb condition.
However, Reinhart discloses detecting and mitigating reverberation in a signal (¶0043, Fig. 7). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to use a multi-channel linear prediction block to reduce the reverberation of the signal. The motivation would have been applying a known technique to a known device to yield predictable results.
As to claim 21, Zhao in view of Reinhart discloses providing, an indication of whether the media output device is in the indoor environment or the outdoor environment to a companion device of the media output device (Zhao, ¶0133 and ¶0157 and Reinhart, ¶0044-0045, Fig. 7. “The communication device 736 is operable to allow the hearing device 700 to communicate with an external computing device 704, e.g., a smartphone, laptop computer, etc.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to communicate with a companion device, as taught by Reinhart. The motivation would have been for access to additional processing power.
Allowable Subject Matter
Claims 10-16 are allowed.
The following is a statement of reasons for the indication of allowable subject matter: Applicant’s arguments (see pp. 9-10 of Remarks filed 1/2/26) regarding the differences between the instant application and the closest prior art of record were persuasive. The combination of the argued feature with the other elements of the claims would not have been obvious to a person of ordinary skill in the art.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES K MOONEY whose telephone number is (571) 272-2412. The examiner can normally be reached Monday-Friday, 9:00 AM - 5:00 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin, can be reached at (571) 272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES K MOONEY/Primary Examiner, Art Unit 2695