DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 14 July 2025 have been fully considered but they are not persuasive.
Applicant argues that Wingate does not disclose sensing motion while the noise reduction unit is in the idle mode, or that motion information is provided to the noise reduction unit while the unit is idle. Wingate specifies in an embodiment that the DSP module (which is analogous to the claimed noise reduction unit) is the module that enters a low-power state, while the motion sensing module is a distinct module as shown in Figure 3. When this is taken together with paragraph [0080] of Wingate, where the motion sensing module “may push movement information to another component […] such as DSP module,” the reference reads on the claim language.
Applicant argues that Wingate does not disclose adapting to noise in the audio signals based at least partially on the motion information after switching to active mode in response to detecting speech. Paragraph [0076] discloses activating the DSP module in response to voice activity. Paragraphs [0066] and [0092] disclose that the DSP module may handle beamforming (adapting to noise), and paragraph [0080] discloses that the beamforming may be based on motion information. Taken together, these elements read on the limitation.
Applicant argues that Wingate does not disclose storing the motion information in a buffer, wherein the motion information is provided to the noise reduction unit after it is activated. Firstly, [0080] discloses that motion information may be pushed to another component of the intelligent microphone module, which as shown in Figure 3 includes a memory module 350. Secondly, the combination of the sleep mode teachings of [0076] for the DSP module with the teaching in [0080] that the “DSP module may periodically request updated information from the motion sensing module” inherently teaches this limitation, because [0080] also teaches that the motion sensing module may use data from sensors to determine movement information such as translation and rotation, as opposed to merely pushing the current sensor status. Providing this information in response to periodic update requests necessarily requires that the movement information be stored, and the DSP module could not make an update request while in sleep mode.
Applicant argues that Wingate does not teach motion information being provided to the noise reduction unit while it is idle, or that it is provided after the noise reduction unit is switched to the active mode. Paragraph [0080] states that the motion sensing module may push movement information to the DSP module; this embodiment reads on the former limitation because only the DSP module would be in a sleep mode, not the motion sensing module. The same paragraph also states that the DSP module may periodically request updated information from the motion sensing module; this reads on the latter limitation because the DSP module can make such a request only after it leaves sleep mode.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-21 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wingate (US 20170243577 A1).
Regarding claim 1, Wingate discloses a method of processing audio signals in a voice activated device, comprising:
switching a noise reduction unit in the voice activated device from an active mode to an idle mode after the noise reduction unit is adapted to noise in audio signals at a first position of the voice activated device, wherein in idle mode the noise reduction unit does not process audio signals; ([0076] enter a low power state; [0080]-[0081]: adapt the beamforming steering direction to the current position and orientation of the device)
sensing a motion of the voice activated device when the voice activated device is displaced from the first position and the first orientation to at least one of a second position or second orientation while the noise reduction unit is in the idle mode; ([0080]-[0081]: the motion sensing module detects a change in position; Fig 3, [0076]: the DSP module alone can be set to a low-power mode, motion sensing module is a distinct module)
generating motion information for a change in at least one of position and orientation of the voice activated device and providing the motion information to the noise reduction unit while the noise reduction unit is in the idle mode; ([0080]: motion sensing module uses sensor data to generate movement information, which can be pushed to DSP module; [0076]: DSP module can be in a sleep mode)
switching the noise reduction unit in the voice activated device from the idle mode to an active mode in response to detecting speech in the audio signals by the voice activated device while at the at least one of the second position or second orientation, wherein in active mode the noise reduction unit processes audio signals; ([0076]: DSP module can leave standby state in response to voice activation)
adapting to noise in the audio signals by the noise reduction unit at the at least one of the second position or second orientation based at least partially on the motion information; ([0080]-[0081]: adapt steering direction and noise reduction to the new position ascertained from movement information) and
performing, via the noise reduction unit, noise reduction of audio signals received at the at least one of the second position or second orientation. ([0027]: performing noise reduction)
Regarding claim 2, (dependent on claim 1) Wingate further discloses the method wherein performing noise reduction of audio signals comprises one or more of speech enhancement, signal-to-noise ratio (SNR) enhancement, spatial filtering, beam forming, interference cancellation, noise cancelation, or any combination thereof. ([0030]: beamform steering; [0088]: “Processing may include beamforming, noise reduction”)
Regarding claim 3, (dependent on claim 1) Wingate further discloses the method wherein adapting to noise in the audio signals without the presence of speech in the audio signals comprises:
estimating a current energy level of sound from a sound source based on a previous energy level of the sound source determined by the noise reduction unit while at the first position and the first orientation, and measured linear displacement and rotational displacement from the motion information. ([0030], [0117]: steer the beam according to energy levels of a microphone array; [0081]: use information from the motion sensing module to determine changes in position or orientation of the microphone array module relative to an audio source of interest; adaptive beamforming from energy levels adjusted from movement information)
Regarding claim 4, (dependent on claim 3) Wingate further discloses the method wherein performing the noise reduction of audio signals received at the at least one of the second position or second orientation is based at least in part on the current energy level. ([0117]: steer beamforming in accordance with speaker voice energy and/or relative movement)
Regarding claim 5, (dependent on claim 1) Wingate further discloses the method further comprising storing the motion information in a buffer and wherein the motion information is provided to the noise reduction unit after the noise reduction unit is switched to the active mode. (Fig 3: memory module; [0076], [0080]: the DSP module can be set to a sleep mode and can be set to periodically request updates from the motion sensing module, thus it would only request updates while active; the motion sensing module generates movement information from sensor data, and providing this information in response to update requests necessarily involves storage of the information)
Regarding claim 6, (dependent on claim 1) Wingate further discloses the method wherein the noise reduction unit receives the motion information but remains in idle mode until speech is detected in the audio signals by the voice activated device. ([0076]: leaving idle state after detecting voice activity; [0080]: the motion sensing module may push motion information to the DSP module rather than the DSP module requesting it)
Regarding claim 7, (dependent on claim 1) Wingate further discloses the method wherein adapting to noise in the audio signals comprises determining a steering direction for beam forming to receive the audio signals based on a steering state at the first position and the first orientation and the motion information, and wherein performing the noise reduction of audio signals received at the at least one of the second position or second orientation is based at least in part on the steering direction. ([0066], [0080]-[0081], [0101]: adjusting beamforming functionality and steering direction in response to movement detected by the motion sensor)
Regarding claims 8-21, these claims are analogous to claims 1-7 and are thus rejected for similar reasons.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.
Nyshadham (US 20160125891 A1) discloses determination of an environmental profile and audio processing configuration based on the determined profile.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALVIN ISKENDER whose telephone number is (703) 756-4565. The examiner can normally be reached M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HAI PHAN can be reached on (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALVIN ISKENDER/Examiner, Art Unit 2654
/HAI PHAN/Supervisory Patent Examiner, Art Unit 2654