Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
This action is responsive to an amendment filed on 11/12/2025. Claims 1-20 are pending. Claims 16-20 have been newly added.
Response to Arguments
Applicant’s arguments filed on 11/12/2025 have been considered but they are not persuasive for the following reasons:
Regarding claim 1, the applicant argues on pages 1-3 that Li does not describe automatically enabling inbound noise removal when the classification is voice, because Li does not describe inbound noise removal; rather, Li describes the telecommunications device 102 collecting and aggregating audio sensor data (i.e., an audio stream), removing ambient/background noise, and then transmitting the audio data. Examiner respectfully disagrees. Li teaches a telecommunications device configured to collect session data of a communication session that includes an audio stream between a user of the telecommunications device and at least one other user of a remote telecommunications device (see the abstract). Thus, the communication session can be either inbound or outbound. Further, in paragraph 0040, Li teaches that the anomaly detection module 252 is further configured to analyze incoming session data, such as may be collected by the session data collection module 220. It is therefore clear that Li describes noise removal of incoming session data. Accordingly, Li teaches this limitation.
Thus, the rejection of claim 1 is maintained. The rejections of claims 9 and 13 are maintained for the same reasons as discussed above with respect to claim 1.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3, 5-11 and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li et al. (International Pub. No. WO2017/112240A1).
Regarding claim 1, with respect to Figures 1-3, Li teaches a method, comprising:
initiating an audio session (fig.3; paragraphs 0043-0044);
receiving a classification for the audio session (figs. 1-2; fig. 3, block 316; paragraphs 0048-0049, 0065); and
automatically enabling inbound noise removal when the classification is voice (figs. 1-2; paragraphs 0048-0049, 0065).
Regarding claim 3, Li teaches wherein the inbound noise removal removes noise from audio received from a remote computing device (paragraphs 0048-0049, 0053-0055).
Regarding claim 5, Li teaches wherein the classification is determined based on an application handling the audio session (paragraphs 0016, 0043).
Regarding claim 6, Li teaches wherein the classification is determined based on a machine learning analysis of metadata (paragraph 0035).
Regarding claim 7, Li teaches analyzing video associated with the audio session (paragraphs 0032-0033).
Regarding claim 8, Li teaches making a second classification for the audio session based on the video analysis (paragraphs 0050, 0064-0065).
Regarding claim 9, with respect to Figures 1-3, Li teaches an apparatus (fig.1; paragraph 0012), comprising:
an audio interface (paragraph 0024);
a network interface (paragraphs 0019, 0026); and
a processor (paragraphs 0015-0016) to:
receive audio from the network interface (fig.3; paragraphs 0044-0045);
determine a type of the audio (paragraph 0046);
automatically enable noise removal for the audio when the type is voice (figs. 1-3; paragraphs 0048-0049, 0065);
perform noise removal on the audio when the type is voice (figs. 1-2; paragraphs 0048-0049, 0065); and
play the audio using the audio interface (paragraphs 0015, 0024).
Regarding claim 10, Li teaches wherein the processor is to further join an online call, and wherein the received audio is part of the online call (paragraphs 0016, 0032).
Regarding claim 11, Li teaches wherein the automatic enabling of the noise removal is accomplished without user input (figs. 1-2; paragraphs 0048-0049, 0065) (Note; since the automatic enabling of the noise removal is done by the system of Li, it is clear that the claimed “the automatic enabling of the noise removal” is performed without user input.).
Regarding claim 16, Li teaches wherein the initiating, the receiving, and the automatically enabling is at a computing device and further comprising:
receiving, at the computing device, first audio having first noise from a remote computing device (abstract; fig.3; paragraphs 0040, 0048-0049, 0053-0055, 0065);
removing, at the computing device, the first noise from the first audio to generate filtered first audio in response to the automatic enabling of inbound noise removal (abstract; fig.3; paragraphs 0003, 0040, 0048-0049, 0053-0055); and
playing, using an audio interface of the computing device, the filtered first audio (figs. 1-2; paragraphs 0003, 0023, 0038-0041, 0048-0049).
Regarding claim 17, Li teaches wherein the classification is a first classification and a video is part of the audio session and further comprising determining, at the computing device, that a user is speaking in a video (figs. 1-2; fig. 3, block 316; paragraphs 0013, 0032, 0048-0049, 0065); and wherein automatically enabling, at the computing device, inbound noise removal is based on determining that the user is speaking in the video (abstract; paragraphs 0013, 0032, 0040, 0048-0049, 0065).
Regarding claim 18, Li teaches wherein removing the first noise from the first audio to generate filtered first audio in response to the automatic enabling of inbound noise removal includes calling an application programming interface (API) to cause the setting of the inbound noise removal (abstract; fig.3; paragraphs 0003, 0040, 0041, 0048-0049, 0053-0055).
Regarding claim 19, Li teaches wherein receiving the classification for the audio session includes: determining the classification for the audio session is based on at least one of metadata, number of audio channels, sample rate, bit resolution, bit duration, or operational environment parameter (paragraphs 0013, 0034, 0037, 0050).
Regarding claim 20, Li teaches wherein receiving the classification for the audio session includes: using a deep-learning neural network algorithm [i.e., machine learning model] to provide the classification (paragraphs 0034-0037, 0048-0049, 0065).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2 and 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (International Pub. No. WO2017/112240A1) in view of Eikkula (European Pub. No. EP1214799B1).
Regarding claims 2 and 12, Li does not specifically teach automatically disabling the inbound noise removal when the classification is not voice. Eikkula teaches automatically disabling the inbound echo [i.e., noise] removal when the classification is not voice (paragraphs 0008-0009, 0012). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Li to incorporate the feature of automatically disabling the inbound noise removal when the classification is not voice, as taught by Eikkula. The motivation for the modification is to easily control the use of echo cancellation based on a change of the call type.
Regarding claim 13, Li teaches a non-transitory tangible computer-readable medium comprising instructions that, when executed, cause a processor (paragraph 0043) to:
automatically enable inbound noise removal for received far-end audio when a classification of the received far-end audio is voice (fig.2; paragraphs 0044, 0049, 0064-0065).
Li does not specifically teach automatically disabling inbound noise removal for the received far-end audio when the classification is not voice. Eikkula teaches automatically disabling inbound echo [i.e., noise] removal for the received far-end audio when the classification is not voice (paragraphs 0008-0009, 0012). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Li to incorporate the feature of automatically disabling inbound noise removal for the received far-end audio when the classification is not voice, as taught by Eikkula. The motivation for the modification is to easily control the use of echo cancellation based on a change of the call type.
Claim 14 is rejected for the same reasons as discussed above with respect to claims 6 and 9. Furthermore, Li in view of Eikkula does not specifically teach determining the classification for the received far-end audio based on a parameter of an operational environment of a playback device. Examiner takes official notice that determining the classification for the received far-end audio based on a parameter of an operational environment of a playback device is well known in the art. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Li in view of Eikkula to incorporate this feature in order to easily determine the type of the received audio based on the environment of the playback device.
Claim 15 is rejected for the same reasons as discussed above with respect to claims 7 and 8.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (International Pub. No. WO2017/112240A1).
Regarding claim 4, Li does not specifically teach wherein the classification is determined based on a file type. Examiner takes official notice that determining the classification based on a file type is well known in the art. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Li to incorporate the feature wherein the classification is determined based on a file type in order to easily determine the classification of audio based on its file type.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MD S ELAHEE whose telephone number is (571)272-7536. The examiner can normally be reached Monday through Friday, 8:30 AM to 5:00 PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FAN TSANG can be reached on 571-272-7547. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/MD S ELAHEE/
MD SHAFIUL ALAM ELAHEE
Primary Examiner,
Art Unit 2694
January 31, 2026