Prosecution Insights
Last updated: April 19, 2026
Application No. 18/807,073

REFERENCE-LESS CROSS-MICROPHONE ECHO CANCELLATION

Non-Final OA — §102, §103, §112
Filed: Aug 16, 2024
Examiner: MATAR, AHMAD
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Cisco Technology Inc.
OA Round: 1 (Non-Final)

Grant Probability: 38% (At Risk)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 50%

Examiner Intelligence

Grants only 38% of cases.

Career Allow Rate: 38% (5 granted / 13 resolved; -23.5% vs TC avg)
Interview Lift: +11.5% (moderate, ~+12% lift, across resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline)
Total Applications: 19 across all art units (6 currently pending)

Statute-Specific Performance

§101: 3.9% (-36.1% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 23.1% (-16.9% vs TC avg)
§112: 23.1% (-16.9% vs TC avg)

Tech Center average estimates • Based on career data from 13 resolved cases

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/16/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Drawings

The drawings are objected to because: In Figs. 2-7, the letter M for the microphone is associated with the loudspeaker, and the letter L for the loudspeaker is associated with the microphone. This causes a mismatch with the specification.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informality: Paragraph 0033 states: "FIG. 4 is an illustration of local endpoints 102 that implement a second embodiment of the No-Ref AEC. ……………. Inactive local endpoint B includes AEC module 306 to implement conventional AEC, as described above." However, in Fig. 4, local endpoint B is labeled as being "active". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites "determining whether audio distortion ….. is present and taking action ……… to prevent echo". However, "distortion" is not the same as "echo". An "original sound" that is not an echo may be distorted. Independent claims 13 and 18 are rejected for the same reason.

Claim 6 recites "….. indicates whether a local copy of remote audio transmitted by the remote endpoint device over the network is present in the endpoint device or the neighbor endpoint device". It is unclear how the remote audio would be present in the endpoint device, since claim 1 recites that the loudspeaker of the endpoint device is muted. Similarly, claim 8 recites "remote audio is present in the endpoint device"; however, the loudspeaker of the endpoint device is muted, according to claim 1.
Dependent claims 2-12, 14-17 and 19-20 are rejected because they depend upon a rejected base claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 13 and 18, as best understood, are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 11516579 B2 (Feng et al., hereinafter "Feng").

Feng discloses "Echo Cancellation In Online Conference Systems" and teaches selecting one speaker/loudspeaker of the devices in each of the plurality of groups as a representative speaker for each of the plurality of groups. Feng teaches a method performed by an endpoint device (device A used by participant A in Fig. 4A) that includes a microphone and a loudspeaker (see Fig. 4A and col. 6, line 57 – col. 7, line 15), the method comprising:

muting the loudspeaker (Feng teaches selecting one speaker of the devices in each of the plurality of groups as a representative speaker for each of the plurality of groups, and teaches that speaker B in Room 1 plays the sound. In the example of Fig. 4A, speaker B is active and speaker A is muted; see abstract, Summary, and col. 7, line 56 – col. 8, line 14);

participating in a conference session with a neighbor endpoint device (device B, Fig. 4A) that shares a space (Room 1) with the endpoint device;

detecting audio in the space using the microphone to produce detected audio (the microphone of device A detects audio played by speaker B, such as audio generated by remote device C; col. 6, line 57 – col. 7, line 15);

determining whether audio distortion, originating at a neighbor loudspeaker (speaker B) of the neighbor endpoint device (device B), is present or absent in the detected audio (in col. 12, lines 39-62, Feng teaches the use of flag adding module 540 to determine whether the audio is "original" or "not original" (i.e., echo)); and

taking action to transmit the detected audio to the conference session, or not transmit the detected audio to the conference session to prevent echo, based on a result of the determining (in col. 6, line 57 – col. 7, line 15, the flag adding module 540 may add a signal of the flag to the original received audio data. Once the audio data is received, the flag adding module 540 may detect whether there is a signal of the flag together with the received audio data. If there is no signal of the flag, the flag adding module 540 may determine that the received audio data is original; then the flag adding module 540 may add a signal of the flag to the received audio data. If there is a signal of the flag, the flag adding module 540 may determine that the received audio data is not original, but rather is forwarded once again; then the flag adding module 540 may remove the received audio data. Thus, original audio is transmitted, and audio that is not original/forwarded/echo is not transmitted. See also Fig. 10 and its description in col. 13 regarding "canceling echo").

Claims 13 and 18 are rejected for the same reasons as discussed above with respect to claim 1. Note, for example, that Feng teaches the use of computer system readable media and program modules 42 (Fig. 1) to carry out the functions of the embodiments.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 13-15 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Feng in view of US 10154148 (Chu et al., hereinafter "Chu"). As discussed above, Feng teaches the limitations of claims 1, 13 and 18. While Feng teaches echo cancellation, it does not explicitly recite detecting "distortion" and then deciding to transmit or not to transmit based on the presence/absence of the distortion in the audio signal. In the same field of endeavor, Chu discloses an analogous system wherein a microphone situated near a loudspeaker receives audio and detects distortion, and teaches (col. 3, lines 9-11) that: "Logically, the distortion signal is not transmitted to the far end. The detector microphone, and the adaptive filter just described, thus form a 'distortion detector/unit'".
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the feature of detecting and blocking (not transmitting) distortion, as taught by Chu, in the system disclosed by Feng, for audio quality improvement.

For claims 2, 14 and 19, the above combination clearly teaches not transmitting the audio when distortion is present, and obviously (or inherently) this means transmitting the audio when there is no distortion (distortion is absent). Claims 3, 15 and 20 are also rejected since the combination teaches the determination/detection of the presence of distortion. It is noted that detecting distortion is notoriously well known in the art and has been used for many years to improve audio quality in conferencing.

Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Feng as applied to claim 1 above, and further in view of US 12477070 B1 (Chen et al.). Claim 4 recites using an artificial intelligence model trained to distinguish the audio distortion from other types of audio content. Artificial intelligence/machine learning had been used for many years before the effective filing date of the current application. For example, Chen et al. disclose (see abstract) methods and systems which provide machine-learning assisted acoustic echo cancellation (AEC). The AEC can be used, as an example, to improve the audio quality for online audio and video conferences. The system includes a pre-trained, machine-learning AI model designed to detect, in real time, a unitary voice signal, or a signal representing the speech of a single speaker as opposed to that of multiple speakers. A digital signal processing (DSP) algorithm can then detect the echo state, for example, whether distortion results primarily from an echo.
Based on these characteristics, the system can alternatively and automatically apply either a default mode of AEC to the audio signal, or a more aggressive mode of AEC. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to utilize a machine learning/AI model to detect distortion in order to enhance the quality of the audio in the conference system of Feng.

For claim 5, removing the audio distortion from the received audio, leaving the desired near-talker speech, and transmitting only the desired near-talker speech reads on well-known, conventional acoustic echo cancellation (AEC). An example of the use of AEC is disclosed by Chen et al. (see abstract). The technique generally includes detecting distortion originating at the near end and subtracting it from the original source audio for transmission. The removal of distortion in order to improve the quality of the audio is old and well known in the art.

Claims 6, 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Feng as applied to claim 1 above, and further in view of US 20180041639 A1 (Gunawan et al., hereinafter "Gunawan"). For claim 6, as discussed above with respect to claim 1, Feng discloses a conference session between the endpoint device (A, Fig. 4A) and the remote endpoint device (C), wherein when the audio distortion is present and the local copy is present (audio from remote endpoint device C), the detected audio is not transmitted; and when the audio distortion is not present, the detected audio is transmitted. Feng does not teach the use of "side information" to indicate whether the audio from the remote endpoint device is present or not present. However, Gunawan teaches (PP 0004 and PP 0025) the use of near-end voice data and far-end voice data in a conference call to indicate whether the near end is presenting or the far end is presenting, and teaches modifying the playback on the audio device.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to utilize the technique used by Gunawan in the Feng conferencing system, so that endpoint device A will be informed about the presence of the audio from the remote endpoint C and will apply the needed quality enhancement, such as blocking (not transmitting) the distortion, or echo cancellation. Claims 8 and 16 are rejected for the same reasons.

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Feng in view of Gunawan, as applied to claim 6 above, and further in view of US 20110149815 A1 (James et al.). Claim 7 recites: when the audio distortion (e.g., echo, as recited in claim 1) and the local copy (audio from remote endpoint C) are both present within a predetermined time window, not transmitting the detected audio. This is not taught by the Feng/Gunawan combination. However, James et al. teach the use of a feature to predict a delay for an echo, e.g., when the echo is to be expected. For example, an echo canceller may have an adaptive filter that performs the echo cancellation (no transmission) in a time window identified by predicting a delay for the echo. Claim 6 of James et al. recites "wherein the echo canceller performs an echo cancellation function within a time window …..". Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to use the feature taught by James et al. in the Feng/Gunawan combination in order to provide a higher probability of successfully locating and canceling echo. Claim 17 is rejected for the same reasons.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Feng as applied to claim 1 above, and further in view of Nighman et al. (US 20230115674 A1). For claim 12, on one hand, by definition, the microphone array uses acoustic beamforming in order to calculate the direction of the sound, enhance certain sounds, and suppress unwanted noise (e.g., distortion). This reads on the claimed pointing of a null in the direction of the audio distortion to suppress it. On the other hand, Nighman et al. teach the use of a microphone array in a teleconferencing meeting room and teach (PP 0069) separating and classifying sources of audio captured by one or more microphone arrays. The system can identify speech and separate sound sources, including by separating speech from noise sources (e.g., distortion) or other audio sources. It also teaches (PP 0098) the use of an acoustic echo canceller (AEC) 301 and microphone arrays 140 to perform adaptive acoustic echo cancellation on the signals output by the microphone array(s) 140 to reduce echoes and/or reverberation, and to generate one or more echo cancelled signals. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the current invention to use the well-known microphone array feature (as also taught by Nighman et al.) in the conferencing system of Feng in order to enhance the quality of the audio.

Allowable Subject Matter

Claims 9-11 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action, and to include all of the limitations of the base claim and any intervening claims.
Claim 9 recites: "The method of claim 6, wherein the receiving includes receiving, from the neighbor endpoint device, the side information such that the side information indicates whether the local copy of the remote audio is present in the neighbor endpoint device and that the neighbor loudspeaker of the neighbor endpoint device is actively playing the local copy into the space." Claims 10-11 depend on claim 9.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMAD F. MATAR, whose telephone number is (571) 272-7488. The examiner can normally be reached M-F, 9:00-5:30.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AHMAD F. MATAR/
Supervisory Patent Examiner, Art Unit 2693
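The flag-based "original vs. forwarded" logic that the §102 rejection attributes to Feng's flag adding module 540 can be sketched as follows. This is an illustrative reconstruction of the behavior described in the quoted passage, not Feng's actual implementation; the names (AudioPacket, FlagAddingModule) are invented for the example:

```python
# Sketch of the flag-based echo suppression described in the rejection:
# audio carrying no flag is treated as original, flagged, and transmitted;
# audio that already carries the flag is treated as a re-forwarded copy
# (i.e., echo) and dropped. All names here are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AudioPacket:
    samples: bytes
    flagged: bool = False  # the "signal of the flag" in the OA's description

class FlagAddingModule:
    def process(self, packet: AudioPacket) -> Optional[AudioPacket]:
        """Return the packet to transmit, or None to drop it."""
        if packet.flagged:
            # Flag already present: the audio was forwarded once before,
            # so it is "not original" and is removed (not transmitted).
            return None
        # No flag: the audio is original; add the flag and transmit.
        packet.flagged = True
        return packet

module = FlagAddingModule()
original = AudioPacket(samples=b"\x01\x02")
sent = module.process(original)   # original audio: transmitted, now flagged
echoed = module.process(sent)     # same packet seen again: dropped
print(sent.flagged, echoed)       # True None
```

This mirrors the examiner's reading that "original audio is transmitted and audio that is not original/forwarded/echo is not transmitted", which is the basis for mapping Feng onto the "taking action ... to prevent echo" limitation.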

Prosecution Timeline

Aug 16, 2024
Application Filed
Feb 26, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574458: METHOD FOR TRANSMITTING CALL AUDIO DATA AND APPARATUS
2y 5m to grant • Granted Mar 10, 2026
Patent 12563143: Pre-Authentication for Interactive Voice Response System
2y 5m to grant • Granted Feb 24, 2026
Patent 12549669: System and method to evaluate microservices integrated in Interactive Voice Response (IVR) operations
2y 5m to grant • Granted Feb 10, 2026
Patent 12462816: AUDIO ENCODING METHOD AND CODING DEVICE
2y 5m to grant • Granted Nov 04, 2025
Patent 9137370: Call center input/output agent utilization arbitration system
2y 5m to grant • Granted Sep 15, 2015
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 38%
With Interview (+11.5%): 50%
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
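The projection arithmetic can be reproduced from the raw counts shown above. A minimal sketch, assuming the grant probability is simply the career allow rate and the interview figure adds the stated lift (the dashboard's actual model and rounding rules are not disclosed):

```python
# Reproduce the headline metrics from the examiner's raw counts.
# Assumptions: grant probability = career allow rate (5 granted of
# 13 resolved), and the interview-adjusted figure adds the observed
# +11.5% lift before rounding to a whole percent.

granted = 5
resolved = 13

allow_rate = granted / resolved          # 0.3846... career allow rate
interview_lift = 0.115                   # +11.5% with interview

grant_probability = round(allow_rate * 100)                   # 38
with_interview = round((allow_rate + interview_lift) * 100)   # 50

print(f"Grant probability: {grant_probability}%")   # Grant probability: 38%
print(f"With interview:    {with_interview}%")      # With interview:    50%
```

Note that 5/13 is 38.46%, so the dashboard's "38%" is consistent with simple rounding of the career allow rate, and 38.46% + 11.5% rounds to the quoted 50%.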
