Prosecution Insights
Last updated: April 19, 2026
Application No. 18/258,302

ACOUSTIC FEEDBACK MANAGEMENT IN REAL-TIME AUDIO COMMUNICATION

Non-Final OA — §102, §112
Filed
Jun 19, 2023
Examiner
HASHEM, LISA
Art Unit
2692
Tech Center
2600 — Communications
Assignee
Dolby Laboratories Licensing Corporation
OA Round
1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 74% (263 granted / 355 resolved) — above average, +12.1% vs TC avg
Interview Lift: +12.7% on resolved cases with interview (moderate, ~+13%)
Avg Prosecution: 3y 4m typical timeline
Total Applications: 366 across all art units (11 currently pending)

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 18.1% (-21.9% vs TC avg)
§102: 32.5% (-7.5% vs TC avg)
§112: 30.5% (-9.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 355 resolved cases

Office Action

Rejections: §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12-8-2023 is acknowledged by the examiner.

Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Drawings
The drawings are objected to because FIGS. 1-5 show reference numbers associated with empty circles and boxes with no corresponding text. For example, FIG. 1 is a flow chart; however, details of what is contained within the flow chart are not shown, which makes it unclear what the applicant is showing. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claim 1 is objected to because of the following informalities: the comma ‘,’ in Line 16 (‘space,’) should be changed to a semicolon ‘;’. Appropriate correction is required.
Claim 3 is objected to because of the following informalities: the commas ‘,’ in Line 4 (‘device,’), Line 6 (‘device,’), Line 8 (‘mode’), Line 10 (‘device,’), and Line 12 (‘device,’) should be changed to semicolons ‘;’. Appropriate correction is required.
Claim 4 is objected to because of the following informalities: the comma ‘,’ in Line 5 (‘device,’) should be changed to a semicolon ‘;’. Appropriate correction is required.
Claim 5 is objected to because of the following informalities: the comma ‘,’ in Line 5 (‘device,’) should be changed to a semicolon ‘;’. Appropriate correction is required.
Claim 6 is objected to because of the following informalities: there should be a comma ‘,’ in line 8 after the limitation ‘mode’. Appropriate correction is required.
Claim 7 is objected to because of the following informalities: the colon ‘:’ should be after the limitation ‘include’ in Line 6. Appropriate correction is required.

Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 discloses ‘a method for managing acoustic feedback in real-time audio communications in a communications system…’ and ‘…providing, to a mitigation module, a request signal for requesting one or more measures against acoustic feedback…’. It is not clear how claim 1 manages acoustic feedback and what the definition of acoustic feedback is. It is not clear what ‘…one or more measures against acoustic feedback’ means, nor the inventive concept of managing acoustic feedback in the claim limitations. It is not clear how the request signal requests one or more measures against acoustic feedback. Appropriate action is required. Claims 2-14 and 17-20 depend on claim 1.

Claim 15 discloses ‘…include, in the encoded audio signal, metadata indicating whether there is a need for one or more measures against acoustic feedback…’. It is not clear what the definition of acoustic feedback is. It is not clear what ‘…one or more measures against acoustic feedback’ means, nor the inventive concept of whether there is a need for one or more measures against acoustic feedback in the claim limitations. It is not clear how the encoded audio signal includes metadata to indicate whether there is a need for one or more measures against acoustic feedback. An encoder for processing audio data with metadata containing information about measures against acoustic feedback is not mentioned in the disclosure. Appropriate action is required.

Claim 16 discloses ‘…extract, from the decoded audio signal, metadata indicating whether there is a need for measures against acoustic feedback…’. It is not clear what the definition of acoustic feedback is. It is not clear what ‘…measures against acoustic feedback’ means, nor the inventive concept of whether there is a need for measures against acoustic feedback in the claim limitations. It is not clear how the decoded audio signal includes metadata to indicate whether there is a need for measures against acoustic feedback. A decoder for processing audio data with metadata containing information about measures against acoustic feedback is not mentioned in the disclosure. Appropriate action is required.

Claim 6 recites the limitation "the playback volume" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 6 recites the limitation "the distance" in lines 11-12. There is insufficient antecedent basis for this limitation in the claim.
Claim 6 recites the limitation "the second communication unit" in line 13. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites the limitation "the distance" in lines 9-10. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites the limitation "the playback volume" in lines 10-11. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites the limitation "the second communication unit" in line 11. There is insufficient antecedent basis for this limitation in the claim.
Claim 8 recites the limitation "the first device" in line 2. There is insufficient antecedent basis for this limitation in the claim.
Claim 10 recites the limitation "the first device" in lines 2-3. There is insufficient antecedent basis for this limitation in the claim.
Claim 10 recites the limitation "the first client" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 10 recites the limitation "the second client" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 20 discloses ‘…the mitigation module is trained using a machine learning algorithm…’. It is not clear what the definition of machine learning algorithm is. It is not clear what trains a machine learning algorithm for a mitigation module. Appropriate action is required.

Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 8-16, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by EP 3,070,876 by Neystadt et al., hereinafter Neystadt (prior art cited in the IDS filed on 12-8-2023 by Applicant).

Regarding claim 1, Neystadt discloses a method for managing acoustic feedback in real-time audio communications in a communications system (Fig. 1), the method comprising: determining, by means of a detection module (Fig. 1, 117; i.e. multipoint control unit or MCU), whether a first communication device (Fig. 1: 113, 115, or 116; i.e. personal communications device) is in loudspeaker mode based on hardware information (i.e. device information) in the first communication device (i.e. information on whether each personal communications device has headphones connected or not, may be included in signaling) (paragraphs: 0011, 0033); determining, by means of the detection module (i.e. MCU), whether the first communication device is in real-time audio communications (i.e. teleconference) with a second communication device (Fig. 1: 113, 115, or 116; i.e. personal communications device) based on connection information in the first communication device (i.e. each communications device taking part in the teleconference signs in to the MCU using a pre-distributed conferencing id or URL) (paragraph: 0033); determining, by means of the detection module (i.e. MCU or conference server or conference service provider), whether the first communication device and the second communication device are in a same acoustic space (i.e. collocation) based on sensor information in the first communication device (i.e. another method for collocation can be done by matching audio received from multiple participants; the audio signals captured by the different devices in a same physical space will be very similar to each other with just some small differences in delay and attenuation related to the different distances from each of the devices to the different speakers) (paragraphs: 0051, 0054); upon determining by means of the detection module (i.e. MCU) that: the first communication device is in loudspeaker mode, the first communication device is in real-time audio communications with the second communication device, and the first communication device and the second communication device are in the same acoustic space, providing, to a mitigation module (i.e. conference service server), a request signal for requesting one or more measures against acoustic feedback (i.e. echo; paragraph: 0009) (paragraph: 0059; i.e. when, in a location, the participants' personal communications devices are co-located with a shared device, the Conference Service Server can decide whether to receive audio streams from all the participants' personal communications devices in the location (and, as before, mix it in order to avoid echo and/or to boost stereo effect and have multiple audio channels from the same room); alternatively, the server can decide to only use the audio stream coming from the shared device, suppressing the individual audio streams (coming from the communications devices co-located with the shared device), wherein the Conference Server may send a message to the personal communications devices co-located with a shared device, asking them not to send their audio streams).

Regarding claim 2, Neystadt discloses the method according to claim 1, further comprising: providing, by the mitigation module, one or more measures against acoustic feedback (paragraph: 0009) in response to receiving, at the mitigation module, the request signal (paragraph: 0059; i.e. when, in a location, the participants' personal communications devices are co-located with a shared device, the Conference Service Server can decide whether to receive audio streams from all the participants' personal communications devices in the location (and, as before, mix it in order to avoid echo and/or to boost stereo effect and have multiple audio channels from the same room); alternatively, the server can decide to only use the audio stream coming from the shared device, suppressing the individual audio streams (coming from the communications devices co-located with the shared device), wherein the Conference Server may send a message to the personal communications devices co-located with a shared device, asking them not to send their audio streams).

Regarding claim 3, Neystadt discloses the method according to claim 1, wherein the one or more measures against acoustic feedback include one or more of: decreasing, by means of the mitigation module, a playback volume of the first communication device (paragraphs: 0040, 0066), decreasing, by means of the mitigation module, a microphone gain of the second communication device, sending a notification to the first communication device requesting a user to switch to headphone mode (paragraphs: 0040, 0066), sending a notification to the first communication device requesting the user to mute a microphone of the first communication device, sending a notification to the first communication device requesting the user to mute a loudspeaker of the first communication device, and suppressing audio received from the first communication device.

Regarding claim 4, Neystadt discloses the method according to claim 1, further comprising: determining, by means of the detection module, a distance between the first communication device and the second communication device based on sensor information (i.e. GPS or in-door positioning) in the first communication device (paragraph: 0042), wherein the first communication device and the second communication device are determined to be in the same acoustic space if the distance between the first communication device and the second communication device is less than a distance threshold (paragraph: 0051).

Regarding claim 8, Neystadt discloses the method according to claim 1, wherein the sensor information of the first device is based on a non-acoustic sensor (i.e. GPS or in-door positioning) in the first communication device (paragraph: 0042).

Regarding claim 9, Neystadt discloses the method according to claim 1, wherein the sensor information of the first device is based on a wireless communication interface (i.e. GPS or in-door positioning) in the first communication device (paragraph: 0042).
Regarding claim 10, Neystadt discloses the method according to claim 1, wherein one or more of the detection module and the mitigation module is provided in the first device (i.e. MCU) (paragraphs: 0033, 0059) or, wherein the communications system comprises the first client, the second client, and a communications server (i.e. conference service provider or MCU or conference service server), wherein one or more of the detection module and the mitigation module is provided in the communications server (paragraphs: 0051, 0054).

Regarding claim 11, Neystadt discloses the method according to claim 1, wherein the first communication device comprises built-in loudspeakers and wherein the second communication device comprises a built-in microphone (Fig. 1: 113, 115, or 116; i.e. personal communications device) (paragraphs: 0032, 0040).

Regarding claim 12, Neystadt discloses a communication device (i.e. MCU) comprising circuitry configured to perform the method according to claim 1 (paragraphs: 0033, 0051, 0066).

Regarding claim 13, Neystadt discloses a communications system (Fig. 1) comprising a first communication device, a second communication device, a detection module, and a mitigation module, the system being configured to perform the method according to claim 1 (paragraphs: 0033, 0051, 0066).

Regarding claim 14, Neystadt discloses a non-transitory computer-readable storage medium comprising instructions which, when executed by a device having processing capability, cause the device to carry out the method of claim 1 (paragraphs: 0033, 0051, 0066, 0075).

Regarding claim 15, Neystadt discloses an encoder (Fig. 1, 117; i.e. MCU; paragraph: 0033) configured to: encode an audio signal (paragraph: 0059; i.e. when, in a location, the participants' personal communications devices are co-located with a shared device, the Conference Service Server can decide whether to receive audio streams from all the participants' personal communications devices in the location (and, as before, mix it in order to avoid echo and/or to boost stereo effect and have multiple audio channels from the same room); alternatively, the server can decide to only use the audio stream coming from the shared device, suppressing the individual audio streams (coming from the communications devices co-located with the shared device), wherein the Conference Server may send a message to the personal communications devices co-located with a shared device, asking them not to send their audio streams); and include, in the encoded audio signal, metadata indicating whether there is a need for one or more measures against acoustic feedback (i.e. echo; paragraph: 0009) (paragraph: 0059, as quoted above).

Regarding claim 16, Neystadt discloses a decoder (Fig. 1, 117; i.e. MCU; paragraph: 0033) configured to: decode an encoded audio signal (i.e. another method for collocation can be done by matching audio received from multiple participants; the audio signals captured by the different devices in a same physical space will be very similar to each other with just some small differences in delay and attenuation related to the different distances from each of the devices to the different speakers) (paragraphs: 0051, 0054); and extract, from the decoded audio signal, metadata indicating whether there is a need for measures against acoustic feedback (i.e. echo; paragraph: 0009) (paragraph: 0059, as quoted above).

Regarding claim 20, Neystadt discloses the method of claim 1, wherein the mitigation module is trained using a machine learning algorithm (paragraphs: 0033, 0051, 0066, 0075).

Allowable Subject Matter
Claims 5, 6, 7, 17, 18, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892 Form.

Any response to this action should be mailed to: Commissioner for Patents, P.O. Box 1450, Alexandria, VA 22313-1450; or faxed to: (571) 273-8300 (for formal communications intended for entry); or call: (571) 272-2600 (for customer service assistance).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LISA HASHEM, whose telephone number is 571-272-7542. The examiner can normally be reached on Monday and Thursday, 10 a.m. - 7 p.m. EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn R. Edwards, can be reached at 571-270-7136.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/LISA HASHEM/
Primary Examiner, Art Unit 2692
/CAROLYN R EDWARDS/
Supervisory Patent Examiner, Art Unit 2692
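The three-condition detection flow that the §102 rejection maps onto Neystadt can be sketched as follows. This is an illustrative sketch only: the class and function names, the threshold-based co-location check (claim 4's variant), and the specific mitigation measures returned are assumptions for illustration, not language from either the application or the reference.

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    """Illustrative per-device state; field names are assumptions."""
    loudspeaker_mode: bool   # from hardware info (e.g., headphones not connected)
    in_call_with: set        # peer device IDs from connection/signaling info
    position: tuple          # (x, y) from sensor info (e.g., GPS / indoor positioning)

def same_acoustic_space(a: DeviceState, b: DeviceState,
                        distance_threshold_m: float = 5.0) -> bool:
    """Claim 4's variant: co-located if the sensor-derived distance
    between the two devices is below a threshold (value is an assumption)."""
    dx = a.position[0] - b.position[0]
    dy = a.position[1] - b.position[1]
    return (dx * dx + dy * dy) ** 0.5 < distance_threshold_m

def detection_module(first: DeviceState, second: DeviceState,
                     second_id: str) -> bool:
    """Claim 1's three conditions: loudspeaker mode, active call with the
    second device, and shared acoustic space. True means a request signal
    should be sent to the mitigation module."""
    return (first.loudspeaker_mode
            and second_id in first.in_call_with
            and same_acoustic_space(first, second))

def mitigation_module(request_signal: bool) -> list:
    """Returns a subset of claim 3's candidate measures (chosen arbitrarily
    here) when the request signal is received."""
    if not request_signal:
        return []
    return ["decrease_playback_volume", "suppress_audio_from_first_device"]
```

For example, two devices one metre apart on the same call, with the first in loudspeaker mode, would trip all three conditions and trigger mitigation; changing the call peer or moving the devices apart would not.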

Prosecution Timeline

Jun 19, 2023
Application Filed
Mar 15, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603959
METHOD AND ELECTRONIC DEVICE FOR REMOVING ECHO FLOWING IN DUE TO EXTERNAL DEVICE
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12581012
UNIQUE CALL PROGRESS TONE INVOCATION TIMER PER CLIENT OR ACCESS TYPE
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12581011
USER INTERFACE TO SELECT OR CHANGE CAPTION LANGUAGE FOR CAPTIONED TELEPHONE SERVICE SYSTEM
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12568175
SPEAKERPHONE AND SERVER DEVICE FOR ENVIRONMENT ACOUSTICS DETERMINATION AND RELATED METHODS
Granted Mar 03, 2026 • 2y 5m to grant
Patent 12531943
Exercise-Based Call Processing Method, Apparatus, and Electronic Device
Granted Jan 20, 2026 • 2y 5m to grant
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 87% (+12.7%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 355 resolved cases by this examiner. Grant probability derived from career allow rate.
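The headline projections follow directly from the examiner's career counts shown above; a minimal sketch of the arithmetic (rounding to the nearest whole percentage point is an assumption about how the tool displays figures):

```python
# Career counts from the Examiner Intelligence section above.
granted, resolved = 263, 355
interview_lift_pts = 12.7          # percentage-point lift with interview

allow_rate = granted / resolved    # career allow rate, ~0.74

grant_probability = round(allow_rate * 100)                      # -> 74
with_interview = round(allow_rate * 100 + interview_lift_pts)    # -> 87
```

This reproduces the displayed 74% grant probability and the 87% with-interview figure.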
