Prosecution Insights
Last updated: April 19, 2026
Application No. 17/398,527

EXTRANEOUS VOICE REMOVAL FROM AUDIO IN A COMMUNICATION SESSION

Final Rejection §103
Filed: Aug 10, 2021
Examiner: OGUNBIYI, OLUWADAMILOLA M
Art Unit: 2653
Tech Center: 2600 (Communications)
Assignee: Avaya Management L.P.
OA Round: 7 (Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 8-9
Time to Grant: 3y
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 78% (above average; 236 granted / 304 resolved; +15.6% vs TC avg)
Interview Lift: +18.6% (strong; allowance among resolved cases with an interview vs. without)
Typical Timeline: 3y avg prosecution; 31 currently pending
Career History: 335 total applications across all art units
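The headline figures in this section follow from simple arithmetic on the counts shown. A minimal sketch, assuming the tool derives them this way (treating the with-interview figure as allow rate plus the reported lift is our assumption, not the vendor's documented methodology):

```python
# Sketch of the arithmetic behind the examiner statistics shown above.
# Counts come from this page; the formulas are assumptions about the
# tool's methodology.
granted = 236    # cases granted by this examiner
resolved = 304   # total resolved cases

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")                # → 78%

interview_lift = 0.186  # +18.6% lift reported on this page
print(f"With interview: {allow_rate + interview_lift:.0%}")  # → 96%
```

The two printed values match the 78% and 96% shown on this page, which suggests the "with interview" probability is simply the career allow rate shifted by the interview lift.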

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Based on career data from 304 resolved cases; deltas are relative to the Tech Center average estimate.
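Each delta above appears to be measured against a single Tech Center baseline, which can be recovered by subtracting the delta from the examiner's rate. An illustrative check, assuming the deltas are simple differences (the interpretation is ours, not the tool's documented definition):

```python
# Illustrative check: if each delta is (examiner rate - TC average), the
# implied Tech Center baseline can be recovered by subtraction. Rates and
# deltas are copied from the figures above.
examiner_rate = {"101": 0.201, "103": 0.470, "102": 0.121, "112": 0.137}
delta_vs_tc   = {"101": -0.199, "103": 0.070, "102": -0.279, "112": -0.263}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 3)
              for s in examiner_rate}
print(tc_average)  # every statute implies the same ~40% baseline
```

Under this reading, all four deltas imply the same 40% Tech Center average, consistent with a single baseline line on the original chart.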

Office Action

§103
DETAILED ACTION

Claims 1-8, 11-18 and 20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

With regard to the Non-Final Office Action of 24 March 2025, the Applicant filed a response on 24 July 2025.

Response to Arguments

The Applicant has amended the independent claims to attempt to distinguish them over the most recently applied prior art. In particular, the Applicant indicates that the Jasleen et al. reference does not disclose a machine learning algorithm that outputs a confidence score as provided in currently recited claim 1 (Remarks: page 7, par. 5), and that this reference does not output the degree of confidence (Remarks: page 8, par. 2). The Examiner acquiesces to the Applicant's remarks on this point, and because the amendment to the independent claims necessitates new grounds of rejection, the Examiner will address the claims as currently presented.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 3, 5, 11, 12, 13, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Metzar et al. (US 2020/0336848 A1; hereafter Metzar) in view of Sivaraman et al. (US 2022/0084509 A1; hereafter Sivaraman; provisional 63/077,928 filed on 14 September 2020), further in view of Yamamoto et al. (US 2022/0059120 A1; hereafter Yamamoto), and further in view of Jasleen et al. (US 2022/0238091 A1; hereafter Jasleen).
For claim 1, Metzar discloses a method comprising: receiving audio comprising a digital signal representing sound captured from an endpoint connected to a communication session over a network, [[wherein the communication session includes an audio channel for exchanging user communications between a user operating the endpoint and another user operating another endpoint connected to the communication session]] (Metzar: [0128] — In another example, the mobile devices of the users, such as 2544/2543, etc., may be used to record user voice and submit the data wirelessly via the application to the server 2532; the user voice data and other data may be submitted 2530; [0010] — a communication session over a bridge for allowing communication); and after removing the components, transmitting the digital signal to the other endpoint over the audio channel (Metzar: [0131] — The data set may have unwanted audio data, such as background noise identified by a particular frequency and/or amplitude, which may need to be subtracted 2656 to clarify the primary purpose of the audio, which is likely to be the user's voice only. In this case, the audio data may be filtered to remove a noise floor, other voices, undesired noises, etc., and the modified signal can then be forwarded to a presentation device, such as a display 2608 and/or a loudspeaker 2612. As the signal has unwanted audio data removed to create the modified presentation signal, the modified presentation signal 2658 is forwarded and queued or played on a presentation device as audio, images, video, etc. 2662).
The reference of Metzar fails to teach the further limitation of this claim, for which the reference of Sivaraman is now introduced to teach as: receiving audio comprising a digital signal representing sound captured from an endpoint connected to a communication session over a network, wherein the communication session includes an audio channel for exchanging user communications between a user operating the endpoint and another user operating another endpoint connected to the communication session (Sivaraman: [0115] — speech signals getting enhanced at a server in a multi-party communication service (indicating communication over an audio channel for exchanging user communications between one user at one endpoint and another user at another endpoint)).

The reference of Metzar provides teaching for receiving audio and removing components of extraneous voice from the audio. It differs from the claimed invention, however, in that the claimed invention further provides teaching for a communication session including an audio channel for exchanging user communication between a user operating the endpoint connected to the communication session. This is not new to the art, as the reference of Sivaraman is seen to teach above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Sivaraman, which provides teaching for a communication session including an audio channel for exchanging user communication between a user operating the endpoint connected to the communication session, with that of Metzar, which receives audio and removes components of extraneous voice present in the audio, to thereby arrive at the claimed invention.
The combination of both prior art elements would have provided the predictable result of processing audio received in an audio channel by performing noise removal so that users at the endpoints engaged in an audio communication may receive clean audio free of unwanted noises. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007). The combination of Metzar in view of Sivaraman fails to teach the further limitations of this claim, for which Yamamoto is now introduced to teach as: inputting the audio into a machine learning algorithm trained to output confidence scores indicating how confident the machine learning algorithm is that voices identified in the audio are extraneous (Yamamoto: [0184]–[0185] — machine learning being used to obtain a probability of a particular sound detection identified in input audio signals, whereby non-user-sounds are detected which could be voices of humans other than the wearer of the headphones, the non-user-sound detecting uses machine learning which outputs a probability of the particular sound detection); obtaining a confidence score, output by the machine learning algorithm, for an extraneous voice from a person other than the user (Yamamoto: [0184]–[0185] — machine learning being used to obtain a probability of a particular sound detection identified in input audio signals, whereby non-user-sounds are detected which could be voices of humans other than the wearer of the headphones, the non-user-sound detecting uses machine learning which outputs a probability of the particular sound detection). The combination of Metzar in view of Sivaraman provides teaching for a communication session over a network whereby components comprising extraneous voice present in audio are removed with the audio transmitted over an audio channel to a user at an endpoint. 
This combination, however, fails to teach a machine learning algorithm able to obtain a confidence score for the determination of an extraneous voice from a person other than the user. This is not new to the art, as the reference of Yamamoto is seen to teach above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Yamamoto, which uses a machine learning model to output the confidence as a probability of detecting the voice of a person other than a user in an audio signal, with the teaching of the combination, which provides teaching for a communication session including an audio channel for exchanging user communication between a user operating the endpoint connected to the communication session, to thereby arrive at the claimed invention.

The combination of both prior art elements would have provided the predictable result of confidently identifying unwanted noises in an audio signal, so that a next step of removing such noises may be taken. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
The combination of Metzar in view of Sivaraman further in view of Yamamoto fails to teach the further limitations of this claim, for which the reference of Jasleen is now introduced to teach as: upon determining the confidence score satisfies a threshold level of confidence, removing components of the digital signal comprising the extraneous voice from the audio (Jasleen: [0018] — having a machine learning engine which evaluates data in order to be able to perform noise cancellation, such that environmental noise is listened for and background noise is cancelled based on a determined degree of confidence, so that if a threshold of the obtained confidence regarding the noise is met, noise cancellation gets performed; [0017] — performing noise cancellation to cancel out background noises such as a baby crying (a baby crying is an example of an extraneous voice of a person other than the user, teaching the cancelling out of the extraneous voice when a threshold level of confidence is met)).

The combination of Metzar in view of Sivaraman further in view of Yamamoto provides teaching for the use of a machine learning algorithm to detect the presence of extraneous sounds within received audio, the machine learning model being able to indicate a confidence of the detection. This combination, however, fails to teach removing the extraneous voice from the audio after the machine learning model’s confidence score is determined to satisfy a threshold. This is not new to the art, as the reference of Jasleen is seen to teach above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Jasleen, which receives audio and removes components of extraneous voice present in the audio, with the teaching of the combination, which detects the presence of the extraneous voice by a machine learning model, to thereby arrive at the claimed invention.
The combination of both prior art elements would have provided the predictable result of removing noises from other people which could be distracting, at the point that the system determines the intensity of the noise (extraneous voice) to be so distracting as to interfere with understanding the audio. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

For claim 2, claim 1 is incorporated and the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen discloses the method, wherein the machine learning algorithm is trained to recognize a user voice of the user (Sivaraman: [0028]-[0033] with provisional application support in the section titled Speech Separation Model — describing how the neural network model is trained using the target speaker based on speaker recognition and for which a mask is determined to remove the interfering speakers).

For claim 3, claim 1 is incorporated and the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen discloses the method, comprising: training the machine learning algorithm using one or more samples of the user voice (Sivaraman: [0030] with provisional application support in the section titled Speech Separation Model — training audio signals using samples of the target speaker).
For claim 5, claim 3 is incorporated and the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen discloses the method, wherein training the machine learning algorithm comprises: training the machine learning algorithm using one or more extraneous voice samples that were not intended for transmittal (Sivaraman: [0030] with provisional application support in the section titled Speech Separation Model — training data where the training data set includes other speaker signals and is mixed with random utterances from other speakers (this training data set is not one intended for transmittal during the audio communication session)).

As for claim 11, apparatus claim 11 and method claim 1 are related as apparatus and the method of using same, with each claimed element’s function corresponding to the claimed method step. Metzar in [0026] provides a non-transitory computer readable storage medium, and in [0148] provides a processing unit, these being suitable to read upon the limitations of the claim. Accordingly, claim 11 is similarly rejected under the same rationale as applied above with respect to method claim 1.

As for claim 12, apparatus claim 12 and method claim 2 are related as apparatus and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 12 is similarly rejected under the same rationale as applied above with respect to method claim 2.

As for claim 13, apparatus claim 13 and method claim 3 are related as apparatus and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 13 is similarly rejected under the same rationale as applied above with respect to method claim 3.

As for claim 15, apparatus claim 15 and method claim 5 are related as apparatus and the method of using same, with each claimed element’s function corresponding to the claimed method step.
Accordingly, claim 15 is similarly rejected under the same rationale as applied above with respect to method claim 5.

As for claim 20, computer program product claim 20 and method claim 1 are related as a computer program product storing executable instructions required for performing the claimed method steps on a computer. Metzar in [0026] provides a non-transitory computer readable storage medium suitable to read upon the limitations of the claim. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to method claim 1.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Metzar (US 2020/0336848 A1) in view of Sivaraman (US 2022/0084509 A1) further in view of Yamamoto (US 2022/0059120 A1) and further in view of Jasleen (US 2022/0238091 A1) as applied to claim 3, and further in view of Park et al. (US 2021/0019641: hereafter — Park).

For claim 4, claim 3 is incorporated, but the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen fails to teach the limitation of this claim, for which Park is now introduced to teach as the method, wherein training the machine learning algorithm includes: in response to the user initiating the communication session, requesting the one or more samples from the user (Park: [0104] — where the processor requests voice data from the user to be used in training the model).

The combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen provides teaching for training a machine learning algorithm using samples of the user voice. It differs from the claimed invention in that the claimed invention further provides teaching for the requesting of one or more samples from the user. This is not new to the art, as the reference of Park is seen to teach above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Park, which requests user samples to be used for training, with the teaching of the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen, which provides the training of the machine learning algorithm using samples of a user voice, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of improving accuracy in user voice recognition. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 14, apparatus claim 14 and method claim 4 are related as apparatus and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 14 is similarly rejected under the same rationale as applied above with respect to method claim 4.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Metzar (US 2020/0336848 A1) in view of Sivaraman (US 2022/0084509 A1) further in view of Yamamoto (US 2022/0059120 A1) and further in view of Jasleen (US 2022/0238091 A1) as applied to claim 3, and further in view of Gupta et al. (US 2022/0108701: hereafter — Gupta, provisional 63/086,384 filed on 01 October 2020).
For claim 6, claim 3 is incorporated, but the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen fails to disclose the limitation of this claim, for which Gupta is now introduced to teach as the method, wherein the confidence scores include a second confidence score for a voice of the user, wherein the second confidence score does not satisfy the threshold level of confidence (Gupta: [0047] — machine learning technique applied to speaker (voice) recognition; [0026], [0027] — computing a confidence score for authenticating a speaker based on an input audio signal and checking to see if it fails to meet a threshold).

The combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen provides teaching for training a machine learning algorithm using samples of the user voice. It differs from the claimed invention in that the claimed invention further provides teaching for the presence of a further confidence score for the voice of the user and checking whether it does not satisfy a threshold confidence level. This is not new to the art, as the reference of Gupta is seen to teach above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Gupta, which checks whether a confidence level of voice recognition does not meet a threshold, with the teaching of the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen, which provides the training of the machine learning algorithm using samples of a user voice, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of denying a user access to have his/her speech included in an audio signal if the system is unable to confidently recognize the speaker’s voice. See KSR Int’l Co. v. Teleflex Inc., 550 U.S.
398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 16, apparatus claim 16 and method claim 6 are related as apparatus and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 16 is similarly rejected under the same rationale as applied above with respect to method claim 6.

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Metzar (US 2020/0336848 A1) in view of Sivaraman (US 2022/0084509 A1) further in view of Yamamoto (US 2022/0059120 A1) and further in view of Jasleen (US 2022/0238091 A1) as applied to claim 1, and further in view of Paranjpe et al. (EP 2,257,034 B1: hereafter — Paranjpe).

For claim 7, claim 1 is incorporated, but the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen fails to disclose the limitation of this claim, for which Paranjpe is now introduced to teach as the method, wherein the machine learning algorithm considers intensity of the extraneous voice and/or a language spoken by the extraneous voice when generating the confidence score (Paranjpe: [0145] — where the noise detector checks the sharpness and width of the noise as well as the correlation between the signal envelope in the time or frequency domain and then additionally identifies the noise using a probability measure).

The combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen provides teaching for the presence of a machine learning algorithm to determine the presence of extraneous voice. It differs from the claimed invention in that the claimed invention further provides teaching for the machine learning algorithm considering the intensity of the extraneous voice when generating the confidence score. This, however, is not new to the art, as the reference of Paranjpe is seen to teach above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Paranjpe, which checks the intensity of noise in a signal, with the teaching of the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen, which checks for the confidence of the presence of extraneous voice, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result that a high intensity of extraneous voice present in an audio signal would serve as a direct measure of noise present in the applicable section of the signal, to thereby have the system perform a noise removal process. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 17, apparatus claim 17 and method claim 7 are related as apparatus and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 17 is similarly rejected under the same rationale as applied above with respect to method claim 7.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Metzar (US 2020/0336848 A1) in view of Sivaraman (US 2022/0084509 A1) further in view of Yamamoto (US 2022/0059120 A1) and further in view of Jasleen (US 2022/0238091 A1) as applied to claim 1, and further in view of Zhang et al. (US 2019/0149927 A1: hereafter — Zhang).
For claim 8, claim 1 is incorporated, but the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen fails to disclose the limitation of this claim, for which Zhang is now introduced to teach as the method, comprising: notifying the user that the extraneous voice has been identified (Zhang: [0128] — in the situation that noise is encountered, a specific recommended action could be taken, such as the prompting of the user for a noise adjustment (which is a way of notifying the user about the presence of noise; previously applied references show that extraneous voices qualify as noise)); wherein removing the components is performed in response to determining that the user has granted permission for removal of the extraneous voice (Zhang: [0128] — in the situation that noise is encountered, a specific recommended action could be taken, such as the prompting of the user for a noise adjustment (indicating that the user grants permission for the noise adjustment or noise removal)).

The combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen provides teaching for the presence of a machine learning algorithm to determine the presence of extraneous voice. It differs from the claimed invention in that the claimed invention further provides teaching for notifying the user of the identification of extraneous voice and removing the components of the extraneous voice in response to the user granting permission for the removal. This, however, is not new to the art, as the reference of Zhang is seen to teach above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Zhang, which notifies a user of the presence of noise and removes the noise after the user grants permission for the removal, with the teaching of the combination of Metzar in view of Sivaraman further in view of Yamamoto and further in view of Jasleen, which provides the removal of extraneous voice present in received audio, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of a better user experience, by granting the user more control over the noise that should be removed. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 18, apparatus claim 18 and method claim 8 are related as apparatus and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 18 is similarly rejected under the same rationale as applied above with respect to method claim 8.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to OLUWADAMILOLA M. OGUNBIYI, whose telephone number is (571) 272-4708. The Examiner can normally be reached Monday through Thursday, 8:00 AM to 5:30 PM Eastern Standard Time.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s Supervisor, PARAS D SHAH, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUWADAMILOLA M OGUNBIYI/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

10/03/2025
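For orientation, the method recited in claim 1 (ML-output confidence score, threshold check, removal of the extraneous-voice components, then transmission) can be sketched as follows. The function names, the 0.8 threshold value, and the toy dict-of-components signal representation are all hypothetical stand-ins for illustration; none of this reflects the applicant's actual implementation or any cited reference's code.

```python
# Hypothetical sketch of the claim 1 pipeline: an ML model scores each
# detected voice with a confidence that it is extraneous, and a signal
# component is removed only when that score satisfies a threshold.
CONFIDENCE_THRESHOLD = 0.8  # hypothetical "threshold level of confidence"

def remove_extraneous_voices(components, detections):
    """components: dict of component id -> signal samples.
    detections: (component_id, confidence) pairs, where confidence is the
    ML algorithm's score that the voice is from someone other than the user."""
    cleaned = dict(components)
    for component_id, confidence in detections:
        # Claim 1: removal happens only upon determining the confidence
        # score satisfies the threshold level of confidence.
        if confidence >= CONFIDENCE_THRESHOLD:
            cleaned.pop(component_id, None)
    return cleaned  # this signal is then transmitted to the other endpoint

# Toy run: a background ("tv") voice scores above threshold and is removed;
# the user's own voice scores low and is kept.
audio = {"user_voice": [0.1, 0.2], "tv_voice": [0.5, 0.4]}
scores = [("user_voice", 0.05), ("tv_voice", 0.93)]
print(list(remove_extraneous_voices(audio, scores)))  # → ['user_voice']
```

The threshold gate is the limitation the Examiner maps to Jasleen's confidence-based noise cancellation; the per-voice scores correspond to the probability output the Examiner attributes to Yamamoto.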

Prosecution Timeline

Aug 10, 2021
Application Filed
Sep 20, 2022
Non-Final Rejection — §103
Nov 09, 2022
Interview Requested
Dec 06, 2022
Examiner Interview Summary
Dec 06, 2022
Applicant Interview (Telephonic)
Dec 12, 2022
Response Filed
Apr 03, 2023
Non-Final Rejection — §103
Jun 02, 2023
Interview Requested
Jul 06, 2023
Response Filed
Sep 15, 2023
Final Rejection — §103
Feb 21, 2024
Request for Continued Examination
Feb 28, 2024
Response after Non-Final Action
Mar 24, 2024
Non-Final Rejection — §103
Jun 26, 2024
Response Filed
Aug 23, 2024
Final Rejection — §103
Nov 05, 2024
Response after Non-Final Action
Nov 24, 2024
Response after Non-Final Action
Dec 26, 2024
Request for Continued Examination
Dec 30, 2024
Response after Non-Final Action
Mar 19, 2025
Non-Final Rejection — §103
Jun 24, 2025
Interview Requested
Jun 30, 2025
Examiner Interview Summary
Jun 30, 2025
Applicant Interview (Telephonic)
Jul 24, 2025
Response Filed
Oct 03, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579979
NAMING DEVICES VIA VOICE COMMANDS
2y 5m to grant; granted Mar 17, 2026
Patent 12537007
METHOD FOR DETECTING AIRCRAFT AIR CONFLICT BASED ON SEMANTIC PARSING OF CONTROL SPEECH
2y 5m to grant; granted Jan 27, 2026
Patent 12508086
SYSTEM AND METHOD FOR VOICE-CONTROL OF OPERATING ROOM EQUIPMENT
2y 5m to grant; granted Dec 30, 2025
Patent 12499885
VOICE-BASED PARAMETER ASSIGNMENT FOR VOICE-CAPTURING DEVICES
2y 5m to grant; granted Dec 16, 2025
Patent 12469510
TRANSFORMING SPEECH SIGNALS TO ATTENUATE SPEECH OF COMPETING INDIVIDUALS AND OTHER NOISE
2y 5m to grant; granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 8-9
Grant Probability: 78%
With Interview (+18.6%): 96%
Median Time to Grant: 3y
PTA Risk: High
Based on 304 resolved cases by this examiner. Grant probability derived from career allow rate.
