Prosecution Insights
Last updated: April 19, 2026
Application No. 18/773,339

DATA DRIVEN AUDIO ENHANCEMENT

Status: Non-Final OA (§103)
Filed: Jul 15, 2024
Examiner: GAY, SONIA L
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Cisco Technology Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82%, above average (701 granted / 855 resolved; +20.0% vs TC avg)
Interview Lift: +11.4% (moderate), across resolved cases with an interview
Typical Timeline: 3y 0m average prosecution; 33 applications currently pending
Career History: 888 total applications across all art units (855 resolved + 33 pending)
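As a quick sanity check on how these headline figures relate, the snippet below recomputes them in Python. The assumption that the with-interview probability is simply the career allow rate plus the interview lift is inferred from the displayed numbers, not stated by the tool.

    # Recompute the dashboard's headline numbers (assumption: the
    # "with interview" figure is career allow rate + interview lift).
    granted, resolved = 701, 855
    career_allow = granted / resolved                  # 0.8199 -> shown as 82%
    interview_lift = 0.114                             # +11.4 percentage points
    with_interview = career_allow + interview_lift     # assumed additive
    print(f"career allow rate: {career_allow:.1%}")    # 82.0%
    print(f"with interview:    {with_interview:.1%}")  # 93.4% -> shown as 93%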

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)

Deltas are measured against the Tech Center average estimate. Based on career data from 855 resolved cases.

Office Action (§103)
DETAILED ACTION

This action is in response to the initial filing of application no. 18/773,339 on 07/15/2024. Claims 1-20 are still pending in this application, with claims 1, 8 and 14 being independent.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 7-9, 11, 14, 15, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lindahl et al. (US 8,639,516) ("Lindahl") in view of Wang et al. (US 2017/0061978) ("Wang").

For claim 1, Lindahl discloses a method (Abstract), comprising: receiving first audio data (voice sample) generated by a first device (Fig. 15, 272, 276, 280, 282, 284, 286 and Fig. 25, 422; column 13 lines 29-47, 51-57, column 18 lines 38-41), the first audio data representing a first voice of a first user (the voice sample is obtained for a user; column 13 lines 29-47, 51-57, column 18 lines 38-41); generating a voice profile that represents voice characteristics of the first voice of the first user represented in the first audio data (Fig. 25, 424 and 426; column 18 lines 41-51); receiving second audio data generated by the first device and associated with a communication session between the first device and a second device (Fig. 4, 32, 58, 60, 82, Fig. 26, 432; column 8 lines 5-12, 18-20, column 18 lines 52-60); analyzing the second audio data using the voice profile to identify the first voice of the first user represented in the second audio data (Fig. 26, 434; column 18 lines 60-65); enhancing the first voice of the first user represented in the second audio data (Fig. 4, 20, 84, Fig. 26, 436; column 8 lines 20-33, column 18 line 65 - column 19 line 3); suppressing the second voice of the second user represented in the second audio data (Fig. 4, 20, 84, Fig. 26, 436; column 8 lines 20-33, column 18 line 65 - column 19 line 3); and sending the second audio data to the second device via the communication session (Fig. 4, 88; column 8 lines 32-34).

Yet, Lindahl fails to teach the following: analyzing the second audio data using a deep learning model to identify a second voice associated with a second user represented in the second audio data; using the deep learning model to enhance the first voice of the first user; and using the deep learning model to suppress the second voice of the second user.
However, Wang discloses a system and method for separating noise from speech in real time using a deep neural network (DNN) (Abstract), comprising the following: analyzing audio data using a deep learning model (deep neural network based speech separation system, Fig. 4, 420; [0041]) to identify noise, wherein the noise is associated with a second user (the DNN is trained on multi-talker babble noise, [0032]) ([0027-0030] [0042-0044] [0060] [0062] [0063]); using the deep learning model to enhance speech (Fig. 4, 460; [0044] [0045]); and using the deep learning model to suppress the noise (Fig. 4, 460; [0044] [0045]).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve Lindahl's invention in the same way as Wang's invention to achieve the following predictable results, for the purpose of improving the intelligibility of speech by suppressing or removing noise without substantially increasing processor or processing complexity (Wang, [0001-0005]): the system further comprises a deep learning model to process audio signals; audio, e.g. the second audio, is further analyzed using a deep learning model to identify noise including babble noise (Lindahl, Fig. 10, 186; column 11 lines 13-29), which comprises a second voice associated with a second user represented in the second audio data; the deep learning model is further used to enhance speech, e.g. the first voice of the first user; and the deep learning model is further used to suppress noise including the babble noise which comprises the second voice of the second user.

For claim 2, Lindahl and Wang further disclose analyzing the second audio data using the deep learning model to identify an unwanted noise signal represented in the second audio data (Lindahl, white noise, Fig. 10, 188; column 11 lines 13-29) (Wang, [0027-0030] [0042-0044] [0060] [0062] [0063]); and prior to sending the second audio data, using the deep learning model to suppress the unwanted noise signal represented in the second audio data (Lindahl, Fig. 4, 20, 84, Fig. 26, 436; column 8 lines 20-33, column 18 line 65 - column 19 line 3) (Wang, [0027-0030] [0042-0044] [0060] [0062]).

For claims 4, 11 and 17, Lindahl further discloses, wherein the voice profile is a first voice profile, further comprising: receiving third audio data representing a third voice of a third user (Lindahl, a user using the voice-related feature of the electronic device includes any number of users using the voice-related feature; therefore, a voice profile is generated for each user, including a first and third user; column 13 lines 29-47, 51-57, column 18 lines 38-41); generating a second voice profile that represents second voice characteristics of the third voice of the third user represented in the third audio data (Lindahl, column 18 lines 41-51); and determining, using the first voice profile and the second voice profile, that particular voice characteristics represented in the second audio data are more closely correlated to the voice characteristics than the second voice characteristics (Lindahl, the voice characteristics received in the second audio data are compared to the voice profiles of known users, e.g. the first and third users; the user represented in the second audio data is determined to be a known user, e.g. the first user, since the received voice characteristics correlate with the first user's voice profile; column 13 lines 29-38).
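To make the mapped claim-1 pipeline concrete, here is a minimal Python sketch of the enroll-then-filter flow described above: build a voice profile from the first audio data, then boost profile-matching frames of the session audio and attenuate the rest before sending. Every function name, the magnitude-spectrum "profile", the cosine test, and the fixed gains are illustrative assumptions; the references themselves use a trained deep neural network, not this toy gain rule.

    # Illustrative sketch only: a simplified, hypothetical rendering of the
    # claim-1 pipeline (Lindahl-style voice profile + Wang-style separation).
    import numpy as np

    def frame(signal, frame_len=256):
        """Split a 1-D signal into non-overlapping frames."""
        n = len(signal) // frame_len
        return signal[: n * frame_len].reshape(n, frame_len)

    def make_voice_profile(first_audio):
        """Average magnitude spectrum over frames: a toy speaker 'profile'."""
        spec = np.abs(np.fft.rfft(frame(first_audio), axis=1))
        return spec.mean(axis=0)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def enhance_session_audio(second_audio, profile, match_thresh=0.9,
                              gain_up=1.5, gain_down=0.2):
        """Boost frames matching the profile (first user's voice), attenuate
        non-matching frames (second voice / babble). A real system would use
        a trained DNN mask here, per the Wang reference."""
        out = []
        for f in frame(second_audio):
            spec = np.abs(np.fft.rfft(f))
            g = gain_up if cosine(spec, profile) >= match_thresh else gain_down
            out.append(f * g)
        return np.concatenate(out)

    # Usage: enroll on 'first audio data', then process 'second audio data'.
    rng = np.random.default_rng(0)
    t = np.arange(4096) / 8000.0
    first_audio = np.sin(2 * np.pi * 220 * t)           # first user's voice
    mixed = first_audio + 0.5 * rng.standard_normal(len(t))
    profile = make_voice_profile(first_audio)
    enhanced = enhance_session_audio(mixed, profile)    # sent to second device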
For claim 7, Lindahl and Wang further disclose wherein one or more steps are performed by the first device (Lindahl, column 8 lines 5-12) (Wang, [0007] [0036]).

For claim 8, Lindahl discloses a system (Abstract), comprising: one or more processors (Fig. 1, 12); one or more non-transitory computer-readable media (Fig. 1, 14 and 16) storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations (column 5 lines 47-60) comprising: receiving first audio data (voice sample) generated by a first device (Fig. 15, 272, 276, 280, 282, 284, 286 and Fig. 25, 422; column 13 lines 29-47, 51-57, column 18 lines 38-41), the first audio data representing a first voice of a first user (the voice sample is obtained for a user; column 13 lines 29-47, 51-57, column 18 lines 38-41); generating a voice profile that represents voice characteristics of the first voice of the first user represented in the first audio data (Fig. 25, 424 and 426; column 18 lines 41-51); receiving second audio data generated by the first device and associated with a communication session between the first device and a second device (Fig. 4, 32, 58, 60, 82, Fig. 26, 432; column 8 lines 5-12, 18-20, column 18 lines 52-60); analyzing the second audio data using the voice profile to identify the first voice of the first user represented in the second audio data (Fig. 26, 434; column 18 lines 60-65); enhancing the first voice of the first user represented in the second audio data (Fig. 4, 20, 84, Fig. 26, 436; column 8 lines 20-33, column 18 line 65 - column 19 line 3); suppressing the second voice of the second user represented in the second audio data (Fig. 4, 20, 84, Fig. 26, 436; column 8 lines 20-33, column 18 line 65 - column 19 line 3); and sending the second audio data to the second device via the communication session (Fig. 4, 88; column 8 lines 32-34).

Yet, Lindahl fails to teach the following: analyzing the second audio data using one or more models to identify a second voice associated with a second user represented in the second audio data; using the one or more models to enhance the first voice of the first user; and using the one or more models to suppress the second voice of the second user.

However, Wang discloses a system and method for separating noise from speech in real time using a deep neural network (DNN) (Abstract), comprising the following: analyzing audio data using a deep learning model (deep neural network based speech separation system, Fig. 4, 420; [0041]) to identify noise, wherein the noise is associated with a second user (the DNN is trained on multi-talker babble noise, [0032]) ([0027-0030] [0042-0044] [0060] [0062] [0063]); using the deep learning model to enhance speech (Fig. 4, 460; [0044] [0045]); and using the deep learning model to suppress the noise (Fig. 4, 460; [0044] [0045]).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve Lindahl's invention in the same way as Wang's invention to achieve the following predictable results, for the purpose of improving the intelligibility of speech by suppressing or removing noise without substantially increasing processor or processing complexity (Wang, [0001-0005]): the system further comprises one or more models (deep learning model) to process audio signals; audio, e.g. the second audio, is further analyzed using a deep learning model to identify noise including babble noise (Lindahl, Fig. 10, 186; column 11 lines 13-29), which comprises a second voice associated with a second user represented in the second audio data; the one or more models (deep learning model) are further used to enhance speech, e.g. the first voice of the first user; and the one or more models (deep learning model) are further used to suppress noise including the babble noise which comprises the second voice of the second user.

For claim 9, Lindahl and Wang further disclose analyzing the second audio data using the one or more models to identify an unwanted noise signal represented in the second audio data (Lindahl, white noise, Fig. 10, 188; column 11 lines 13-29) (Wang, [0027-0030] [0042-0044] [0060] [0062] [0063]); and prior to sending the second audio data, using the one or more models to suppress the unwanted noise signal represented in the second audio data (Lindahl, Fig. 4, 20, 84, Fig. 26, 436; column 8 lines 20-33, column 18 line 65 - column 19 line 3) (Wang, [0027-0030] [0042-0044] [0060] [0062]).

For claim 14, Lindahl discloses a first device (Abstract), comprising: one or more processors (Fig. 1, 12); a microphone (Fig. 1, 32); one or more computer-readable media (Fig. 1, 14 and 16) storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations (column 5 lines 47-60) comprising: receiving first audio data (voice sample) generated by a first device (Fig. 15, 272, 276, 280, 282, 284, 286 and Fig. 25, 422; column 13 lines 29-47, 51-57, column 18 lines 38-41), the first audio data representing a first voice of a first user (the voice sample is obtained for a user; column 13 lines 29-47, 51-57, column 18 lines 38-41); generating a voice profile that represents voice characteristics of the first voice of the first user represented in the first audio data (Fig. 25, 424 and 426; column 18 lines 41-51); receiving second audio data generated by the first device and associated with a communication session between the first device and a second device (Fig. 4, 32, 58, 60, 82, Fig. 26, 432; column 8 lines 5-12, 18-20, column 18 lines 52-60); analyzing the second audio data using the voice profile to identify the first voice of the first user represented in the second audio data (Fig. 26, 434; column 18 lines 60-65); enhancing the first voice of the first user represented in the second audio data (Fig. 4, 20, 84, Fig. 26, 436; column 8 lines 20-33, column 18 line 65 - column 19 line 3); suppressing the second voice of the second user represented in the second audio data (Fig. 4, 20, 84, Fig. 26, 436; column 8 lines 20-33, column 18 line 65 - column 19 line 3); and sending the second audio data to the second device via the communication session (Fig. 4, 88; column 8 lines 32-34).

Yet, Lindahl fails to teach the following: analyzing the second audio data using one or more models to identify a second voice associated with a second user represented in the second audio data; using the one or more models to enhance the first voice of the first user; and using the one or more models to suppress the second voice of the second user.
However, Wang discloses a system and method for separating noise from speech in real time using a deep neural network (DNN) (Abstract), comprising the following: analyzing audio data using a deep learning model (deep neural network based speech separation system, Fig. 4, 420; [0041]) to identify noise, wherein the noise is associated with a second user (the DNN is trained on multi-talker babble noise, [0032]) ([0027-0030] [0042-0044] [0060] [0062] [0063]); using the deep learning model to enhance speech (Fig. 4, 460; [0044] [0045]); and using the deep learning model to suppress the noise (Fig. 4, 460; [0044] [0045]).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve Lindahl's invention in the same way as Wang's invention to achieve the following predictable results, for the purpose of improving the intelligibility of speech by suppressing or removing noise without substantially increasing processor or processing complexity (Wang, [0001-0005]): the system further comprises one or more models (deep learning model) to process audio signals; audio, e.g. the second audio, is further analyzed using a deep learning model to identify noise including babble noise (Lindahl, Fig. 10, 186; column 11 lines 13-29), which comprises a second voice associated with a second user represented in the second audio data; the one or more models (deep learning model) are further used to enhance speech, e.g. the first voice of the first user; and the one or more models (deep learning model) are further used to suppress noise including the babble noise which comprises the second voice of the second user.

For claim 15, Lindahl and Wang further disclose analyzing the second audio data using the model to identify an unwanted noise signal represented in the second audio data (Lindahl, white noise, Fig. 10, 188; column 11 lines 13-29) (Wang, [0027-0030] [0042-0044] [0060] [0062] [0063]); and prior to sending the second audio data, using the model to suppress the unwanted noise signal represented in the second audio data (Lindahl, Fig. 4, 20, 84, Fig. 26, 436; column 8 lines 20-33, column 18 line 65 - column 19 line 3) (Wang, [0027-0030] [0042-0044] [0060] [0062]).

For claim 19, Wang further discloses wherein the model is a deep learning model (Wang, [0027-0030] [0032] [0042-0044] [0060] [0062] [0063]).

Claims 3, 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lindahl et al. (US 8,639,516) ("Lindahl") in view of Wang et al. (US 2017/0061978) ("Wang") and further in view of Zhao et al. ("Robust Speaker Identification in Noisy and Reverberant Conditions").

For claim 3, the combination of Lindahl and Wang fails to teach the following: augmenting the deep learning model with the voice profile such that the deep learning model identifies the first voice of the first user in audio data, wherein the first voice of the first user is identified by analyzing the second audio data at least partly using the deep learning model. However, Zhao discloses a system and method for performing robust speaker identification in noisy conditions (Abstract), comprising the following: a deep learning model (a deep neural network which generates an ideal binary mask) is further augmented with speaker models (voice profiles) to identify a speaker (2. System Overview and Front-End Processing, 2.1. Auditory Features and IBM Definition, 2.3. Mask Estimation via DNN; 3. Recognition Methodology, 3.1. Bounded Marginalization Module, 3.2. Direct Masking Module, 3.4. Multi-condition Fusion and Module Combination, pg. 4025-4027). Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Lindahl and Wang in the same way as Zhao's invention to achieve the following predictable results, for the purpose of improving the functionality of the system by further performing automatic speaker recognition (Zhao, Abstract, 1. Introduction, pg. 4025): the deep learning model is further augmented with speaker models, e.g. the voice profile, such that the deep learning model further identifies the speaker, e.g. the first voice of the first user, in audio data, wherein the first voice of the first user is identified by analyzing the audio data, e.g. the second audio data, at least partly using the deep learning model.

For claim 10, the combination of Lindahl and Wang fails to teach the following: augmenting the one or more models with the voice profile such that the one or more models identify the first voice of the first user in audio data, wherein the first voice of the first user is identified by analyzing the second audio data at least partly using the one or more models. However, Zhao discloses a system and method for performing robust speaker identification in noisy conditions (Abstract), comprising the following: a deep learning model (a deep neural network which generates an ideal binary mask) is further augmented with speaker models (voice profiles) to identify a speaker (2. System Overview and Front-End Processing, 2.1. Auditory Features and IBM Definition, 2.3. Mask Estimation via DNN; 3. Recognition Methodology, 3.1. Bounded Marginalization Module, 3.2. Direct Masking Module, 3.4. Multi-condition Fusion and Module Combination, pg. 4025-4027). Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Lindahl and Wang in the same way as Zhao's invention to achieve the following predictable results, for the purpose of improving the functionality of the system by further performing automatic speaker recognition (Zhao, Abstract, 1. Introduction, pg. 4025): the one or more models are further augmented with speaker models, e.g. the voice profile, such that the one or more models further identify the speaker, e.g. the first voice of the first user, in audio data, wherein the first voice of the first user is identified by analyzing the audio data, e.g. the second audio data, at least partly using the one or more models.

For claim 16, the combination of Lindahl and Wang fails to teach the following: augmenting the model with the voice profile such that the one or more models identify the first voice of the first user in audio data, wherein the first voice of the first user is identified by analyzing the second audio data at least partly using the model. However, Zhao discloses a system and method for performing robust speaker identification in noisy conditions (Abstract), comprising the following: a deep learning model (a deep neural network which generates an ideal binary mask) is further augmented with speaker models (voice profiles) to identify a speaker (2. System Overview and Front-End Processing, 2.1. Auditory Features and IBM Definition, 2.3. Mask Estimation via DNN; 3. Recognition Methodology, 3.1. Bounded Marginalization Module, 3.2. Direct Masking Module, 3.4. Multi-condition Fusion and Module Combination, pg. 4025-4027). Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Lindahl and Wang in the same way as Zhao's invention to achieve the following predictable results, for the purpose of improving the functionality of the system by further performing automatic speaker recognition (Zhao, Abstract, 1. Introduction, pg. 4025): the model is further augmented with speaker models, e.g. the voice profile, such that the model further identifies the speaker, e.g. the first voice of the first user, in audio data, wherein the first voice of the first user is identified by analyzing the audio data, e.g. the second audio data, at least partly using the model.
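Since the Zhao reference's front end trains a DNN to estimate an ideal binary mask (IBM), a toy rendering of the IBM itself may help. The STFT framing, Hann window, and 0 dB local-SNR criterion below are textbook defaults rather than parameters taken from the paper, and the function names are assumptions.

    # Toy illustration of an ideal binary mask (IBM): 1 in time-frequency
    # units where speech dominates noise, 0 elsewhere. A DNN (as in Zhao)
    # would be trained to estimate this mask from noisy input alone.
    import numpy as np

    def stft_mag(x, frame_len=256, hop=128):
        """Magnitude STFT via Hann-windowed overlapping frames."""
        idx = np.arange(0, len(x) - frame_len, hop)
        frames = np.stack([x[i:i + frame_len] for i in idx])
        return np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))

    def ideal_binary_mask(speech, noise, criterion_db=0.0):
        """1 where local SNR exceeds the criterion, else 0."""
        s, n = stft_mag(speech), stft_mag(noise)
        local_snr_db = 20 * np.log10((s + 1e-9) / (n + 1e-9))
        return (local_snr_db > criterion_db).astype(float)

    rng = np.random.default_rng(1)
    t = np.arange(8000) / 8000.0
    speech = np.sin(2 * np.pi * 300 * t)
    noise = 0.3 * rng.standard_normal(len(t))
    mask = ideal_binary_mask(speech, noise)     # apply to the noisy mixture's
    mixture_mag = stft_mag(speech + noise)      # magnitudes to isolate speech
    separated = mask * mixture_mag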
Claims 6, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lindahl et al. (US 8,639,516) ("Lindahl") in view of Wang et al. (US 2017/0061978) ("Wang") and further in view of Tachibana (US 2008/0086301).

For claims 6, 13 and 20, the combination of Lindahl and Wang fails to teach the following: receiving video data generated by a first device that is associated with the second audio data of the communication session; adding a tag to the video data that indicates that noise suppression associated with speaker identification is being performed for the communication session; and sending the video data that includes the tag to the second device via the communication session such that a visual representation of the tag is presented on the second device.

However, Tachibana discloses an audio communication apparatus and method (Abstract), comprising the following: receiving video data generated by a first device that is associated with audio data of a communication session ([0059] [0078]); adding a tag to the video data that indicates that noise suppression associated with speaker identification is being performed for the communication session (in a TV phone, voice data is transmitted with the image data; a voice correction function is applied to the voice data; a type and parameter of the voice correction is added to the voice data; since the image and voice data are transmitted together, both are tagged with the type and parameter of the voice correction; [0030-0032] [0078]); and sending the video data that includes the tag to the second device via the communication session such that a visual representation of the tag is presented on the second device ([0032-0037] [0078]).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Lindahl and Wang in the same way as Tachibana's invention to achieve the following predictable results, for the purpose of increasing user satisfaction by notifying a receiving device that speech enhancement is being performed so that the receiving device can request a modification of the speech enhancement process based on preference ([0007-0010]): further receiving video data generated by a first device that is associated with the second audio data of the communication session; adding a tag to the video data that indicates that noise suppression associated with speaker identification is being performed for the communication session; and sending the video data that includes the tag to the second device via the communication session such that a visual representation of the tag is presented on the second device.
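As a rough illustration of the Tachibana-style tagging mapped above, the sketch below bundles a video frame with a machine-readable tag indicating that speaker-identification-based noise suppression is active, so the receiving device could render a visual indicator. The field names and dict packaging are assumptions for illustration, not the reference's actual format.

    # Hypothetical sketch: attach an enhancement tag to outgoing video.
    from dataclasses import dataclass, asdict

    @dataclass
    class EnhancementTag:
        noise_suppression: bool = True
        method: str = "speaker-id"  # suppression tied to speaker identification

    def tag_video_frame(frame_bytes: bytes, tag: EnhancementTag) -> dict:
        """Bundle a video frame with its enhancement tag for transmission."""
        return {"video": frame_bytes, "tag": asdict(tag)}

    packet = tag_video_frame(b"\x00" * 16, EnhancementTag())
    # Receiver side: render a visual indicator when packet["tag"] is present.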
Claims 5, 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lindahl et al. (US 8,639,516) ("Lindahl") in view of Wang et al. (US 2017/0061978) ("Wang") and further in view of Cheethirala (US 2013/0016819).

For claims 5, 12 and 18, the combination of Lindahl and Wang fails to teach the following: receiving third audio data generated by the first device and associated with the communication session, the third audio data representing noise in an environment of the first device; determining, using the voice profile, that the third audio data do not represent the first voice of the first user; and based at least in part on the third audio data not representing the first voice of the first user, refraining from sending the third audio data to the second device.

However, Cheethirala discloses a speaker recognition system and method (Abstract), comprising the following: receiving audio data generated by a first device ([0042]), the audio data representing noise in an environment of the device ([0042]); determining, using a voice profile, that the audio data does not represent a voice of a first user ([0038] [0039] [0042]); and based at least in part on the audio data not representing the first voice of the first user, refraining from transmitting the audio data ([0042]).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Lindahl and Wang in the same way as Cheethirala's invention to achieve the following predictable results, for the purpose of preventing the transmission of unwanted signals in a communication session (Abstract): further receiving third audio data generated by the first device and associated with the communication session, the third audio data representing noise in an environment of the first device; further determining, using the voice profile, that the third audio data do not represent the first voice of the first user; and based at least in part on the third audio data not representing the first voice of the first user, refraining from sending (transmitting) the third audio data to the second device.
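And a minimal sketch of the Cheethirala-style gating mapped above: outgoing audio chunks are transmitted only if they match the enrolled voice profile, so environmental noise is never sent. The matching predicate is injected because the actual matching rule (e.g. the cosine test from the claim-1 sketch earlier) is an assumption, not the reference's method.

    # Hypothetical sketch: transmit only profile-matching audio chunks.
    def gate_outgoing_audio(chunks, profile, matches_profile):
        """Yield only chunks attributed to the enrolled user; drop the rest."""
        for chunk in chunks:
            if matches_profile(chunk, profile):
                yield chunk  # first user's voice: send to the second device
            # non-matching chunks (environmental noise, other voices) are
            # simply not yielded, i.e. the device refrains from sending them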
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SONIA L GAY, whose telephone number is (571) 270-1951. The examiner can normally be reached Monday-Friday, 9-5 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SONIA L GAY/
Primary Examiner, Art Unit 2657

Prosecution Timeline

Jul 15, 2024
Application Filed
Mar 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602617: DATA MANUFACTURING FRAMEWORKS FOR SYNTHESIZING SYNTHETIC TRAINING DATA TO FACILITATE TRAINING A NATURAL LANGUAGE TO LOGICAL FORM MODEL (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602408: STREAMING OF NATURAL LANGUAGE (NL) BASED OUTPUT GENERATED USING A LARGE LANGUAGE MODEL (LLM) TO REDUCE LATENCY IN RENDERING THEREOF (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602539: PROACTIVE ASSISTANCE VIA A CASCADE OF LLMS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596708: SYSTEMS AND METHODS FOR AUTOMATED CODE GENERATION FOR CALCULATION BASED ON ASSOCIATED FORMAL SPECIFICATIONS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591604: INTELLIGENT ASSISTANT (granted Mar 31, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+11.4%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 855 resolved cases by this examiner. Grant probability is derived from the career allow rate.
