Prosecution Insights
Last updated: April 19, 2026
Application No. 18/090,644

Need-Based Generation of Multi-Modal Communication Aids to Aid Verbal Communications

Final Rejection §103
Filed: Dec 29, 2022
Examiner: OGUNBIYI, OLUWADAMILOLA M
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: AT&T Intellectual Property I, L.P.
OA Round: 2 (Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 12m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 78% — above average (236 granted / 304 resolved; +15.6% vs TC avg)
Interview Lift: strong, +18.6% among resolved cases with interview
Typical Timeline: 2y 12m average prosecution; 31 applications currently pending
Career History: 335 total applications across all art units

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 304 resolved cases

Office Action

§103
DETAILED ACTION

Claims 1–20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

With regard to the Non-Final Office Action from 27 August 2025, the Applicant has filed a response on 28 November 2025. The Specification was objected to for a minor informality. The Specification has been amended to address the minor informality. The Examiner hereby withdraws the objection.

Response to Arguments

Applicant’s arguments with respect to the independent claims have been considered but are moot due to the new grounds of rejection necessitated by the amendment to the independent claims. The claims will be addressed by their current presentation in the following section.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 6, 9, 10, 14, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yapamanu et al. (US 2016/0062987 A1: hereafter — Yapamanu) in view of Bergenlid et al. (US 2019/0036856 A1: hereafter — Bergenlid).

For claim 1, Yapamanu discloses a method comprising: receiving, by a user system comprising a processor executing a multi-modal communication module, during a communications session hosted by a communications service and established between the user system and an additional user system, a request for a multi-modal communication aid to be used (Yapamanu: [0024] — establishing a communication session between users; [0037] — audio visual communications (indicating a multi-modal communication); [0040] — the communication module including a processor; [0046] — during the video session, one user can request to anonymise his/her appearance as an animated avatar and this is presented to the customer as the other participant (the request for the avatar being the communication aid, this request for the avatar being one that is received at the user system of the user requesting such as requested by the user)).

The reference of Yapamanu provides teaching for the execution of a multi-modal communication module hosting a communication session between a user system and an additional user system. This reference however fails to teach the further limitations of this claim regarding the presentation of a multi-modal communication aid to the participating users. This is however not new to the art as the reference of Bergenlid is now introduced to teach this as: in response to the request, receiving, by the user system from a user associated with the user system, a selection of the multi-modal communication aid, wherein the multi-modal communication aid is automatically presented, via the user system, in response to recognition, by the user system, of a trigger that occurs during the communications session (Bergenlid: FIG. 6C Part 642, FIG. 6D Part 662, [0133]–[0134] — during a communication session between two users, one user can issue the speech trigger ‘Assistant, show me on a map please’ which then triggers the assistant on the device to then present the user of the device with a display information item (that is automatically presented as a multi-modal communication aid based on the received command); [0128]); providing, by the user system, the multi-modal communication aid to the additional user system via the communications service (Bergenlid: FIG. 6C Part 642, FIG. 6D Part 662, [0133]–[0134] — during a communication session between two users, one user can issue the speech trigger ‘Assistant, show me on a map please’ which then triggers the assistant on the device to then present the other device user with a display information item (that is automatically presented as a multi-modal communication aid based on the received command); [0128]).

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to improve upon the technique of Yapamanu which provides teaching for the execution of a multi-modal communication module hosting a communication session between a user system and an additional user system, by incorporating the known technique of Bergenlid which provides the presentation of a multi-modal communication aid to the participating users based on receiving a trigger from one of the users, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of making assistance services easily available to users engaged in the communication without needing the users to manually access the desired assistive service. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
For claim 2, claim 1 is incorporated and the combination of Yapamanu in view of Bergenlid discloses the method, wherein the communications session comprises live audio (Yapamanu: [0018] — establishing a cross-human language communication session with a remote agent (to indicate a live communication session); [0022] — the communication possibly being in an audio-only mode).

For claim 6, claim 1 is incorporated and the combination of Yapamanu in view of Bergenlid discloses the method, wherein the multi-modal communication aid selected comprises a pre-defined multi-modal communication aid (Yapamanu: [0022] — selection of an avatar mode for animation (the avatar mode being the communication aid, and also having been presented to the user for use, also, the use of the avatar as presented here is a pre-defined communication aid as it is already available to be made use of)).

For claim 9, the reference of Yapamanu discloses a method comprising: generating, by a user system comprising a processor executing a multi-modal communication module, during a communications session hosted by a communications service and established between the user system and an additional user system, a request for a multi-modal communication aid to be used (Yapamanu: [0024] — establishing a communication session between users; [0037] — audio visual communications (indicating a multi-modal communication); [0040] — the communication module including a processor; [0046] — during the video session, one user can request to anonymise his/her appearance as an animated avatar and this is presented to the customer as the other participant (the request for the avatar being the communication aid, this request for the avatar being one that is received at the user system of the user requesting such as requested by the user)), [[wherein the request for a multi-modal communication aid is generated by the user system in response to recognition, by the user system, of a trigger that occurs during the communications session]]; sending, by the user system, during the communications session, the request for a multi-modal communication aid to be used (Yapamanu: [0024] — establishing a communication session between users; [0037] — audio visual communications (indicating a multi-modal communication); [0040] — the communication module including a processor; [0046] — during the video session, one user can request to anonymise his/her appearance as an animated avatar and this is presented to the customer as the other participant (the request for the avatar being the communication aid, this request for the avatar being sent from the user system of the user requesting such)).

The reference of Yapamanu provides teaching for the execution of a multi-modal communication module hosting a communication session between a user system and an additional user system. This reference however fails to teach the further limitations of this claim regarding the presentation of a multi-modal communication aid to the participating users. This is however not new to the art as the reference of Bergenlid is now introduced to teach this as: wherein the request for a multi-modal communication aid is generated by the user system in response to recognition, by the user system, of a trigger that occurs during the communications session (Bergenlid: FIG. 6C Part 642, FIG. 6D Part 662, [0133]–[0134] — during a communication session between two users, one user can issue the speech trigger ‘Assistant, show me on a map please’ (which the system detects as an explicit invocation for assistance, indicating a recognition of the command)); in response to the request, receiving, by the user system, the multi-modal communication aid (Bergenlid: FIG. 6C Part 642, FIG. 6D Part 662, [0133]–[0134] — during a communication session between two users, one user can issue the speech trigger ‘Assistant, show me on a map please’ which then triggers the assistant on the device to then present the user of the device with a display information item (that is automatically presented as a multi-modal communication aid based on the received command); [0128]); and presenting, by the user system, the multi-modal communication aid to a user associated with the user system (Bergenlid: FIG. 6C Part 642, FIG. 6D Part 662, [0133]–[0134] — during a communication session between two users, one user can issue the speech trigger ‘Assistant, show me on a map please’ which then triggers the assistant on the device to then present the user of the device with a display information item (that is automatically presented as a multi-modal communication aid based on the received command); [0128]). The same motivation for the introduction of the Bergenlid reference as applied to claim 1 above is applicable here.

For claim 10, claim 9 is incorporated and the combination of Yapamanu in view of Bergenlid discloses the method, wherein the communications session comprises live audio (Yapamanu: [0018] — establishing a cross-human language communication session with a remote agent (to indicate a live communication session); [0022] — the communication possibly being in an audio-only mode).

For claim 14, claim 9 is incorporated and the combination of Yapamanu in view of Bergenlid discloses the method, wherein the trigger comprises a word or a phrase (Bergenlid: FIG. 6C Part 642, FIG. 6D Part 662, [0133]–[0134] — during a communication session between two users, one user can issue the speech trigger ‘Assistant, show me on a map please’ (this being a trigger phrase)).

As for claim 16, system claim 16 and method claim 1 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step.
Yapamanu in [0062] provides teaching of a processor and computer-readable storage suitable to read upon the claimed invention. Accordingly, claim 16 is similarly rejected under the same rationale as applied above with respect to method claim 1.

As for claim 19, system claim 19 and method claim 6 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to method claim 6.

Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Yapamanu (US 2016/0062987 A1) in view of Bergenlid (US 2019/0036856 A1) as applied to claims 2 and 10, further in view of Lasatar et al. (US 2022/0383399 A1: hereafter — Lasatar).

For claim 3, claim 2 is incorporated and the combination of Yapamanu in view of Bergenlid discloses the method, wherein the communications session further comprises live video (Yapamanu: [0018] — establishing a cross-human language communication session with a remote agent (to indicate a live communication session); [0022] — the communication possibly being in a video and audio mode). The combination of Yapamanu in view of Bergenlid fails to completely disclose the further limitation of this claim, for which the reference of Lasatar is now introduced to teach as: wherein the multi-modal communication aid comprises an avatar-based visualization superimposed on the live video (Lasatar: [0075] — superimposing avatars associated with a customer on an actual live video). The combination of Yapamanu in view of Bergenlid provides teaching for the presence of an avatar being used in a live communication session (Yapamanu: [0047]) but differs from the claimed invention in that the claimed invention further provides teaching for superimposing an avatar-based visualisation on a live video. This isn’t new to the art as the reference of Lasatar is seen to teach above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to improve upon the technique of the combination of Yapamanu in view of Bergenlid which provides an avatar-based assistance in a live communication session, by applying the known teaching of Lasatar which superimposes the avatar on a live video communication, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of presenting a video of a virtual persona interacting with live objects to both participants, without having the actual participants be unnecessarily engaged in the physical actions. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

For claim 11, it is analysed and rejected by the same reasons set forth in the rejection of claim 3 above given that the presentation of both instant claims has similar limitations.

Claims 4, 5, 12, 13, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Yapamanu (US 2016/0062987 A1) in view of Bergenlid (US 2019/0036856 A1) as applied to claims 2 and 10, further in view of Duraibabu et al. (US 11,641,592 B1: hereafter — Duraibabu).
For claim 4, claim 2 is incorporated but the combination of Yapamanu in view of Bergenlid fails to fully disclose the limitations of this claim, for which the reference of Duraibabu is now introduced to teach as: the method, wherein receiving the request for the multi-modal communication aid comprises receiving the request for the multi-modal communication aid in response to a technology issue with the live audio experienced by a user associated with the additional user system during the communications session (Duraibabu: Col 16 lines 6–18 — a customer to call in (indicating a live audio communications session) and request assistance with solving a problem which the user is having with the user device (this coming from the user end at the user’s system, also, an indication that the problem is related to the device indicates a technology issue) such that an aid is provided through the customer support component, network diagnostic skill, and the diagnostic component).

The combination of Yapamanu in view of Bergenlid provides teaching for requesting a multi-modal communication aid with regard to a communication session, but differs from the claimed invention in that the claimed invention now further provides teaching for receiving the request for the multi-modal communication aid in response to a technology issue with the live audio experienced by a user. This isn’t new to the art as the reference of Duraibabu is seen to teach as provided above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Duraibabu which teaches a user making a request for assistance regarding a technology issue, with the teaching of the combination of Yapamanu in view of Bergenlid which provides making a request for a multi-modal communication aid, to thereby come up with the claimed invention.
The combination of both prior art elements would have provided the predictable result of having assistance made available to resolve communication issues possibly experienced during a communications session, so that the communications issue can be resolved. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

For claim 5, claim 2 is incorporated and as applied to claim 4 above, the combination of Yapamanu in view of Bergenlid further in view of Duraibabu discloses the method, wherein receiving the request for the multi-modal communication aid comprises receiving the request for the multi-modal communication aid in response to a user issue with the live audio experienced by a user associated with the additional user system during the communications session (Duraibabu: Col 16 lines 6–18 — a customer to call in (indicating a live audio communications session) and request assistance with solving a problem which the user is having with the user device (this coming from the user end at the user’s system, as an indication that the user is experiencing a user issue) such that an aid is provided through the customer support component, network diagnostic skill, and the diagnostic component). The same motivation applied to claim 4 for introducing the Duraibabu reference is applicable here still.
For claim 12, claim 10 is incorporated but the combination of Yapamanu in view of Bergenlid fails to disclose the limitations of this claim, for which the reference of Duraibabu is now introduced to teach as the method, wherein the request is associated with a technology issue with the live audio experienced by the user associated with the user system during the communications session (Duraibabu: Col 16 lines 6–18 — a customer to call in (indicating a live audio communications session) and request assistance with solving a problem which the user is having with the user device (this coming from the user end at the user’s system, also, an indication that the problem is related to the device indicates a technology issue) such that an aid is provided through the customer support component, network diagnostic skill, and the diagnostic component). The same motivation applied to claim 4 for introducing the Duraibabu reference is applicable here still.

For claim 13, claim 10 is incorporated and as applied to claim 12 above, the combination of Yapamanu in view of Bergenlid further in view of Duraibabu discloses the method, wherein the request for the multi-modal communication aid is associated with a user issue with the live audio experienced by the user associated with the user system during the communications session (Duraibabu: Col 16 lines 6–18 — a customer to call in (indicating a live audio communications session) and request assistance with solving a problem which the user is having with the user device (this coming from the user end at the user’s system, as an indication that the user is experiencing a user issue) such that an aid is provided through the customer support component, network diagnostic skill, and the diagnostic component).

As for claim 17, system claim 17 and method claim 4 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step.
Accordingly, claim 17 is similarly rejected under the same rationale as applied above with respect to method claim 4.

As for claim 18, system claim 18 and method claim 5 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 18 is similarly rejected under the same rationale as applied above with respect to method claim 5.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Yapamanu (US 2016/0062987 A1) in view of Bergenlid (US 2019/0036856 A1) as applied to claim 1, further in view of Amer et al. (US 2019/0304157 A1: hereafter — Amer).

For claim 7, claim 1 is incorporated but the combination of Yapamanu in view of Bergenlid fails to disclose the limitation of this claim, for which the reference of Amer is now introduced to teach as the method, wherein the multi-modal communication aid selected comprises a new multi-modal communication aid created, on-the-fly, by the user associated with the user system (Amer: [0032] — ‘[s]ome aspects of this disclosure relate to an AI agent or system generating an animation or video in response to human input and/or human interactions’ (indicating the presence of a request to generate an animation on-the-fly, as requested by a user)). The combination of Yapamanu in view of Bergenlid provides teaching for receiving a request for a multi-modal communication aid, but differs from the claimed invention in that the claimed invention further provides teaching for receiving the selection of the multi-modal communication aid based upon a new multi-modal communication aid created on-the-fly by a user associated with the user system. This is however not new to the art as the reference of Amer is seen to teach above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to improve upon the technique of the combination of Yapamanu in view of Bergenlid which provides a user making a request for a multi-modal communication aid, by applying the known teaching of Amer which provides that the multi-modal communication aid selection comprises the creation of the aid by the user associated with the user system, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of being able to present an animation of a tutorial to show users how a particular task or action should be performed, in a situation where no video of performing such an action is readily available. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yapamanu (US 2016/0062987 A1) in view of Bergenlid (US 2019/0036856 A1) as applied to claims 1 and 16, further in view of Lawler et al. (US 2010/0250196 A1: hereafter — Lawler).
For claim 8, claim 1 is incorporated but the combination of Yapamanu in view of Bergenlid fails to disclose the limitation of this claim, for which the reference of Lawler is now introduced to teach as: the method, wherein the multi-modal communication aid based upon a new multi-modal communication aid created based upon an artificial intelligence decision made by the multi-modal communication module (Lawler: [0022] — a cognitive engine that operates as a machine assistant that automatically starts an action or completes an action (indicating an artificial intelligence decision being made to aid the user) to then transmit appropriate information to a human assistant (showing an aid provided based upon an artificial intelligence decision which a human/user may then accept); [0070] — employing artificial intelligence-based schemes (teaching of the cognitive engine being AI-based)).

The combination of Yapamanu in view of Bergenlid provides teaching for receiving a request for a multi-modal communication aid, but differs from the claimed invention in that the claimed invention further provides teaching for the communication aid being created based upon an artificial intelligence decision. This is however not new to the art as the reference of Lawler is seen to teach above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to improve upon the technique of the combination of Yapamanu in view of Bergenlid which provides a user making a request for a multi-modal communication aid, by applying the known teaching of Lawler where an artificial intelligence decision is created as an aid to the user, to thereby come up with the claimed invention.
The combination of both prior art elements would have provided the predictable result of presenting the user with an adaptive system that is able to address certain needs of the user by making certain decisions regarding the user’s communication comfort without the user having to request it in the first place. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 20, system claim 20 and method claim 8 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to method claim 8.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Yapamanu (US 2016/0062987 A1) in view of Bergenlid (US 2019/0036856 A1) as applied to claim 9, further in view of White et al. (US 10,742,817 B1: hereafter — White).

For claim 15, claim 14 is incorporated and as applied above, the combination of Yapamanu in view of Bergenlid fails to teach the limitation of this claim, for which the reference of White is now introduced to teach as the method, wherein the trigger can be included in a profile (White: Col 3 line 62 – Col 4 line 5, Col 4 lines 28–33, Col 5 lines 40–46 — having a trigger term associated with a moderator for the purpose of performing an action during a communication session; Col 2 lines 57–59 — accessing a moderator profile (indicating that the moderator’s profile contains the trigger terms/words)). The combination of Yapamanu in view of Bergenlid provides teaching for requesting that a multimodal communication aid be used during a communication session based on receiving a trigger, but differs from the claimed invention in that the claimed invention further provides that the trigger can be included in a profile. This isn’t new to the art as the reference of White is seen to teach above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to improve upon the technique of the combination of Yapamanu in view of Bergenlid which provides for requesting that a multimodal communication aid be used during a communication session based on receiving a trigger, by applying the known teaching of White which provides that the request for the aid is associated with a profile, thereby coming up with the claimed invention. The combination of both prior art elements would have provided the predictable result of granting a communication participant the ease of accessing communications assistance actions through words pre-stored in a profile, instead of having the user manipulate control buttons during the conversation. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to OLUWADAMILOLA M. OGUNBIYI whose telephone number is (571)272-4708. The Examiner can normally be reached Monday – Thursday (8:00 AM – 5:30 PM Eastern Standard Time). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s Supervisor, PARAS D. SHAH, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUWADAMILOLA M OGUNBIYI/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

03/14/2026
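The reply-deadline rules in the Conclusion above can be worked out concretely with simple calendar-month arithmetic. The sketch below assumes the Mar 13, 2026 date shown in the prosecution timeline is the mailing date (the signature block is dated 03/14/2026, so the actual deadlines may shift by a day); `add_months` is a small helper, not a USPTO tool.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to the target month's length."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30,
                31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

mailing = date(2026, 3, 13)  # assumed mailing date of the final action

# File by here to get the advisory-action rule on extension fees
two_month_date = add_months(mailing, 2)
# Shortened statutory period: reply due without extension-of-time fees
shortened_period = add_months(mailing, 3)
# Absolute statutory cutoff: six months from mailing
statutory_cutoff = add_months(mailing, 6)

print(two_month_date)    # 2026-05-13
print(shortened_period)  # 2026-06-13
print(statutory_cutoff)  # 2026-09-13
```

Replies after the shortened statutory period but before the six-month cutoff require extension fees under 37 CFR 1.136(a), calculated as described in the Conclusion.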

Prosecution Timeline

Dec 29, 2022
Application Filed
Aug 23, 2025
Non-Final Rejection — §103
Nov 28, 2025
Response Filed
Mar 13, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579979 — NAMING DEVICES VIA VOICE COMMANDS (2y 5m to grant; granted Mar 17, 2026)
Patent 12537007 — METHOD FOR DETECTING AIRCRAFT AIR CONFLICT BASED ON SEMANTIC PARSING OF CONTROL SPEECH (2y 5m to grant; granted Jan 27, 2026)
Patent 12508086 — SYSTEM AND METHOD FOR VOICE-CONTROL OF OPERATING ROOM EQUIPMENT (2y 5m to grant; granted Dec 30, 2025)
Patent 12499885 — VOICE-BASED PARAMETER ASSIGNMENT FOR VOICE-CAPTURING DEVICES (2y 5m to grant; granted Dec 16, 2025)
Patent 12469510 — TRANSFORMING SPEECH SIGNALS TO ATTENUATE SPEECH OF COMPETING INDIVIDUALS AND OTHER NOISE (2y 5m to grant; granted Nov 11, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 96% (+18.6%)
Median Time to Grant: 2y 12m
PTA Risk: Moderate
Based on 304 resolved cases by this examiner. Grant probability derived from career allow rate.
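The headline percentages above appear to follow directly from the examiner's career counts. A minimal sketch, assuming the dashboard rounds the raw allow rate and applies the interview lift additively (that methodology is an assumption, not stated by the source):

```python
# Career counts shown in the Examiner Intelligence panel
granted, resolved = 236, 304

allow_rate = granted / resolved      # 0.7763... -> displayed as 78%
interview_lift = 0.186               # +18.6 percentage points with an interview

with_interview = allow_rate + interview_lift  # 0.9623... -> displayed as 96%

print(f"Career allow rate: {allow_rate:.0%}")
print(f"With interview:    {with_interview:.0%}")
```

Running this reproduces the displayed 78% and 96% figures, which suggests the "With Interview" number is simply the career allow rate plus the observed lift.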
