Prosecution Insights
Last updated: April 19, 2026
Application No. 18/441,182

SYSTEMS AND METHODS FOR ASSOCIATING CONTEXT TO SUBTITLES DURING LIVE EVENTS

Status: Final Rejection (§103)
Filed: Feb 14, 2024
Examiner: NGUYEN, THUONG
Art Unit: 2416
Tech Center: 2400 — Computer Networks
Assignee: Adeia Guides Inc.
OA Round: 4 (Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 5-6
Projected Time to Grant: 4y 3m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (above average; 446 granted / 654 resolved; +10.2% vs TC avg)
Interview Lift: +32.1% allow rate for resolved cases with an interview vs. without
Typical Timeline: 4y 3m average prosecution; 65 applications currently pending
Career History: 719 total applications across all art units
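The headline figures in this panel are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming the "vs TC avg" delta is a plain percentage-point difference (the helper function and variable names are illustrative, not from any real USPTO data schema):

```python
# Hedged sketch: recomputing the headline examiner statistics shown above.
# The counts (446 granted, 654 resolved) come from this panel; everything
# else here is an illustrative assumption.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(446, 654)
print(f"Career allow rate: {career:.1f}%")   # ~68.2%, reported as 68%

# The "+10.2% vs TC avg" delta implies a Tech Center average of roughly:
tc_avg = career - 10.2
print(f"Implied TC average: {tc_avg:.1f}%")  # ~58.0%
```

The interview-lift figure (+32.1%) would be the same kind of delta, computed between the allow rates of the with-interview and without-interview subsets of the 654 resolved cases.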

Statute-Specific Performance

§101: 16.3% (-23.7% vs TC avg)
§103: 49.5% (+9.5% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 654 resolved cases

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This action is responsive to the Remarks filed on 12/17/25. Claims 29, 40, and 50 are amended. Claims 34-35 and 49 are cancelled. Claims 51 and 52 are new. Claims 29-33, 36-43, 46-48, and 50-52 are presented for examination.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 29-33, 36-43, and 46-48 are rejected under 35 U.S.C. 103 as being unpatentable over Niekrasz, U.S. Patent/Pub. No. 
US 2019/0327103 A1 in view of Vezina, U.S. Patent/Pub. No. US 11,314,756 B2, and further in view of Huang, US 2023/0035155 A1. As to claim 29, Niekrasz teaches a method comprising: generating a first summary associated with a first subject using a first group of subtitles related to a first video segment of the piece of media content (Niekrasz, page 1, paragraph 8; page 2, paragraph 9; i.e., [0008] generating abstractive summaries of meetings, the computing system comprising: produce, based on the transcript of the meeting, a data structure that comprises utterance features; determine, based on the transcript of the meeting, temporal bounds of a plurality of activity episodes within the meeting, wherein for each respective activity episode of the plurality of activity episodes; generating a plurality of episode summaries, for each respective activity episode of the plurality of activity episodes; and produce an episode summary for the respective activity episode that is dependent on the determined conversational activity type associated with the respective activity episode); generating a second summary associated with a second subject using a second group of subtitles related to a second video segment of the piece of media content (Niekrasz, page 1, paragraph 8; page 2, paragraph 9; i.e., [0008] generating abstractive summaries of meetings, the computing system comprising: produce, based on the transcript of the meeting, a data structure that comprises utterance features; determine, based on the transcript of the meeting, temporal bounds of a plurality of activity episodes within the meeting, wherein for each respective activity episode of the plurality of activity episodes; generating a plurality of episode summaries, for each respective activity episode of the plurality of activity episodes; and produce an episode summary for the respective activity episode that is dependent on the determined conversational activity type associated with the respective activity 
episode); storing the first summary and the second summary in a database comprising a plurality of entries, wherein the plurality of entries associates one or more summaries with one or more subjects (Niekrasz, figure 1 & 3); receiving a first audio segment related to a third video segment of the piece of media content (Niekrasz, page 1, paragraph 8; page 2, paragraph 9; i.e., [0008] determine, based on the transcript of the meeting, temporal bounds of a plurality of activity episodes within the meeting, wherein for each respective activity episode of the plurality of activity episodes; and produce an episode summary for the respective activity episode that is dependent on the determined conversational activity type associated with the respective activity episode); generating a first subtitle using the first audio segment (Niekrasz, page 1, paragraph 8; page 2, paragraph 9; i.e., [0008] generating abstractive summaries of meetings, the computing system comprising: produce, based on the transcript of the meeting, a data structure that comprises utterance features; generating a plurality of episode summaries, for each respective activity episode of the plurality of activity episodes; and produce an episode summary for the respective activity episode that is dependent on the determined conversational activity type associated with the respective activity episode); determining that a first entry of the plurality of entries associates at least a portion of the first subtitle with the first subject (Niekrasz, figure 1 & 3). 
But Niekrasz failed to teach the claim limitation wherein generating for display, a piece of media content, wherein: the piece of media content is a video conference; and the piece of media content comprises a plurality of video segments; generating for display, an interface, the interface comprising: the third video segment, wherein the interface displays the third video segment live; the first subtitle, wherein the interface displays the first subtitle with a first attribute; the first summary, wherein the interface displays the first summary with the first attribute; and the second summary, wherein the interface displays the second summary with a second attribute, different than the first attribute, and the interface displays the first subtitle, the first summary, the second summary, and the third video segment of the piece of media content at the same time.

However, Vezina teaches the limitation wherein the second summary, wherein the interface displays the second summary with a second attribute, different than the first attribute, and the interface displays the first subtitle, the first summary, the second summary, and the third video segment of the piece of media content at the same time (Vezina, figure 5; col 4, lines 10-23; col 8, lines 12-26; col 10, lines 5-24; i.e., provide a different font for the promoted data items. The user may more easily analyze and process the varying data types to determine differences between the information sources; data items related to the production cost are provided in a different font than data items related to other data types. Although demonstrated in the example of FIG. 6 as bolding and increasing the font size for the data type of interest. 
These other operations may include placing the data item of interest in a location that is in a better location of the display to be acknowledged by the user, may be highlighted, provided in a different color font, or promoted for the user in any other similar manner; structure, the user may request a comparison summary for the party, wherein the summary may present values for each data type). It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz to substitute comparison service from Vezina for extracted phrases from Niekrasz to efficiently and effectively identify relevant information from the presentation (Vezina, col 1, lines 28-30). However, Huang teaches the limitation wherein generating for display, a piece of media content, wherein: the piece of media content is a video conference; and the piece of media content comprises a plurality of video segments (Huang, page 3, paragraph 42; page 8, paragraph 89; page 9, paragraph 97-99; i.e., [0099] For example, the video clip and transcript may be downloaded as a video and text summary of the conference by a user (e.g., using the user device 422 to download from the web server 420). For example, the video clip and the transcript of the video clip may be presented to a user by transmitting the video clip and the transcript. In some implementations, the steps 510, 512, 514, and 516 may be repeated to generate many different video clips that highlight different portions of the conference); the first summary, wherein the interface displays the first summary with the first attribute (Huang, figure 16 & 19; page 3, paragraph 42; page 8, paragraph 89; page 9, paragraph 97-99; i.e., [0042] extracting a summary from a conference recording transcript using natural language processing techniques. The summary may be presented to the user as highlighted text, and the user can optionally make modifications. 
The final highlighted transcript may be used to generate a brief text summary. The highlighted transcript timestamps are used to generate video clips from a video recording of the conference that may be used as video summary of the conference); the second summary, wherein the interface displays the second summary with a second attribute, different than the first attribute, and the interface displays the first subtitle, the first summary, the second summary, and the third video segment of the piece of media content at the same time (Huang, figure 16 & 19; page 3, paragraph 42; page 8, paragraph 89; page 9, paragraph 97-99; i.e., [0042] The summary may be presented to the user as highlighted text, and the user can optionally make modifications. The final highlighted transcript may be used to generate a brief text summary. The highlighted transcript timestamps are used to generate video clips from a video recording of the conference that may be used as video summary of the conference; [0089] NLP server 430 extracts the summary from the transcript text. Seventh, the web server 420 may present 470 the transcript summary as highlighted text that may be used to generate summary video clips of the conference. Eighth, the web server 420 may download 480 the highlighted transcript and video).

It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz to substitute unified communications as a service from Huang for extracted phrases from Niekrasz to deliver a complete communication experience regardless of physical location (Huang, page 1, paragraph 2). 
As to claim 30, Niekrasz-Vezina-Huang teaches the method as recited in claim 29, further comprising: receiving a second audio segment related to a fourth video segment of the piece of media content (Niekrasz, page 4, paragraph 34-35; i.e., [0034] Transcription engine 106 provides speech-to-text conversion on an audio stream or recording of the conversation occurring in a meeting and produces a text transcript 107. associated with a corresponding participant who spoke the utterance, such as by using speaker identification techniques to analyze the meeting audio; [0035] In some examples, audio transcoder 104 and/or transcription engine 106 may be distributed in whole or in part to audio input devices 120 such that audio input devices 120 may perform the audio transcoding and transcription); generating a second subtitle using the second audio segment (Niekrasz, page 4, paragraph 34-35; i.e., [0034] Transcription engine 106 provides speech-to-text conversion on an audio stream or recording of the conversation occurring in a meeting and produces a text transcript 107; [0035] distributed in whole or in part to audio input devices 120 such that audio input devices 120 may perform the audio transcoding and transcription. In such cases, audio input devices 120 may send meeting audio 130 as transcoded audio for transcription by transcription engine 106); determining that a second entry of the plurality of entries associates at least a portion of the second subtitle with the second subject (Niekrasz, page 4, paragraph 45; i.e., [0045] It is recognized that spoken communication can typically be broken into episodes. Summarizer 108 identifies the temporal bounds of these episodes, identifies the activity type of each of them, and then applies a type-specific summarization process to each episode. 
Summarizer 108 may then collect each episode summary into a sequential, time-indexed summary of the meeting); and generating an updated interface, wherein the updated interface displays the second subtitle, the second summary, the first summary, and the fourth video segment of the piece of media content at the same time (Niekrasz, figure 1 & 3).

As to claim 31, Niekrasz-Vezina-Huang teaches the method as recited in claim 29, wherein the interface is generated for display in response to determining that the first entry of the plurality of entries associates at least a portion of the first subtitle with the first subject (Niekrasz, page 4, paragraph 34-35; i.e., [0034] Transcription engine 106 provides speech-to-text conversion on an audio stream or recording of the conversation occurring in a meeting and produces a text transcript 107; [0035] In such cases, audio input devices 120 may send meeting audio 130 as transcoded audio for transcription by transcription engine 106).

As to claim 32, Niekrasz-Vezina-Huang teaches the method as recited in claim 29. But Niekrasz-Huang failed to teach the claim limitation wherein displaying, by a first device, the interface on a first screen. However, Vezina teaches the limitation wherein displaying, by a first device, the interface on a first screen (Vezina, figure 4). It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz-Huang to substitute comparison service from Vezina for extracted phrases from Niekrasz-Huang to efficiently and effectively identify relevant information from the presentation (Vezina, col 1, lines 28-30).

As to claim 33, Niekrasz-Vezina-Huang teaches the method as recited in claim 32. But Niekrasz-Huang failed to teach the claim limitation wherein displaying, by a second device, the interface on a second screen. 
However, Vezina teaches the limitation wherein displaying, by a second device, the interface on a second screen (Vezina, figure 4). It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz-Huang to substitute comparison service from Vezina for extracted phrases from Niekrasz-Huang to efficiently and effectively identify relevant information from the presentation (Vezina, col 1, lines 28-30). As to claim 36, Niekrasz-Vezina-Huang teaches the method as recited in claim 29. But Niekrasz-Huang failed to teach the claim limitation wherein the first attribute corresponds to a type of font. However, Vezina teaches the limitation wherein the first attribute corresponds to a type of font (Vezina, col 8, lines 12-26; col 10, lines 5-24; i.e., data items related to the production cost are provided in a different font than data items related to other data types. Although demonstrated in the example of FIG. 6 as bolding and increasing the font size for the data type of interest. These other operations may include placing the data item of interest in a location that is in a better location of the display to be acknowledged by the user, may be highlighted, provided in a different color font, or promoted for the user in any other similar manner; structure, the user may request a comparison summary for the party, wherein the summary may present values for each data type such that they can be compared across the contractors). It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz-Huang to substitute comparison service from Vezina for extracted phrases from Niekrasz-Huang to efficiently and effectively identify relevant information from the presentation (Vezina, col 1, lines 28-30). As to claim 37, Niekrasz-Vezina-Huang teaches the method as recited in claim 29. 
But Niekrasz-Huang failed to teach the claim limitation wherein the first attribute corresponds to a color. However, Vezina teaches the limitation wherein the first attribute corresponds to a color (Vezina, col 8, lines 12-26; col 10, lines 5-24; i.e., data items related to the production cost are provided in a different font than data items related to other data types. Although demonstrated in the example of FIG. 6 as bolding and increasing the font size for the data type of interest. These other operations may include placing the data item of interest in a location that is in a better location of the display to be acknowledged by the user, may be highlighted, provided in a different color font, or promoted for the user in any other similar manner; structure, the user may request a comparison summary for the party, wherein the summary may present values for each data type such that they can be compared across the contractors). It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz-Huang to substitute comparison service from Vezina for extracted phrases from Niekrasz-Huang to efficiently and effectively identify relevant information from the presentation (Vezina, col 1, lines 28-30). As to claim 38, Niekrasz-Vezina-Huang teaches the method as recited in claim 29. But Niekrasz-Huang failed to teach the claim limitation wherein the first attribute corresponds to a size. However, Vezina teaches the limitation wherein the first attribute corresponds to a size (Vezina, col 8, lines 12-26; col 10, lines 5-24; i.e., data items related to the production cost are provided in a different font than data items related to other data types. Although demonstrated in the example of FIG. 6 as bolding and increasing the font size for the data type of interest. 
In presenting the prioritized values, the identified data types may be highlighted, bolded, provided in a different font or size, promoted in the viewing space for the requesting user, or promoted in any other similar manner.). It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz-Huang to substitute comparison service from Vezina for extracted phrases from Niekrasz-Huang to efficiently and effectively identify relevant information from the presentation (Vezina, col 1, lines 28-30). As to claim 39, Niekrasz-Vezina-Huang teaches the method as recited in claim 29. But Niekrasz-Huang failed to teach the claim limitation wherein the first attribute corresponds to a text indicator. However, Vezina teaches the limitation wherein the first attribute corresponds to a text indicator (Vezina, col 8, lines 12-26; col 10, lines 5-24; i.e., data items related to the production cost are provided in a different font than data items related to other data types. Although demonstrated in the example of FIG. 6 as bolding and increasing the font size for the data type of interest. These other operations may include placing the data item of interest in a location that is in a better location of the display to be acknowledged by the user, may be highlighted, provided in a different color font, or promoted for the user in any other similar manner). It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz-Huang to substitute comparison service from Vezina for extracted phrases from Niekrasz-Huang to efficiently and effectively identify relevant information from the presentation (Vezina, col 1, lines 28-30). As to claim 48, Niekrasz-Vezina-Huang teaches the method as recited in claim 29. 
But Niekrasz-Vezina failed to teach the claim limitation wherein the first video segment of the piece of media content occurs before the second video segment of the piece of media content; and the second video segment of the piece of media content occurs before the third video segment of the piece of media content. However, Huang teaches the limitation wherein the first video segment of the piece of media content occurs before the second video segment of the piece of media content; and the second video segment of the piece of media content occurs before the third video segment of the piece of media content (Huang, page 2, paragraph 14; i.e., [0014] select a video excerpt from a video of the conference based on the respective timestamp of the selected string; and generate a video conference summary as a sequence of video excerpts from the video, including the selected video excerpt). It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz-Vezina to substitute unified communications as a service from Huang for extracted phrases from Niekrasz-Vezina to deliver a complete communication experience regardless of physical location (Huang, page 1, paragraph 2).

Claims 40-43 and 46-47 are directed to apparatus claims and do not further define over the limitations recited in claims 29-33 and 38-39. Therefore, claims 40-43 and 46-47 are also rejected for the reasons set forth for claims 29-33 and 38-39. Claim 50 is directed to a non-transitory medium claim and does not further define over the limitations recited in claim 29. Therefore, claim 50 is also rejected for the reasons set forth for claim 29.

Claims 51-52 are rejected under 35 U.S.C. 103 as being unpatentable over Niekrasz, U.S. Patent/Pub. No. US 2019/0327103 A1 in view of Vezina, U.S. Patent/Pub. No. 
US 11,314,756 B2, and Huang, US 2023/0035155 A1, and further in view of Castellucci, US 2021/0158586 A1.

As to claim 51, Niekrasz-Vezina-Huang teaches the method as recited in claim 29. But Niekrasz-Vezina-Huang failed to teach the claim limitation wherein the first subtitle, the first summary, and the second summary are overlaid over a portion of the live third video segment. However, Castellucci teaches the limitation wherein the first subtitle, the first summary, and the second summary are overlaid over a portion of the live third video segment (Castellucci, figure 5A-5B; page 1, paragraph 14; page 6, paragraph 59-60; i.e., [0014] video frame as part of a live feed of video content; [0059] FIG. 5A depicts an example video frame with a portion of subtitle text displayed across a background portion. Video frame 510 includes subtitle text positioned towards the bottom of video frame 510. The subtitle text and enhancement of the subtitle text increasing the contrast between predominant color portion 501 of the background and the portion of the subtitle text overlaying predominant color portion 501 will render the subtitle text more visible). It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz-Vezina-Huang to substitute cloud computing environment from Castellucci for cloud based application from Niekrasz-Vezina-Huang to reduce the similarity level during display of the video frame (Castellucci, page 1, paragraph 4).

As to claim 52, Niekrasz-Vezina-Huang teaches the method as recited in claim 29. But Niekrasz-Vezina-Huang failed to teach the claim limitation wherein the first subtitle, the first summary, and the second summary do not overlay any portion of the live third video segment. 
However, Castellucci teaches the limitation wherein the first subtitle, the first summary, and the second summary do not overlay any portion of the live third video segment (Castellucci, figure 4A-4B; page 1, paragraph 14; page 6, paragraph 57-58; i.e., [0014] video frame as part of a live feed of video content; [0057] Video frame 420 includes the subtitle text enhanced with a change to the font color tone, lightening the font color with respect to predominant color portion 401 and predominant color portion 405. The enhancement to the subtitle text results in more visible text). It would have been obvious to one of ordinary skill in the art before the effective date of the claimed invention to modify Niekrasz-Vezina-Huang to substitute cloud computing environment from Castellucci for cloud based application from Niekrasz-Vezina-Huang to reduce the similarity level during display of the video frame (Castellucci, page 1, paragraph 4).

Response to Arguments

Applicant's arguments filed 12/17/25 have been fully considered but they are not persuasive. Applicant argues in substance that:

A) With respect to claims 29, 40 & 50: Huang is silent regarding displaying any type of transcript or video segment at the same time as a live video segment. Instead, Huang requires obtaining the entire transcript of a conference after the conference has concluded. Further, Huang is silent regarding displaying anything at the same time that a live video segment is displayed. Accordingly, Huang fails to teach or render obvious displaying an interface comprising a first subtitle, a first summary, a second summary, and a live video segment, wherein the first subtitle, the first summary, the second summary, and the live video segment are displayed at the same time (page 9). 
In response to A): Huang does teach the claimed limitation of “the second summary, wherein the interface displays the second summary with a second attribute, different than the first attribute, and the interface displays the first subtitle, the first summary, the second summary, and the third video segment of at the same time” (Huang, figure 16 & 19; page 3, paragraph 42; page 8, paragraph 89; page 9, paragraph 97-99; i.e., [0042] Implementations of this disclosure address problems such as these by automatically extracting a summary from a conference recording transcript using natural language processing techniques. The summary may be presented to the user as highlighted text, and the user can optionally make modifications. The final highlighted transcript may be used to generate a brief text summary. The highlighted transcript timestamps are used to generate video clips from a video recording of the conference that may be used as video summary of the conference; [0089] NLP server 430 extracts the summary from the transcript text. Seventh, the web server 420 may present 470 the transcript summary as highlighted text that may be used to generate summary video clips of the conference. Eighth, the web server 420 may download 480 the highlighted transcript and video).

Clearly, figures 16 and 19 and paragraph 89 of Huang disclose displaying the video summary of the conference, video clips, transcript, and transcript summary in one display, which equates to “displaying at the same time”. Therefore, Huang meets the claim limitation.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Listing of Relevant Arts

Broidy, U.S. Patent/Pub. No. US 20190268465 A1 discloses a plurality of transcripts corresponding to a plurality of audio files. Bi, U.S. Patent/Pub. No. US 20190035091 A1 discloses multiple summaries presented in different formats.

Contact Information

THUONG NGUYEN, whose telephone number is (571) 272-3864, can normally be reached Monday-Friday, 9:00-6:00. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Noel Beharry, can be reached at 571-270-5630. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/THUONG NGUYEN/Primary Examiner, Art Unit 2416

Prosecution Timeline

Feb 14, 2024: Application Filed
Sep 17, 2024: Non-Final Rejection — §103
Jan 14, 2025: Applicant Interview (Telephonic)
Jan 14, 2025: Examiner Interview Summary
Jan 17, 2025: Response Filed
Mar 27, 2025: Final Rejection — §103
Jun 18, 2025: Response after Non-Final Action
Jul 01, 2025: Request for Continued Examination
Jul 07, 2025: Response after Non-Final Action
Sep 23, 2025: Non-Final Rejection — §103
Dec 17, 2025: Response Filed
Mar 10, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603743: CLOCK SYNCHRONIZATION METHOD AND RELATED APPARATUS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598609: TRANSMISSION METHOD, APPARATUS, FIRST COMMUNICATION NODE, SECOND COMMUNICATION NODE, AND MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12587405: MULTICAST LOCAL BREAKOUT FOR CUSTOMER PREMISE EQUIPMENT IN A 5G WIRELESS WIRELINE CONVERGENCE AT AN ACCESS GATEWAY FUNCTION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580991: MAINTAINING SESSION IDENTIFIERS ACROSS MULTIPLE WEBPAGES FOR CONTENT SELECTION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12550131: METHOD AND SYSTEM FOR SCHEDULING A POOL OF RESOURCES TO A PLURALITY OF USER EQUIPMENTS (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 68% (99% with interview; +32.1% lift)
Median Time to Grant: 4y 3m
PTA Risk: High
Based on 654 resolved cases by this examiner. Grant probability derived from career allow rate.
