Prosecution Insights
Last updated: April 19, 2026
Application No. 18/779,709

SUPPLEMENTAL AUDIO GENERATION SYSTEM IN AN AUDIO-ONLY MODE

Status: Final Rejection under §103
Filed: Jul 22, 2024
Examiner: TELAN, MICHAEL R
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: Adeia Guides Inc.
OA Round: 2 (Final)

Predictions:
Grant probability: 42% (Moderate)
Expected OA rounds: 3-4
Estimated time to grant: 3y 6m
Grant probability with interview: 69%

Examiner Intelligence

Career allowance rate: 42% of resolved cases (176 granted / 417 resolved; -15.8% vs Tech Center average)
Interview lift: +27.0% higher allowance rate for resolved cases with an interview than without
Typical timeline: 3y 6m average prosecution
Currently pending: 36 applications
Career history: 453 total applications across all art units

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 417 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed February 9, 2026 have been fully considered but they are not persuasive. With regard to claim 1, Applicant submits that the cited prior art does not teach the amendments to the claim. Remarks, pp. 8-9.

Claim 1 is rejected under 35 U.S.C. §103 over a combination of Lewis et al. (US 2017/0041680) and Bhardwaj et al. (US 2014/0365675). Lewis teaches a method comprising: causing to output content in an audio-only mode ([0024], “the mechanisms described herein can determine that the user device requesting the content is in a background playback mode in which the audio data included in the content item is to be presented, but the visual data of the content item is either inhibited from being presented or is unlikely to be viewed by a user even if it were to be presented.” [0039], “At 304, process 300 can determine if the user device requesting the content is being used in a background playback mode.” [0044]-[0045], Fig. 3); determining, during playback of the content, that the played content reaches a section containing visual information that is not reproduced in the audio-only mode ([0025], “The mechanisms can evaluate any suitable properties of the one or more content items to determine whether the one or more content items are suitable for presentation in background playback mode, such as the presence or absence of a lengthy introduction (e.g., dialogue) to the content item before music starts, the presence or absence of periods of silence or only noise, the presence of periods of dialogue, the audio quality of the content item, repetitiveness in the audio data (e.g., where the audio data is unvaried over a relatively large portion of the content item), etc.
For example, if the particular video requested by the smartphone includes a long silence at the end (e.g., with visual information and/or user interface elements prompting a user to subscribe to content from a user associated with the particular video) and/or the audio data is of poor quality, the mechanisms described herein can determine that the particular video is not suitable for background playback.” [0046], “At 310, process 300 can determine if the requested content is suitable for background playback.” [0048], “In some embodiments, process 300 (and/or any other suitable process) can determine the suitability of a content item for background playback based at least in part on the amount and/or length of silences in audio of the content item.” Fig. 3); and based at least in part on determining that the played content reaches the section containing visual information that is not reproduced in the audio-only mode ([0046], “At 310, process 300 can determine if the requested content is suitable for background playback.” [0048], “In some embodiments, process 300 (and/or any other suitable process) can determine the suitability of a content item for background playback based at least in part on the amount and/or length of silences in audio of the content item.” Fig. 
3): accessing metadata of the section containing visual information that is not reproduced in the audio-only mode ([0049], “In some embodiments, when a content item is transmitted to a user device and/or when a content item is presented by a user device, a portion at the beginning and/or the end that includes silence or dialogue with no music can be automatically skipped over by the user device based on metadata associated with the content item and/or based on instructions transmitted to the user device to skip a particular portion of a content item.” [0075], “In some embodiments, process 400 can generate metadata that can be used to identify a content item that is to be substituted for the content item that is not suitable for background playback.” Fig. 4); identifying a supplemental audio portion based at least in part on the metadata of the section containing visual information that is not reproduced in the audio-only mode ([0059], “if process 300 determines that the requested content it not suitable for background playback (‘NO’ at 312), process 300 can move to 316.” [0060], “At 316, process 300 can cause the requested content to be skipped and/or can cause replacement content that is suitable to be transmitted to the user device instead of the requested content.” [0062], “In some embodiments, if process 300 determines that a content item is not suitable for background playback, process 300 can cause a replacement content item to be transmitted in response to the request. Such a replacement item can include the same or similar content to the requested content item, but be more suitable for background playback.” [0075], “In some embodiments, process 400 can generate metadata that can be used to identify a content item that is to be substituted for the content item that is not suitable for background playback.” Figs. 
3-4); causing to output the supplemental portion instead of the audio-only mode for the section containing visual information that is not reproduced in the audio-only mode ([0059], “if process 300 determines that the requested content it not suitable for background playback (‘NO’ at 312), process 300 can move to 316.” [0060], “At 316, process 300 can cause the requested content to be skipped and/or can cause replacement content that is suitable to be transmitted to the user device instead of the requested content.” [0062], “In some embodiments, if process 300 determines that a content item is not suitable for background playback, process 300 can cause a replacement content item to be transmitted in response to the request. Such a replacement item can include the same or similar content to the requested content item, but be more suitable for background playback.” Fig. 3). Bhardwaj teaches causing to output supplemental portion until completion of the supplemental portion, and causing to resume output of content ([0078], “The scenario 400 proceeds to the bottom portion where playback of the content stream 402 resumes from the resume point 414 and utilizing the resume state 420. For example, playback skips from the end of the supplementary content 410 to the resume point 414, without playing the segment 404b or portions of the segment 406b that occur before the resume point 414. Utilizing the resume state 420 provides a data context for processing and output of the segment 406b from the resume point 414, as well as subsequent portions of the content stream 402.” [0111], “embodiments enable supplementary content to be inserted at any specified point in a content stream, such as between boundaries of a segment and/or a period. Embodiments further enable playback of a content stream to resume after supplementary content has been inserted.”). 
Taking the teachings together, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lewis with Bhardwaj to enable causing to output the supplemental portion instead of the audio-only mode for the section until completion of the supplemental portion, and causing to resume output of the content in the audio-only mode. The modification would serve to facilitate a return to user-preferred content. The modification would thereby improve the user experience.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3-4, 8, 11, 13-14, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lewis et al. (US 2017/0041680) and Bhardwaj et al. (US 2014/0365675).
Regarding claim 1, Lewis teaches a method comprising: causing to output content in an audio-only mode ([0024], “the mechanisms described herein can determine that the user device requesting the content is in a background playback mode in which the audio data included in the content item is to be presented, but the visual data of the content item is either inhibited from being presented or is unlikely to be viewed by a user even if it were to be presented.” [0039], “At 304, process 300 can determine if the user device requesting the content is being used in a background playback mode.” [0044]-[0045], Fig. 3); determining, during playback of the content, that the played content reaches a section containing visual information that is not reproduced in the audio-only mode ([0025], “The mechanisms can evaluate any suitable properties of the one or more content items to determine whether the one or more content items are suitable for presentation in background playback mode, such as the presence or absence of a lengthy introduction (e.g., dialogue) to the content item before music starts, the presence or absence of periods of silence or only noise, the presence of periods of dialogue, the audio quality of the content item, repetitiveness in the audio data (e.g., where the audio data is unvaried over a relatively large portion of the content item), etc. 
For example, if the particular video requested by the smartphone includes a long silence at the end (e.g., with visual information and/or user interface elements prompting a user to subscribe to content from a user associated with the particular video) and/or the audio data is of poor quality, the mechanisms described herein can determine that the particular video is not suitable for background playback.” [0046], “At 310, process 300 can determine if the requested content is suitable for background playback.” [0048], “In some embodiments, process 300 (and/or any other suitable process) can determine the suitability of a content item for background playback based at least in part on the amount and/or length of silences in audio of the content item.” Fig. 3); and based at least in part on determining that the played content reaches the section containing visual information that is not reproduced in the audio-only mode ([0046], “At 310, process 300 can determine if the requested content is suitable for background playback.” [0048], “In some embodiments, process 300 (and/or any other suitable process) can determine the suitability of a content item for background playback based at least in part on the amount and/or length of silences in audio of the content item.” Fig. 
3): accessing metadata of the section containing visual information that is not reproduced in the audio-only mode ([0049], “In some embodiments, when a content item is transmitted to a user device and/or when a content item is presented by a user device, a portion at the beginning and/or the end that includes silence or dialogue with no music can be automatically skipped over by the user device based on metadata associated with the content item and/or based on instructions transmitted to the user device to skip a particular portion of a content item.” [0075], “In some embodiments, process 400 can generate metadata that can be used to identify a content item that is to be substituted for the content item that is not suitable for background playback.” Fig. 4); identifying a supplemental audio portion based at least in part on the metadata of the section containing visual information that is not reproduced in the audio-only mode ([0059], “if process 300 determines that the requested content it not suitable for background playback (‘NO’ at 312), process 300 can move to 316.” [0060], “At 316, process 300 can cause the requested content to be skipped and/or can cause replacement content that is suitable to be transmitted to the user device instead of the requested content.” [0062], “In some embodiments, if process 300 determines that a content item is not suitable for background playback, process 300 can cause a replacement content item to be transmitted in response to the request. Such a replacement item can include the same or similar content to the requested content item, but be more suitable for background playback.” [0075], “In some embodiments, process 400 can generate metadata that can be used to identify a content item that is to be substituted for the content item that is not suitable for background playback.” Figs. 
3-4); causing to output the supplemental portion instead of the audio-only mode for the section containing visual information that is not reproduced in the audio-only mode ([0059], “if process 300 determines that the requested content it not suitable for background playback (‘NO’ at 312), process 300 can move to 316.” [0060], “At 316, process 300 can cause the requested content to be skipped and/or can cause replacement content that is suitable to be transmitted to the user device instead of the requested content.” [0062], “In some embodiments, if process 300 determines that a content item is not suitable for background playback, process 300 can cause a replacement content item to be transmitted in response to the request. Such a replacement item can include the same or similar content to the requested content item, but be more suitable for background playback.” Fig. 3). Lewis does not expressly teach causing to output the supplemental portion instead of the audio-only mode for the section until completion of the supplemental portion, and causing to resume output of the content in the audio-only mode. Bhardwaj teaches causing to output supplemental portion until completion of the supplemental portion, and causing to resume output of content ([0078], “The scenario 400 proceeds to the bottom portion where playback of the content stream 402 resumes from the resume point 414 and utilizing the resume state 420. For example, playback skips from the end of the supplementary content 410 to the resume point 414, without playing the segment 404b or portions of the segment 406b that occur before the resume point 414. Utilizing the resume state 420 provides a data context for processing and output of the segment 406b from the resume point 414, as well as subsequent portions of the content stream 402.” [0111], “embodiments enable supplementary content to be inserted at any specified point in a content stream, such as between boundaries of a segment and/or a period. 
Embodiments further enable playback of a content stream to resume after supplementary content has been inserted.”). In view of Bhardwaj’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lewis to enable causing to output the supplemental portion instead of the audio-only mode for the section until completion of the supplemental portion, and causing to resume output of the content in the audio-only mode. The modification would serve to facilitate a return to user-preferred content. The modification would thereby improve the user experience.

Regarding claims 3 and 13, the combination further teaches a method comprising: accessing the metadata of the section containing visual information that is not reproduced in the audio-only mode for supplemental information associated with the content, wherein the supplemental information includes at least one of an actor, a character, music, a commentary, a rating, bonus content, trivia, or social media network information; identifying audio information corresponding with the accessed supplemental information; and causing to output the supplemental portion based at least in part on the accessed supplemental information (Lewis: [0075], “In some embodiments, process 400 can generate metadata that can be used to identify a content item that is to be substituted for the content item that is not suitable for background playback. For example, the metadata can include a content identifier of a song in the content item which can be used to identify a content item that includes the same song but that is more suitable for background playback.”).
Regarding claims 4 and 14, the combination further teaches a method comprising: accessing a data of the section containing visual information that is not reproduced in the audio-only mode for supplemental information associated with the content, wherein the supplemental information includes at least one of an actor, a character, music, a commentary, a rating, bonus content, trivia, or social media network information; identifying audio information corresponding with the accessed supplemental information; and causing to output the supplemental portion based at least in part on the accessed supplemental information (Lewis: [0075], “In some embodiments, process 400 can generate metadata that can be used to identify a content item that is to be substituted for the content item that is not suitable for background playback. For example, the metadata can include a content identifier of a song in the content item which can be used to identify a content item that includes the same song but that is more suitable for background playback.”). However, the combination as presently combined does not expressly teach accessing a manifest of the section for supplemental information associated with the content. Bhardwaj teaches accessing a manifest for supplemental information associated with content ([0045], “in at least some embodiments, the modified manifest 216 is utilized to manage playback of the content stream 202 and the supplementary content 212 ….”) [0047], “The modified manifest 216 further indicates that the supplementary content 212 is inserted between the segment 206a and the segment 204b.”). In view of Bhardwaj’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include accessing a manifest of the section containing visual information that is not reproduced in the audio-only mode for supplemental information associated with the content. 
The modification would produce a combined system enabled with an additional and/or alternative means of identifying supplemental content related to a current program.

Regarding claims 8 and 18, the combination further teaches a method comprising: determining advertisement-related content associated with the content; and causing to output the supplemental portion based at least in part on the advertisement-related content (Lewis: [0058], “In some embodiments, when an advertisement is determined not to be suitable for background playback, a substitute advertisement can be provided in its place.”).

Regarding claim 11, Lewis teaches a system comprising: circuitry ([0031]-[0034], Fig. 2) configured to execute the method of claim 1. The grounds of rejection of claim 1 under 35 U.S.C. §103 are similarly applied to the remaining limitations of claim 11.

Claim(s) 2 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lewis, Bhardwaj, and Chang et al. (US 2009/0259473).

Regarding claims 2 and 12, the combination teaches the limitations specified above; however, the combination does not expressly teach a method comprising: determining text present in frames of the section containing visual information that is not reproduced in the audio-only mode; and causing to output the text in an audio format. Chang teaches determining text present in frames of content containing visual information that is not reproduced in audio, and causing to output the text in an audio format ([0012], “FIG. 1 is a schematic illustration of an example STB 100 that, in addition to receiving and presenting video programs to a user, detects textual portions of a video program that are not readily consumable by a visually impaired person, converts said textual portions into corresponding audio data, and presents the thus generated audio data to the visually impaired person to aid in the consumption of the video program.
Example textual portions include, but are not limited to a series of video frames presenting a substantially static text-based information screen (e.g., the example screen snapshot of FIG. 2), a series of video frames presenting scrolling text within all or a portion of a screen (e.g., the example screen snapshots of FIGS. 3A and 3B), and/or text-based information that describes a series of video frames that do not have associated spoken words (e.g., when a panoramic scene is being displayed but no person is speaking, and the text-based information describes the scene being displayed). It will be readily understood that such text-based portions and any other type(s) of text-based portions of a video program are not readily consumable by visually impaired persons.”). In view of Chang’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include determining text present in frames of the section containing visual information that is not reproduced in the audio-only mode; and causing to output the text in an audio format. The modification would serve to facilitate consumption of text content while consuming content in audio-only mode.

Claim(s) 5-7 and 15-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lewis, Bhardwaj, and Trollope et al. (US 9491522).
Regarding claims 5 and 15, the combination further teaches a method comprising: accessing data of the section containing visual information that is not reproduced in the audio-only mode for supplemental information associated with the content, wherein the supplemental information includes at least one of an actor, a character, music, a commentary, a rating, bonus content, trivia, or social media network information; identifying audio information corresponding with the accessed supplemental information; and causing to output the supplemental portion based at least in part on the accessed supplemental information (Lewis: [0075], “In some embodiments, process 400 can generate metadata that can be used to identify a content item that is to be substituted for the content item that is not suitable for background playback. For example, the metadata can include a content identifier of a song in the content item which can be used to identify a content item that includes the same song but that is more suitable for background playback.”). However, the combination does not expressly teach accessing a subtitle of the section for supplemental information associated with the content. Trollope teaches accessing a subtitle of a section for supplemental information associated with content (Col. 17, line 64 to col. 18, line 12, “In some implementations, a capture module 1122 can receive media data related to a program or a channel, such as video data, audio data, electronic program guide data, metadata, subtitles or captioning content, etc., as described above in connection with, for example, FIGS. 1 and 2. Additionally or alternatively, capture module 1122 can extract various media data from content provided from content sources as described in connection with, for example, FIGS. 1 and 2. Such extracted media data can include, for example, audio fingerprints, subtitles, etc. 
This information can be stored, for example, in a database (not shown) for use by the search application executing on front-end server 1120 in identifying channels, identifying program and/or other program-related information, obtaining supplemental content items, and/or various other operations.” Col. 18, lines 29-43, “keyword extraction module 1130 can extract keywords from captured audio data, video data, and/or subtitle information and obtain supplemental content items from multiple content sources (e.g., content sources 1114).”). In view of Trollope’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include accessing a subtitle of the section containing visual information that is not reproduced in the audio-only mode for supplemental information associated with the content. The modification would produce a combined system enabled with an additional and/or alternative means of identifying supplemental content related to a current program.

Regarding claims 6 and 16, the combination further teaches a method comprising: accessing a data of the section containing visual information that is not reproduced in the audio-only mode for supplemental information associated with the content, wherein the supplemental information includes at least one of an actor, a character, music, a commentary, a rating, bonus content, trivia, or social media network information; identifying audio information corresponding with the accessed supplemental information; and causing to output the supplemental portion based at least in part on the accessed supplemental information (Lewis: [0075], “In some embodiments, process 400 can generate metadata that can be used to identify a content item that is to be substituted for the content item that is not suitable for background playback.
For example, the metadata can include a content identifier of a song in the content item which can be used to identify a content item that includes the same song but that is more suitable for background playback.”). However, the combination does not expressly teach accessing a closed caption data of the section for supplemental information associated with the content. Trollope teaches accessing a closed caption data of a section for supplemental information associated with content (Col. 17, line 64 to col. 18, line 12, “In some implementations, a capture module 1122 can receive media data related to a program or a channel, such as video data, audio data, electronic program guide data, metadata, subtitles or captioning content, etc., as described above in connection with, for example, FIGS. 1 and 2. Additionally or alternatively, capture module 1122 can extract various media data from content provided from content sources as described in connection with, for example, FIGS. 1 and 2. Such extracted media data can include, for example, audio fingerprints, subtitles, etc. This information can be stored, for example, in a database (not shown) for use by the search application executing on front-end server 1120 in identifying channels, identifying program and/or other program-related information, obtaining supplemental content items, and/or various other operations.”). In view of Trollope’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include accessing a closed caption data of the section containing visual information that is not reproduced in the audio-only mode for supplemental information associated with the content. The modification would produce a combined system enabled with an additional and/or alternative means of identifying supplemental content related to a current program. 
Regarding claims 7 and 17, the combination teaches the limitations specified above; however, the combination does not expressly teach: accessing one or more social media networks to retrieve comments or posts related to the content; and causing to output the supplemental portion based at least in part on the retrieved comments or posts. Trollope teaches accessing one or more social media networks to retrieve comments or posts related to content, and causing to output supplemental content based at least in part on the retrieved comments or posts (Col. 13, lines 32-39, “As shown in FIG. 7, recommendation cards 720, 730, and 740 each include a supplemental content item. For example, in response to determining the keywords ‘John Smith’ and ‘hydraulic fracking,’ cards 720 and 730 that provide text snippets, thumbnail images, links, and/or other supplemental content relating to ‘John Smith’ and card 740 that provides social media snippets relating to the topic ‘hydraulic fracking’ can be presented to the user of the client application.”). In view of Trollope’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include accessing one or more social media networks to retrieve comments or posts related to the content, and causing to output the supplemental portion based at least in part on the retrieved comments or posts. The modification would serve to enable a combined system to provide supplemental content items based on social media content. The modification would thereby improve the user experience.

Claim(s) 9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lewis, Bhardwaj, and Huttner et al. (US 11513759).
Regarding claims 9 and 19, the combination teaches the limitations specified above; however, the combination does not expressly teach preceding the supplemental audio with a special sound or tone to inform the user that the supplemental audio is not part of the main content. Huttner teaches preceding supplemental audio with a special sound or tone (Col. 2, lines 31-20, “In one or more embodiments, a listener may be given an option to have a device present supplemental content. The option may be presented in the form of a ‘soundmark’ (e.g., a notification sound such as a tone, beep, series of sounds, etc.) that indicate the presence of available supplemental content. … A soundmark may be distinct for different types of content. For example, one soundmark may indicate to a listener that footnote or endnote content is available. Another soundmark may indicate to a listener that images are available. Another soundmark may indicate to a listener that charts or graphs are available. Another soundmark may indicate to a listener that additional audio (e.g., music) is available. Another soundmark may indicate to a listener that content promotions are available.”). In view of Huttner’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include preceding the supplemental audio with a special sound or tone to inform the user that the supplemental audio is not part of the main content. The modification would serve to facilitate user access to supplemental content while listening to audio content.

Claim(s) 10 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Lewis, Bhardwaj, and Hattery et al. (US 2020/0077147).
Regarding claims 10 and 20, the combination teaches the limitations specified above; however, the combination does not expressly teach including delimiter words before and after the supplemental audio portion to indicate that the audio that follows the delimiter words is not part of the content.

Hattery teaches including delimiter words before supplemental content to indicate that the content that follows the delimiter words is not part of the content ([0019], “FIG. 2B illustrates a scenario showing what may occur when the zombie show is nearing a break 215, according to particular embodiments. The social-networking system 222 may anticipate the start time of a break in the television program. At a predetermined amount of time prior to the anticipated start time of the break, the social-networking system 222 may transmit a notification 231 to the user's device 230. The notification 231 may be surfaced to the user 201 through an application installed on the device 230, such as a social-networking application associated with the social-networking system 222, text message, phone call, e-mail, or any other suitable communication channels. The notification 231 may include a message that informs the user 201 that live streaming content related to the zombie show 215 is about to begin.”).

In view of Hattery’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include delimiter words before and after the supplemental audio portion to indicate that the audio that follows the delimiter words is not part of the content. The modification would facilitate user recognition of supplemental content, thereby improving the user experience.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

/MICHAEL R TELAN/
Primary Examiner, Art Unit 2426

Prosecution Timeline

Jul 22, 2024
Application Filed
Aug 07, 2025
Non-Final Rejection — §103
Nov 10, 2025
Interview Requested
Nov 19, 2025
Examiner Interview Summary
Nov 19, 2025
Applicant Interview (Telephonic)
Feb 09, 2026
Response Filed
Mar 03, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604066
SYSTEMS AND METHODS FOR GENERATING NOTIFICATION INTERFACES BASED ON MEDIA BROADCAST ACCESS EVENTS
2y 5m to grant Granted Apr 14, 2026
Patent 12598361
VIDEO OPTIMIZATION PROXY SYSTEM AND METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12598352
VIDEO PRESENTATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12581137
VIDEO MANAGEMENT SYSTEM FOR VIDEO FILES AND LIVE STREAMING CONTENT
2y 5m to grant Granted Mar 17, 2026
Patent 12549801
LYRIC VIDEO DISPLAY METHOD AND DEVICE, ELECTRONIC APPARATUS AND COMPUTER-READABLE MEDIUM
2y 5m to grant Granted Feb 10, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
42%
Grant Probability
69%
With Interview (+27.0%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 417 resolved cases by this examiner. Grant probability derived from career allow rate.
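The headline figures above appear to follow simple arithmetic: the 42% grant probability is the examiner's career allow rate (176 granted of 417 resolved), and the 69% with-interview figure adds the +27.0-point interview lift. A minimal sketch of that calculation, assuming the dashboard uses exactly this model (the vendor's actual methodology is not disclosed):

```python
# Reconstructing the dashboard's headline statistics from the figures shown.
# All inputs come from the page; the rounding scheme is an assumption.
granted, resolved = 176, 417            # examiner's career grant totals
interview_lift_pts = 27.0               # stated lift, in percentage points

career_allow_rate = granted / resolved  # ~0.422
grant_prob = round(career_allow_rate * 100)            # 42 (%)
with_interview = round(grant_prob + interview_lift_pts)  # 69 (%)
```

Note the lift is additive in percentage points, not a relative multiplier; a 27% relative increase on 42% would yield roughly 53%, not the 69% shown.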
