Prosecution Insights
Last updated: April 19, 2026
Application No. 18/498,897

PROVIDING AUGMENTED REALITY IN ASSOCIATION WITH LIVE EVENTS

Status: Final Rejection (§103)
Filed: Oct 31, 2023
Examiner: TELAN, MICHAEL R
Art Unit: 2426
Tech Center: 2400 (Computer Networks)
Assignee: Snap Inc.
OA Round: 2 (Final)

Grant Probability: 42% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
With Interview: 69%

Examiner Intelligence

Career Allow Rate: 42% (grants 42% of resolved cases; 176 granted / 417 resolved; -15.8% vs TC avg)
Interview Lift: +27.0% (strong; allow rate for resolved cases is 42% without an interview vs 69% with one)
Avg Prosecution: 3y 6m typical timeline (36 applications currently pending)
Total Applications: 453 across all art units (career history)
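
The headline figures reduce to simple arithmetic. As a sanity check, here is a minimal sketch (Python) that reproduces them from the counts shown above; the 27-point interview lift is taken from the dashboard as given, since how the tool segments interviewed vs. non-interviewed cases is not disclosed here:

```python
# Reproducing the examiner stat-card figures from the counts shown above.
# The 27-point interview lift is taken from the dashboard as given; the
# tool's segmentation of interviewed vs. non-interviewed cases is not shown.

granted = 176
resolved = 417

career_allow_rate = granted / resolved           # 0.422 -> displayed as "42%"
print(f"Career allow rate: {career_allow_rate:.1%}")

interview_lift = 0.27                            # "+27.0%" from the dashboard
with_interview = career_allow_rate + interview_lift
print(f"Estimated allow rate with interview: {with_interview:.1%}")  # ~69%
```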

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 417 resolved cases.
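
The "vs TC avg" deltas are simple differences against a Tech Center baseline. Back-solving from the four figures above, the baseline works out to a flat 40% for every statute; the sketch below (Python) treats that 40% estimate as an inference from the displayed numbers, not as documented methodology:

```python
# Back-deriving the "vs TC avg" deltas shown above. The flat 40% Tech Center
# baseline is inferred from the displayed figures (rate - delta = 40% for all
# four statutes); the tool's actual methodology is not documented here.

TC_AVG_ESTIMATE = 0.40  # inferred baseline, apparently the same for every statute

examiner_rates = {"101": 0.072, "103": 0.656, "102": 0.136, "112": 0.096}

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```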

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed November 26, 2025 have been fully considered but they are not persuasive. With regard to claim 1, Applicant submits that the cited prior art does not teach the amendments to the claim. Claim 1 is rejected under 35 USC §103 over a combination of Patadia et al. (US 2015/0319505) and Sharma et al. (US 2016/0379410).

As presented in the claim rejections, Patadia teaches: causing, based on the timeline data, an output device to display the first live video together with content concurrently with the live event ([0049], “In another example, a content moment may present an opportunity for a user to receive a recommendation to explore additional content associated with the media program. For example, metadata associated with media program 206 may indicate that time point 210-2 corresponds to a portion of media program 206 in which a natural disaster is depicted and/or described. Accordingly, management facility 102 may identify, based on the metadata, content moment 208-2 as corresponding to a part in media program 206 in which a natural disaster is depicted. Management facility 102 may generate supplemental content stream 204 such that supplemental content instance 212-2 may include information associated with additional content (e.g., news reports, documentaries, follow up stories, etc.) and/or one or more tools that may facilitate a user accessing the additional content associated with the natural disaster. Accordingly, when a user is consuming media program 206 during a playback of primary media content stream 202, the user may, for example, view a depiction of the natural disaster beginning at time point 210-2 and be concurrently presented with a recommendation to explore additional content associated with the natural disaster by way of supplemental content instance 212-2 during a concurrent playback of supplemental content stream 204.” [0050], Fig. 2), each of the plurality of client devices configured to capture respective second live video ([0071], “Access device 604 may facilitate access by a user to content (e.g., media content and/or supplemental content) provided by media content provider subsystem 602. … To illustrate, access device 604 may present and/or record a media program at the direction of a user.” That is, client devices may record media programs.), and each of the plurality of client devices being configured to display the respective second live video together with the content concurrently with the live event based on the indication of the timeline data ([0049], “Accordingly, when a user is consuming media program 206 during a playback of primary media content stream 202, the user may, for example, view a depiction of the natural disaster beginning at time point 210-2 and be concurrently presented with a recommendation to explore additional content associated with the natural disaster by way of supplemental content instance 212-2 during a concurrent playback of supplemental content stream 204.” [0050], Fig. 2).
Sharma teaches live video having been captured by a camera, and providing augmented reality content ([0007], “The processor is configured to acquire video data from the camera sensor or a video file, identify at least one region of interest within the video data, and generate augmented reality data for the at least one region of interest without receiving user input, with the augmented reality data being contextually related to the at least one region of interest.” [0014], “Referring additionally to FIG. 2, an augmented reality processing technique is now described. The processor 112 collects frames of video data, optionally in real time (Block 202), optionally from a camera sensor 118,….” [0029]). Sharma additionally teaches a client device comprising a camera configured to capture live video ([0013], “The electronic device 100 may be a smartphone, tablet, augmented reality headset, or other suitable electronic device. The electronic device 100 includes a processor 112 having an optional display 112, an optional non-volatile storage unit 116, an optional camera sensor 118….” [0014], “The processor 112 collects frames of video data, optionally in real time (Block 202), optionally from a camera sensor 118, and may optionally operate the audio transducer 135 to obtain an audio recording contemporaneous with the frames of video data.”).

In view of Sharma’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Patadia such that the first live video has been captured by a camera, such that the content is augmented reality content, and such that each of the plurality of client devices comprises a respective second camera configured to capture respective second live video in association with the live event. The modification would serve to allow users to access and view augmented reality content via client devices, thereby enhancing the user experience.

Applicant is directed to the following claim rejections for analysis as to how previously-cited prior art teaches the amended claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 10-11, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Patadia et al. (US 2015/0319505) and Sharma et al. (US 2016/0379410).
Regarding claim 1, Patadia teaches a system comprising: at least one processor; at least one memory component storing instructions that, when executed by the at least one processor ([0089]-[0090]), cause the at least one processor to perform operations comprising: accessing first live video provided to a media server ([0026], “Management facility 102 may receive and add media programs to the catalogue in any suitable manner. For example, management facility 102 may receive media programs from one or more media content sources. A media content source may include any entity that generates, produces, and/or provides media content that may be streamed by a media service provider to users of a media service provided by the media service provider.” [0076], “FIG. 7 illustrates an exemplary implementation 700 of system 100 wherein a content delivery network 702 and an application server 704 are communicatively coupled to access device 604 by way of a network 706. Management facility 102 and storage facility 104 may each be implemented by content delivery network 702, application server 704, and/or access device 604.” Figs. 1, 7), the first live video in association with a live event ([0029], “For example, management facility 102 may receive a live content stream from a media content source. Management facility 102 may then facilitate distribution of the live content stream to a media content access device associated with a user in any suitable manner.”); accessing timeline data stored by the media server, the timeline data for synchronizing effects with respect to the first live video ([0033], “To facilitate providing such supplemental content, management facility 102 may be configured to generate a supplemental content stream associated with the primary media content stream.” [0049], “In another example, a content moment may present an opportunity for a user to receive a recommendation to explore additional content associated with the media program. For example, metadata associated with media program 206 may indicate that time point 210-2 corresponds to a portion of media program 206 in which a natural disaster is depicted and/or described. Accordingly, management facility 102 may identify, based on the metadata, content moment 208-2 as corresponding to a part in media program 206 in which a natural disaster is depicted. Management facility 102 may generate supplemental content stream 204 such that supplemental content instance 212-2 may include information associated with additional content (e.g., news reports, documentaries, follow up stories, etc.) and/or one or more tools that may facilitate a user accessing the additional content associated with the natural disaster. Accordingly, when a user is consuming media program 206 during a playback of primary media content stream 202, the user may, for example, view a depiction of the natural disaster beginning at time point 210-2 and be concurrently presented with a recommendation to explore additional content associated with the natural disaster by way of supplemental content instance 212-2 during a concurrent playback of supplemental content stream 204.” [0050], Fig. 2); causing, based on the timeline data, an output device to display the first live video together with content concurrently with the live event ([0049], [0050], Fig. 2, as quoted above), the content having been preselected for the live event ([0049], [0050], Fig. 2, as quoted above); and providing, to a plurality of client devices, an indication of the timeline data ([0014], “In certain examples, the media content delivery system may facilitate concurrent streaming of the primary media content stream and the supplemental content stream to a media content access device over a network.” [0030], “In certain examples, the one or more operations may include management facility 102 preparing the media program to be streamed to one or more types of media content access devices and/or over one or more types of network connections.”), each of the plurality of client devices configured to capture respective second live video ([0071], “Access device 604 may facilitate access by a user to content (e.g., media content and/or supplemental content) provided by media content provider subsystem 602. … To illustrate, access device 604 may present and/or record a media program at the direction of a user.” That is, client devices may record media programs.), and each of the plurality of client devices being configured to display the respective second live video together with the content concurrently with the live event based on the indication of the timeline data ([0049], “Accordingly, when a user is consuming media program 206 during a playback of primary media content stream 202, the user may, for example, view a depiction of the natural disaster beginning at time point 210-2 and be concurrently presented with a recommendation to explore additional content associated with the natural disaster by way of supplemental content instance 212-2 during a concurrent playback of supplemental content stream 204.” [0050], Fig. 2).

Patadia does not expressly teach the first live video having been captured by a first camera. Patadia also does not expressly teach that the content is augmented reality content. Patadia also does not expressly teach each of the plurality of client devices comprising a respective second camera configured to capture respective second live video in association with the live event.

Sharma teaches live video having been captured by a camera, and providing augmented reality content ([0007], “The processor is configured to acquire video data from the camera sensor or a video file, identify at least one region of interest within the video data, and generate augmented reality data for the at least one region of interest without receiving user input, with the augmented reality data being contextually related to the at least one region of interest.” [0014], “Referring additionally to FIG. 2, an augmented reality processing technique is now described. The processor 112 collects frames of video data, optionally in real time (Block 202), optionally from a camera sensor 118,….” [0029]). Sharma additionally teaches a client device comprising a camera configured to capture live video ([0013], “The electronic device 100 may be a smartphone, tablet, augmented reality headset, or other suitable electronic device. The electronic device 100 includes a processor 112 having an optional display 112, an optional non-volatile storage unit 116, an optional camera sensor 118….” [0014], “The processor 112 collects frames of video data, optionally in real time (Block 202), optionally from a camera sensor 118, and may optionally operate the audio transducer 135 to obtain an audio recording contemporaneous with the frames of video data.”).

In view of Sharma’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Patadia such that the first live video has been captured by a camera, such that the content is augmented reality content, and such that each of the plurality of client devices comprises a respective second camera configured to capture respective second live video in association with the live event. The modification would serve to allow users to access and view augmented reality content via client devices, thereby enhancing the user experience.

The grounds of rejection of claim 1 under 35 USC §103 are similarly applied to claim 10.

Regarding claim 19, Patadia teaches a non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor ([0089]-[0090]). The grounds of rejection of claim 1 under 35 USC §103 are similarly applied to the remaining limitations of claim 19.
Regarding claims 2, 11, and 20, the combination further teaches wherein the indication of the timeline data comprises a cue ID for synchronizing the augmented reality content with the first live video for display on the output device, and for synchronizing the augmented reality content with the respective second live video for display on the plurality of client devices (Patadia: [0039], “The supplemental content instances may be included in the secondary content stream such that the supplemental content instances are temporally aligned with corresponding content moments of a media program included in the primary media content stream.” [0047], “As further shown in FIG. 2, supplemental content stream 204 may include supplemental content instances 212 (e.g., 212-1 through 212-3) that are temporally aligned with content moments 208 and time points 210 of primary media content stream 202.” [0048]-[0049], Fig. 2. Sharma: [0007], [0029]).

Claim(s) 3-5, 9, 12-14, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Patadia, Sharma, and Charlton et al. (US 2021/0389850).

Regarding claims 3 and 12, the combination further teaches wherein the augmented reality content is provided by an augmented reality content item that is preselected for the live event (Patadia: [0049], [0050], Fig. 2, as quoted in the rejection of claim 1 above). However, the combination does not expressly teach the augmented reality content item being associated with an interaction system that is separate from the media server. Charlton teaches an augmented reality content item being associated with an interaction system that is separate from a media server ([0044], “The Application Program Interface (API) server 118 receives and transmits data (e.g., commands and other payloads, e.g. AR content generators and associated metadata) between the client device 106 and the application servers 116 and between the developer device 108 and the application servers 116.” [0045], “As will be described in further detail, the AR content generators generated by developers may be uploaded from the developer device 108 to the SDK server system 104 where they are aggregated by the effects submission service 120 into collections of AR content generators, associated with individual developers and stored in database 130.” Fig. 1). In view of Charlton’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination such that the augmented reality content item is associated with an interaction system that is separate from the media server. The modification would serve to facilitate distribution and management of augmented reality content items to viewers.

Regarding claims 4 and 13, the combination teaches the limitations specified above, and additionally provides a teaching for a desktop (Patadia: [0025], “Management facility 102 may be configured to perform one or more management operations associated with delivering content to a media content access device (e.g., a set-top box device, a mobile phone device, a tablet computer, a laptop computer, a desktop computer, a smart television, etc.).”). However, the combination does not expressly teach: loading the augmented reality content item for a desktop application running on the media server, the desktop application being configured to interface with a media server designer for visualizing, designing and sequencing effects for the live event, wherein causing the output device to display the first live video together with augmented reality content further is based on loading the augmented reality content item for the desktop application. Charlton teaches: loading an augmented reality content item for an application running on a media server ([0038], “The system also may include a developer device 108 that hosts effects software 112 that can be used by a developer to create custom AR content generators for use with the app 110. The effects software 112 may be provided by the SDK provider as downloadable software or a cloud service via the SDK server system 104.” [0039], “The SDK server system 104 includes application programming interfaces (APIs) with functions that can be called or invoked by the app 110 or the effects software 112.” [0040]-[0041], Fig. 1), the application being configured to interface with a media server designer for visualizing, designing and sequencing effects ([0038], [0039], [0040]-[0041], as quoted above; [0020], “In some examples, an augmented reality content generator includes augmented reality (or ‘AR’) content configured to modify or transform image data presented within a GUI of a client device in some way. For example, complex additions or transformations to the content images may be performed using AR content generator data, such as adding rabbit ears to the head of a person in a video clip, adding floating hearts with background coloring to a video clip, altering the proportions of a person's features within a video clip, adding enhancements to landmarks in a scene being viewed on a client device or many numerous other such transformations. This includes both real-time modifications that modify an image as it is captured using a camera associated with the client device, which is then displayed on a screen of the client device with the AR content generator modifications, as well as modifications to stored content, such as video clips in a gallery that may be modified using AR content generators.” Fig. 1), wherein causing an output device to display video together with augmented reality content further is based on loading the augmented reality content item for the application ([0038], [0039], [0040]-[0041], [0020], Fig. 1, as quoted above). In view of Charlton’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include loading the augmented reality content item for a desktop application running on the media server, the desktop application being configured to interface with a media server designer for visualizing, designing and sequencing effects for the live event, wherein causing the output device to display the first live video together with augmented reality content further is based on loading the augmented reality content item for the desktop application. The modification would enable a means for creating and modifying augmented reality content for distribution and presentation to viewers.
Regarding claims 5 and 14, the combination further teaches the operations further comprising: loading the augmented reality content item for the plurality of client devices, wherein display of the respective second live video together with the augmented reality content is further based on loading the augmented reality content item for the plurality of client devices (Patadia: [0014], “In certain examples, the media content delivery system may facilitate concurrent streaming of the primary media content stream and the supplemental content stream to a media content access device over a network.” [0030], “In certain examples, the one or more operations may include management facility 102 preparing the media program to be streamed to one or more types of media content access devices and/or over one or more types of network connections.” [0049], “Accordingly, when a user is consuming media program 206 during a playback of primary media content stream 202, the user may, for example, view a depiction of the natural disaster beginning at time point 210-2 and be concurrently presented with a recommendation to explore additional content associated with the natural disaster by way of supplemental content instance 212-2 during a concurrent playback of supplemental content stream 204.” [0050], Fig. 2).

Regarding claims 9 and 18, the combination further teaches wherein the camera is a video camera (Sharma: [0007], “The processor is configured to acquire video data from the camera sensor….”). However, the combination does not expressly teach that the output device is a light-emitting diode (LED) display. Charlton teaches an output device that is a light-emitting diode (LED) display ([0108], “The user output components 926 may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth.”). In view of Charlton’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination such that the output device is a light-emitting diode (LED) display. The modification would provide a combined system with additional and/or alternative options for content output.

Claim(s) 6 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Patadia, Sharma, and Smith et al. (US 10325410).

Regarding claims 6 and 15, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein the live event is presented at a venue, and wherein the camera and the output device are positioned at the venue. Smith teaches wherein a live event is presented at a venue, and wherein a camera and an output device are positioned at the venue (Col. 2, lines 14-36, “Embodiments described herein provide enhanced computer- and network-based methods, techniques, and systems for enhancing a live sporting event using augmented reality (‘AR’) in real-time (or near real-time). For the purposes of this application, real-time means that delays are not perceptible to a user and that as plays in the sporting event are occurring, the enhancements are taking place. Example embodiments provide a Augmented Reality Live Game Enhancement System (‘ARLGES’), which enables users to see augmentations appear as if they are ‘live’ on the sports field as they are watching the game. The user is able to see and interact with these augmentations using his or her mobile device and without taking his or her eyes off of the field. In some deployments, the mobile device is a cellular smartphone with an (optional) modified virtual headset. The user can view the augmentations using the camera of the phone (holding the phone up to look through the camera at the field). In other deployments the user is aided by a virtual headset such as GOOGLE™ Cardboard, or Samsung Gear virtual reality ‘glasses.’ Other virtual reality and augmented reality devices, both existing and as developed in the future, may be used with this enhanced AR system to render the augmentations.”). In view of Smith’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein the live event is presented at a venue, and wherein the camera and the output device are positioned at the venue. The modification would enable viewers to access and view augmented reality content at venues, thereby enhancing the user experience.

Claim(s) 7-8 and 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Patadia, Sharma, Smith, and Charlton.

Regarding claims 7 and 16, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein for each of the plurality of client devices, access to the augmented reality content is based on a geofenced area corresponding to the venue. Charlton teaches wherein for each of a plurality of client devices, access to augmented reality content is based on a geofenced area corresponding to a location ([0046], “The metadata associated with each AR content generator may include an AR content generator ID (a unique identifier used for all transactions involving the AR content generator), a public AR content generator name, an AR content generator icon image, any preview media, visibility settings, preferred activation camera (e.g. front or rear-facing camera) and the date the AR content generator was last submitted through the effects software 112. The associated metadata may also for example specify visibility (i.e. is the AR content generator public or private, or on or off), a ‘Start date’ and ‘End date’ to limit any AR content generator availability within a group, as well as advanced scheduling options, e.g. recurring times (daily, weekly, monthly, yearly). The associated metadata may also for example specify geofencing limitations, so that an AR content generator is only available in certain locations.”). In view of Charlton’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein for each of the plurality of client devices, access to the augmented reality content is based on a geofenced area corresponding to the venue. The modification would serve to improve management functionality of augmented reality content access.
Regarding claims 8 and 17, the combination further teaches the operations further comprising: causing the first live video together with the augmented reality content to be broadcast to a second plurality of client devices (Patadia: [0014], “In certain examples, the media content delivery system may facilitate concurrent streaming of the primary media content stream and the supplemental content stream to a media content access device over a network.” [0071], “Access device 604 may facilitate access by a user to content (e.g., media content and/or supplemental content) provided by media content provider subsystem 602. … To illustrate, access device 604 may present and/or record a media program at the direction of a user.”).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL R TELAN whose telephone number is (571) 270-5940. The examiner can normally be reached 9:30AM-6:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi, can be reached at (571) 272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL R TELAN/
Primary Examiner, Art Unit 2426
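
The rejections above turn on timeline data and a cue ID that synchronize preselected AR content both with the first live video on an output device and with each client's own second live video. Purely to make that claim language concrete, here is a minimal hypothetical sketch (Python); every name and data structure is invented for illustration, and nothing below is taken from the application, Patadia, Sharma, Charlton, or Smith:

```python
# Hypothetical sketch of the cue-ID synchronization recited in claims 2/11/20.
# All names and data structures are illustrative; the application's actual
# implementation is not disclosed in this office action.

from dataclasses import dataclass

@dataclass
class TimelineEntry:
    cue_id: str          # identifier sent to the output device and client devices
    event_time_s: float  # offset into the live event
    ar_content: str      # AR content item preselected for the live event

# Timeline data stored by the media server ("accessing timeline data ...
# for synchronizing effects with respect to the first live video").
TIMELINE = [
    TimelineEntry("cue-001", 12.0, "confetti_overlay"),
    TimelineEntry("cue-002", 95.5, "stadium_banner"),
]

def cues_due(elapsed_s: float, already_sent: set[str]) -> list[TimelineEntry]:
    """Server side: select the cues whose event time has passed."""
    return [e for e in TIMELINE
            if e.event_time_s <= elapsed_s and e.cue_id not in already_sent]

def render_on_client(cue_id: str) -> str:
    """Client side: resolve a received cue ID to the preselected AR item
    and composite it with the client's own (second) live camera video."""
    entry = next(e for e in TIMELINE if e.cue_id == cue_id)
    return f"client video + {entry.ar_content} (cue {cue_id})"

sent: set[str] = set()
for elapsed in (10.0, 13.0, 100.0):          # simulated event clock
    for entry in cues_due(elapsed, sent):
        sent.add(entry.cue_id)               # the "indication of the timeline data"
        print(render_on_client(entry.cue_id))
```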

Prosecution Timeline

Oct 31, 2023: Application Filed
Sep 29, 2025: Non-Final Rejection (§103)
Nov 18, 2025: Examiner Interview Summary
Nov 18, 2025: Applicant Interview (Telephonic)
Nov 26, 2025: Response Filed
Jan 28, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604066: SYSTEMS AND METHODS FOR GENERATING NOTIFICATION INTERFACES BASED ON MEDIA BROADCAST ACCESS EVENTS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12598361: VIDEO OPTIMIZATION PROXY SYSTEM AND METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12598352: VIDEO PRESENTATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12581137: VIDEO MANAGEMENT SYSTEM FOR VIDEO FILES AND LIVE STREAMING CONTENT
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12549801: LYRIC VIDEO DISPLAY METHOD AND DEVICE, ELECTRONIC APPARATUS AND COMPUTER-READABLE MEDIUM
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 42%
With Interview: 69% (+27.0%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate

Based on 417 resolved cases by this examiner. Grant probability derived from career allow rate.
