Prosecution Insights
Last updated: April 19, 2026
Application No. 18/471,919

SYSTEMS AND METHODS FOR SYNCHRONOUS GROUP DEVICE TRANSMISSION OF LIVE STREAMING MEDIA AND RELATED USER INTERFACES

Status: Non-Final OA (§103)
Filed: Sep 21, 2023
Examiner: VU, NGOC K
Art Unit: 2421
Tech Center: 2400 — Computer Networks
Assignee: Discovery Com LLC
OA Round: 3 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 11m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 72% (above average) — 181 granted / 253 resolved, +13.5% vs TC avg
Interview Lift: +13.9% (moderate) — allow rate with vs. without an interview, across resolved cases with an interview
Typical Timeline: 3y 11m average prosecution; 15 applications currently pending
Career History: 268 total applications across all art units
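The headline figures above can be reproduced with simple arithmetic. The sketch below assumes the tool rounds to the nearest whole percent and that the interview-adjusted probability is the career allow rate plus the reported lift in percentage points; the vendor's exact methodology is not stated.

```python
# Reproducing the dashboard's headline figures (a sketch; rounding and
# lift methodology are assumptions, not documented by the tool).
granted, resolved = 181, 253           # examiner's career history
allow_rate = granted / resolved        # 0.7154... -> displayed as 72%

interview_lift = 0.139                 # reported +13.9% interview lift
with_interview = allow_rate + interview_lift  # -> displayed as 85%

print(f"Career allow rate: {allow_rate:.0%}")
print(f"With interview:    {with_interview:.0%}")
```

This is consistent with the Prosecution Projections section, which notes that the grant probability is derived directly from the career allow rate.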

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 18.4% (-21.6% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 253 resolved cases.
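The per-statute deltas appear to be the examiner's statute-specific success rate minus a common Tech Center baseline of about 40%. That baseline is inferred from the figures above (each delta is consistent with it), not stated by the tool, so treat it as an assumption in this sketch:

```python
# Per-statute delta = examiner's statute-specific rate minus the Tech
# Center baseline. The common ~40% baseline is inferred from the deltas
# shown above, not documented by the tool (an assumption).
tc_avg = 0.400
examiner_rates = {"101": 0.049, "103": 0.465, "102": 0.184, "112": 0.174}

for statute, rate in examiner_rates.items():
    delta = round((rate - tc_avg) * 100, 1)
    print(f"§{statute}: {rate:.1%} ({delta:+.1f}% vs TC avg)")
```

The practical reading: this examiner sustains §103 rejections slightly more often than the Tech Center average, but §101, §102, and §112 rejections far less often, which is why the current §103 posture matters most.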

Office Action

§103
Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/29/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 25-33 and 35-38 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 25, 26, and 28-31 are rejected under 35 U.S.C. 103 as being unpatentable over Akram et al.
(US 20110246908 A1) in view of Kobayashi (US 20230353800 A1).

Regarding claim 25, Akram teaches a computer-implemented method for interacting with a live multimedia stream, the computer-implemented method comprising operations including: presenting, on a multimedia streaming platform associated with a computer system, the live multimedia stream in a general virtual media streaming session (presenting, on a multimedia stream platform of a system, a live video stream in a virtual world - see FIGs. 1-3, 14a-14b and 15a-15b, abstract, 0079, 0081); implementing, using one or more processors of the computer system and responsive to detecting a predetermined indication, one or more features associated with a joint experience in the general virtual media streaming session (implementing, via one or more processors of the system as shown in FIGs. 1-3, one or more features, e.g., 1518, 1520, 1522, 1524 or 1532, associated with a shared environment responsive to detecting acceptance of an invitation - see FIGs. 1-3, 15a-17b; 0126, 0127, 0131), wherein the joint experience is a cheering activity (the shared environment includes a cheering activity - see 0108, 0124, 0131, 0134); and detecting, at the multimedia streaming platform, interaction input with the joint experience from the one or more devices (detecting user interaction with the shared environment from one or more electronic media devices associated with one or more users - see FIGs. 12a-12b, 0108, 0109, 0121).

Akram does not teach a digital support meter configured to visually represent an aggregate level of cheering activity across one or more devices associated with one or more participants; and adjusting, responsive to detecting the interaction input, a characteristic associated with the digital support meter, wherein the adjusting the characteristic comprises imparting a visual effect on one or more digital support bars of the digital support meter based on the interaction input.
However, Kobayashi teaches a digital support meter 2502 that visually represents an aggregate level of cheering activity across one or more devices 100 associated with one or more users, and adjusting, responsive to detecting the interaction input, a characteristic associated with the digital support meter 2502, wherein the adjusting the characteristic comprises imparting a visual effect on one or more digital support bars of the digital support meter based on the interaction input (i.e., the shaded bar of the digital support meter 2502, representing a calculated index based on the user's reaction information, changes from time to time). See FIG. 9, 0034, 0044.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Akram by including a digital support meter configured to visually represent an aggregate level of cheering activity across one or more devices associated with one or more participants, and adjusting, responsive to detecting the interaction input, a characteristic associated with the digital support meter, wherein the adjusting the characteristic comprises imparting a visual effect on one or more digital support bars of the digital support meter based on the interaction input, as taught or suggested by Kobayashi, to provide the remote viewers with a rich and immersive experience of the event.

Regarding claim 26, Akram teaches wherein the one or more features include a user-actionable icon (one of the features, e.g., emote 1532, includes a menu of different action icons - see 0131 and FIG. 17b).

Regarding claim 28, Akram teaches wherein detecting the interaction input comprises detecting interaction with the user-actionable icon (detecting the user interaction with one of the action icons - see FIGs. 12a-12b, 0108, 0109, 0121).
Regarding claim 29, Akram in combination with Kobayashi teaches that a magnitude of the visual effect imparted on the one or more digital support bars is directly proportional to a frequency of the interaction with the user-actionable icon (the meter 2502 visually shows, via the shaded bars, the value of the index calculated on the basis of the aggregated number of receptions of user reaction information from one or more users - see Kobayashi: 0044; the user interaction via one of the action icons - see Akram: FIGs. 12a-12b, 0108, 0109, 0121).

Regarding claim 30, Akram as modified by Kobayashi teaches that imparting the visual effect comprises filling in the one or more digital support bars (e.g., the shaded bars in 2502 - see Kobayashi: FIG. 9).

Regarding claim 31, Akram teaches implementing the one or more features dynamically, at one or more points during the live multimedia stream, and on a subset of user devices connected to the general virtual media streaming session (implementing one or more features, e.g., 1518, 1520, 1522, 1524 or 1532, during the live stream and on a plurality of user devices connected to the shared environment - see FIGs. 1-3, 15a-17b; 0003, 0005, 0080, 0126, 0127, 0131).

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Akram et al. (US 20110246908 A1) in view of Kobayashi (US 20230353800 A1) and further in view of Gottlieb (US 20140229866 A1).

Regarding claim 27, the combination of Akram and Kobayashi does not teach presenting a notification in the general virtual media streaming session that provides instructions to interact with the user-actionable icon. Gottlieb discloses presenting a pop-up message to users in the audience to request some form of interaction by the users via one of the options in the "call-to-action" window 1000. See FIG. 10, 0165, 0166.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Akram and Kobayashi by presenting a notification in the general virtual media streaming session that provides instructions to interact with the user-actionable icon, as disclosed or suggested by Gottlieb, to increase the effectiveness of engaging remote audiences in collaborative activities.

Claim 32 is rejected under 35 U.S.C. 103 as being unpatentable over Akram et al. (US 20110246908 A1) in view of Kobayashi (US 20230353800 A1) and further in view of Soman et al. (US 20220329881 A1).

Regarding claim 32, Akram teaches that the multimedia stream corresponds to a live sporting event (see FIGs. 14a-17g, 0112). Akram does not teach implementing the one or more features on the subset of user devices for which a preference designation has been identified for a team engaged in the live sporting event that is approaching a key milestone. Soman teaches implementing icons/options related to an incident or specific event in the live program displayed on one or more user devices; for example, the UI 402 may display different icons or options related to other detected events, such as a team scoring a goal in the football game. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Akram and Kobayashi by implementing the one or more features on the subset of user devices for which a preference designation has been identified for a team engaged in the live sporting event that is approaching a key milestone, as taught or suggested by Soman, for the purpose of enabling the users to react to exciting events, further enhancing the viewing experience.

Claims 33 and 35-38 are rejected under 35 U.S.C. 103 as being unpatentable over Akram et al. (US 20110246908 A1) in view of Soman et al. (US 20220329881 A1).
Regarding claim 33, Akram teaches a computer-implemented method for interacting with a live multimedia stream, the computer-implemented method comprising operations including: presenting, on a multimedia streaming platform associated with a computer system, the live multimedia stream in a general virtual media streaming session (presenting, on a multimedia stream platform of a system, a live video stream in a virtual world - see FIGs. 1-3, 14a-14b and 15a-15b, abstract, 0079, 0081); implementing, using one or more processors of the computer system and responsive to detecting a predetermined indication, one or more features associated with a joint experience in the general virtual media streaming session (implementing, via one or more processors of the system as shown in FIGs. 1-3, one or more features 1518, 1520, 1522, 1524 or 1532 associated with a shared environment responsive to detecting acceptance of an invitation - see FIGs. 1-3, 15a-17b; 0126, 0127, 0131), wherein the joint experience is a reaction activity and wherein the one or more features include one or more selectable emoticons (the user can select one of the emoticons from the emote menu to show one or more reactions - see FIGs. 17a-d, 0131-0132); detecting, at the multimedia streaming platform, interaction input with the joint experience from one or more devices associated with one or more participants (detecting user interaction with the shared environment from one or more electronic media devices associated with one or more users, e.g., selecting one of the options from an emote menu 1720 - see FIGs.
12a-12b, 0108, 0109, 0121); and adjusting, responsive to detecting the interaction input, a characteristic associated with the one or more selectable emoticons, wherein the adjusting the characteristic comprises adjusting an appearance of the selectable emoticons based upon an event occurring in the live multimedia stream (altering an appearance of the emoticons 1720 by presenting a textual description in the center region 1734 associated with one of the emoticons, e.g., 1722 or 1724, selected by the user, indicating a particular reaction to content of a sporting event in the live video stream - see FIGs. 17c-d, 0133, 0134).

Akram does not further teach wherein the one or more selectable emoticons are created based on content occurring in the live multimedia stream. Soman teaches generating one or more selectable icons based on content occurring in a live stream. See 0052, 0057, 0068. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Akram by creating the one or more selectable emoticons based on content occurring in the live multimedia stream, as taught or suggested by Soman, to provide the viewer with tailored choices for expressing emotions or reactions to the content, in order to enhance viewer engagement.

Regarding claim 35, Akram teaches wherein the one or more features are implemented dynamically in the general virtual media streaming session (each user can change the view of the virtual world from different camera angles or change a rendering of the shared environment, such as scene elements or furniture - see FIGs. 14a-17g, 0110-0111, 0120).

Regarding claim 36, Akram teaches wherein the live multimedia stream corresponds to a live sporting event (see FIGs. 14a-17g, 0112).
Regarding claim 37, Akram in view of Soman teaches wherein the predetermined indication corresponds to detecting an event such as an injury to a star player in the football game (see Soman: 0057).

Regarding claim 38, Akram in view of Soman teaches storing the one or more selectable emoticons in an emoticon bank (the application 106 associated with the content provider, downloaded on the electronic device, comprises the icons/emoticons - see Soman: 0025, 0043).

Allowable Subject Matter

Claims 1, 4, 21, 22, 23 and 24 are allowed. The following is a statement of reasons for the indication of allowable subject matter: the prior art, either alone or in combination, fails to teach or fairly suggest the combined elements, as a whole, of a computer-implemented method for interacting with a live multimedia stream as recited in claim 1.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NGOC K VU, whose telephone number is (571) 272-7306. The examiner can normally be reached Monday & Thursday, 9AM-6PM EST; Tuesday, Wednesday & Friday, out of office. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, NATHAN FLYNN, can be reached at 571-272-1915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/NGOC K VU/
Primary Examiner, Art Unit 2421

Prosecution Timeline

Sep 21, 2023: Application Filed
May 13, 2025: Non-Final Rejection (§103)
Jun 23, 2025: Applicant Interview (Telephonic)
Jun 23, 2025: Examiner Interview Summary
Jul 31, 2025: Response Filed
Sep 24, 2025: Final Rejection (§103)
Dec 01, 2025: Response after Non-Final Action
Dec 29, 2025: Request for Continued Examination
Jan 14, 2026: Response after Non-Final Action
Feb 17, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581153: SERVER SYSTEM, APPLICATION PROGRAM DISTRIBUTION SERVER, VIEWING TERMINAL, CONTENT VIEWING METHOD, APPLICATION PROGRAM, DISTRIBUTION METHOD, AND APPLICATION PROGRAM DISTRIBUTION METHOD (granted Mar 17, 2026; 2y 5m to grant)
Patent 12549778: ANGLE-OF-VIEW SWITCHING METHOD, APPARATUS AND SYSTEM FOR FREE ANGLE-OF-VIEW VIDEO, AND DEVICE AND MEDIUM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12532044: METHOD AND SYSTEM FOR ACCESSING USER RELEVANT MULTIMEDIA CONTENT WITHIN MULTIMEDIA FILES (granted Jan 20, 2026; 2y 5m to grant)
Patent 12501095: ADAPTIVE PLAYBACK METHOD AND DEVICE FOR VIDEO (granted Dec 16, 2025; 2y 5m to grant)
Patent 12464173: VIRTUAL GIFT DISPLAY (granted Nov 04, 2025; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 85% (+13.9%)
Median Time to Grant: 3y 11m
PTA Risk: High
Based on 253 resolved cases by this examiner. Grant probability derived from career allow rate.
