Prosecution Insights
Last updated: April 19, 2026
Application No. 18/852,387

VIEW RENDERING METHOD AND APPARATUS, ELECTRONIC DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT

Status: Final Rejection (§103)
Filed: Sep 27, 2024
Examiner: TELAN, MICHAEL R
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 2 (Final)

Grant Probability: 42% (Moderate)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 69%

Examiner Intelligence

Career Allow Rate: 42% of resolved cases (176 granted / 417 resolved; -15.8% vs TC avg)
Interview Lift: strong, +27.0% for resolved cases with interview
Typical Timeline: 3y 6m avg prosecution; 36 currently pending
Career History: 453 total applications across all art units
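The headline allow-rate figure is just the grant ratio; as a quick arithmetic check, it can be recomputed from the two counts shown on the card (a minimal sketch using only those numbers):

```python
# Career allow rate, computed from the granted/resolved counts above.
granted = 176
resolved = 417

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 42.2%, shown as 42% on the card
```

The -15.8% delta against the Tech Center average would then imply a TC average allow rate of roughly 58% for this cohort.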

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 417 resolved cases
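The per-statute deltas are mutually consistent with a single Tech Center baseline: subtracting each delta from the corresponding examiner rate recovers the estimate. A sketch using only the figures shown above:

```python
# Recover the implied Tech Center average per statute: examiner rate - delta.
examiner_rate = {"§101": 7.2, "§103": 65.6, "§102": 13.6, "§112": 9.6}
delta_vs_tc   = {"§101": -32.8, "§103": 25.6, "§102": -26.4, "§112": -30.4}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute works out to 40.0 with these figures
```

That every row resolves to the same 40.0 suggests the dashboard plots one flat Tech Center estimate rather than per-statute averages.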

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claim(s) 1-8, 12-14, and 16-24 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-5, 12-14, 16-18, 21, and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sarkar et al. (US 11057444), Hallanan, Lauren. “Live Streaming 101: Understanding the Battles or PK Feature.” Medium, 17 Jan. 2019, medium.com/@laurenhallanan/live-streaming-101-understanding-the-battles-or-pk-feature-49e7d7bc2ff2 (hereinafter, “Hallanan”), and Roberts et al. (US 11471777).
Regarding claim 1, Sarkar teaches a view rendering method, executed by a viewer device, comprising: receiving a message sent from a server (Col. 3, lines 43-48, “the content provider module 102 or at least a portion thereof can be implemented using one or more computing devices or systems that include one or more servers, such as network servers or cloud servers.” Col. 4, lines 36-40, “the content provider module 102 or at least a portion thereof can be implemented using one or more computing devices or systems that include one or more servers, such as network servers or cloud servers.” Col. 8, lines 25-27, “Once a user is selected to join as co-broadcaster, the user selection module 204 is configured to send one or more invitations to the user.”), wherein the message is for indicating a beginning of live interaction on a current live stream channel (Col. 8, lines 25-60, “Once a user is selected to join as co-broadcaster, the user selection module 204 is configured to send one or more invitations to the user. In some embodiments, such invitations are provided to the user as messages sent to the user through the social networking system. … In some embodiments, when the user accepts the invitation, the user's computing device is instructed to capture and provide a separate live content stream to the content provider, as described above. Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C.”), and the message carries data associated with the live interaction (Col. 8, lines 25-60, “Once a user is selected to join as co-broadcaster, the user selection module 204 is configured to send one or more invitations to the user. In some embodiments, such invitations are provided to the user as messages sent to the user through the social networking system. … In some embodiments, when the user accepts the invitation, the user's computing device is instructed to capture and provide a separate live content stream to the content provider, as described above. Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C.”), wherein the data associated with the live interaction comprises viewer contributions of both streamers (Col. 10, lines 20-23, “In some embodiments, a user can select (e.g., tap) the comment section 318 to reveal additional comments in a comments overlay 320, as illustrated in the example of FIG. 3D.” Figs. 3C-3D), and a current session identification (Col. 10, lines 42-51, “In some embodiments, avatars corresponding to users that are live broadcasting can also be shown in the media tray region 408. For example, the media tray region 408 includes a set of layered avatars 412 corresponding to users Vanessa80 and Ellen who are co-broadcasting. In some embodiments, the avatar corresponding to the primary broadcaster (e.g., Vanessa80) is shown on top. In some embodiments, subsequent avatars included in the set of layered avatars 412 can be covered partially by a preceding avatar. Many variations are possible.” Fig. 4A); and rendering a live interaction view according to the data associated with the live interaction in response to pulling a co-stream of video streams of the live interaction (Col. 2, line 63 to col. 3, line 35, “A user (e.g., a viewer) operating a computing device can access the live content stream through the content provider. In response, the content provider encodes and provides data corresponding to the live content stream to the user's computing device over a network (e.g., the Internet) in real-time. The computing device can decode and present the live content stream, for example, through a display screen of the computing device.” Col. 8, lines 25-60, “Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C. In some embodiments, the merged live content stream is presented to users as a split screen that is divided into a first region and a second region.” Fig. 3C).

Sarkar does not expressly teach wherein the data associated with the live interaction comprises a start time of a live interaction PK, and a total PK time set by a streamer. Sarkar also does not expressly teach that the viewer contributions are viewer contributions of both streamers of the live interaction PK. Sarkar also does not expressly teach that the current session identification is a current PK session identification. Sarkar also does not expressly teach wherein the start time and the total PK time are used to determine a countdown of the live interaction PK. Sarkar also does not expressly teach that the live interaction view is a live interaction PK view.

Hallanan teaches a live interaction PK, viewer contributions of both streamers of a live interaction PK, a countdown of the live interaction PK, and a live interaction PK view (Page 2, “Depending on the platform, streamers usually have the option to battle either with a friend (someone they mutually follow) or a random stranger. Once both streamers have opted into the battle, their streams are combined such that both audiences can see both streamers at the same time — the two streams are placed side by side and the audiences are combined. Neither one is the “host”, and each one brings their own audience.” Page 5, “When the PK starts, a little box will pop up on the streamer’s screen showing themselves, the streamer they are battling, and a countdown timer. Just like a regular PK, the streamer who earns the most gifts during that time period is the winner.”).

In view of Hallanan’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sarkar wherein the data associated with the live interaction comprises a live interaction PK, wherein the viewer contributions are viewer contributions of both streamers of the live interaction PK, wherein the current session identification is a current PK session identification, and wherein the live interaction view is a live interaction PK view. The modification would serve to enhance interactivity between viewers and streamers.

The combination teaches the limitations specified above; however, the combination does not expressly teach wherein the data associated with the live interaction comprises a start time of a live interaction PK, and a total PK time set by a streamer. The combination also does not expressly teach wherein the start time and the total PK time are used to determine the countdown of the live interaction PK.

Roberts teaches a start time of a live interaction, a total time, wherein the start time and the total time are used to determine a countdown of the live interaction (Col. 11, line 62 to col. 12, line 4, “In an embodiment, each quiz question may be required to be answered within a predetermined time period (e.g., within 10 seconds). For example, a countdown timer may be provided in a corner of the screen that indicates how long each participant has left to provide an answer to the quiz. In an embodiment, once all participants have selected an answer to the quiz question or once the predetermined time period to answer the quiz question has expired, whichever of the two happens first, the results of the quiz may be presented to each participant.” Col. 12, lines 37-40, “The depicted quiz question 505 contains four answer choices (i.e., A, B, C, and D) from which a user must select one from before the countdown timer 510 reaches zero.” Fig. 5).

In view of Roberts’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sarkar to include wherein the data associated with the live interaction comprises a start time of a live interaction PK, a total PK time set by a streamer, and wherein the start time and the total PK time are used to determine the countdown of the live interaction PK. The modification would serve to facilitate the management of timing and/or scheduling of live interactions.

Regarding claim 4, Sarkar teaches a view rendering method, executed by a viewer device, comprising: receiving live stream channel information in response to accessing a live stream channel (Col. 9, line 49 to col. 10, line 27, “FIG. 3A illustrates an example 300 of an interface 304 for streaming live content, according to an embodiment of the present disclosure. The interface 304 is presented on a display screen of the computing device 302. The interface 304 may be provided through an application (e.g., a web browser, a social networking application, etc.) running on the computing device 302.” Fig. 3A); in response to the live stream channel information indicating that the live stream channel is in a live interaction state, requesting data associated with the live interaction from a server (Col. 3, lines 43-48, “the content provider module 102 or at least a portion thereof can be implemented using one or more computing devices or systems that include one or more servers, such as network servers or cloud servers.” Col. 4, lines 36-40, “the content provider module 102 or at least a portion thereof can be implemented using one or more computing devices or systems that include one or more servers, such as network servers or cloud servers.” Col. 8, lines 25-60, “Once a user is selected to join as co-broadcaster, the user selection module 204 is configured to send one or more invitations to the user. In some embodiments, such invitations are provided to the user as messages sent to the user through the social networking system. … In some embodiments, when the user accepts the invitation, the user's computing device is instructed to capture and provide a separate live content stream to the content provider, as described above. Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C.”), wherein the data associated with the live interaction is for indicating a current state (Col. 2, line 63 to col. 3, line 35, “A user (e.g., a viewer) operating a computing device can access the live content stream through the content provider. In response, the content provider encodes and provides data corresponding to the live content stream to the user's computing device over a network (e.g., the Internet) in real-time. The computing device can decode and present the live content stream, for example, through a display screen of the computing device.” Col. 8, lines 25-60, “Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C. In some embodiments, the merged live content stream is presented to users as a split screen that is divided into a first region and a second region.” Fig. 3C), wherein the data associated with the live interaction comprises viewer contributions of both streamers (Col. 10, lines 20-23, “In some embodiments, a user can select (e.g., tap) the comment section 318 to reveal additional comments in a comments overlay 320, as illustrated in the example of FIG. 3D.” Figs. 3C-3D), and a current session identification (Col. 10, lines 42-51, “In some embodiments, avatars corresponding to users that are live broadcasting can also be shown in the media tray region 408. For example, the media tray region 408 includes a set of layered avatars 412 corresponding to users Vanessa80 and Ellen who are co-broadcasting. In some embodiments, the avatar corresponding to the primary broadcaster (e.g., Vanessa80) is shown on top. In some embodiments, subsequent avatars included in the set of layered avatars 412 can be covered partially by a preceding avatar. Many variations are possible.” Fig. 4A); and saving the data associated with the live interaction returned from the server (Col. 2, line 63 to col. 3, line 7, “In one example, a live content stream can include content that is being captured and streamed live by a user (e.g., a broadcaster). For example, the broadcaster can capture and stream an event (e.g., a live video of the broadcaster, concert, speech, etc.) as part of a live content stream. Such events can be captured using computing devices (e.g., mobile devices with audio and video capture capabilities) and/or standalone devices (e.g., video cameras and microphones).” Col. 3, lines 47-49, “The second user can capture and provide a separate live content stream from the second user's computing device.”); and in response to pulling a co-stream of video streams of the live interaction, rendering a live interaction view according to the data associated with the live interaction (Col. 2, line 63 to col. 3, line 35, “A user (e.g., a viewer) operating a computing device can access the live content stream through the content provider. In response, the content provider encodes and provides data corresponding to the live content stream to the user's computing device over a network (e.g., the Internet) in real-time. The computing device can decode and present the live content stream, for example, through a display screen of the computing device.” Col. 8, lines 25-60, “Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C. In some embodiments, the merged live content stream is presented to users as a split screen that is divided into a first region and a second region.” Fig. 3C).

Sarkar does not expressly teach wherein the data associated with the live interaction comprises a start time of a live interaction PK, and a total PK time set by a streamer. Sarkar also does not expressly teach that the viewer contributions are viewer contributions of both streamers of the live interaction PK. Sarkar also does not expressly teach that the current session identification is a current PK session identification.
Sarkar also does not expressly teach wherein the start time and the total PK time are used to determine a countdown of the live interaction PK. Sarkar also does not expressly teach that the live interaction view is a live interaction PK view.

Hallanan teaches a live interaction PK, viewer contributions of both streamers of a live interaction PK, a countdown of the live interaction PK, and a live interaction PK view (Page 2, “Depending on the platform, streamers usually have the option to battle either with a friend (someone they mutually follow) or a random stranger. Once both streamers have opted into the battle, their streams are combined such that both audiences can see both streamers at the same time — the two streams are placed side by side and the audiences are combined. Neither one is the “host”, and each one brings their own audience.” Page 5, “When the PK starts, a little box will pop up on the streamer’s screen showing themselves, the streamer they are battling, and a countdown timer. Just like a regular PK, the streamer who earns the most gifts during that time period is the winner.”).

In view of Hallanan’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sarkar wherein the data associated with the live interaction comprises a live interaction PK, wherein the viewer contributions are viewer contributions of both streamers of the live interaction PK, wherein the current session identification is a current PK session identification, and wherein the live interaction view is a live interaction PK view. The modification would serve to enhance interactivity between viewers and streamers.

The combination teaches the limitations specified above; however, the combination does not expressly teach wherein the data associated with the live interaction comprises a start time of a live interaction PK, and a total PK time set by a streamer. The combination also does not expressly teach wherein the start time and the total PK time are used to determine the countdown of the live interaction PK.

Roberts teaches a start time of a live interaction, a total time, wherein the start time and the total time are used to determine a countdown of the live interaction (Col. 11, line 62 to col. 12, line 4, “In an embodiment, each quiz question may be required to be answered within a predetermined time period (e.g., within 10 seconds). For example, a countdown timer may be provided in a corner of the screen that indicates how long each participant has left to provide an answer to the quiz. In an embodiment, once all participants have selected an answer to the quiz question or once the predetermined time period to answer the quiz question has expired, whichever of the two happens first, the results of the quiz may be presented to each participant.” Col. 12, lines 37-40, “The depicted quiz question 505 contains four answer choices (i.e., A, B, C, and D) from which a user must select one from before the countdown timer 510 reaches zero.” Fig. 5).

In view of Roberts’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sarkar to include wherein the data associated with the live interaction comprises a start time of a live interaction PK, a total PK time set by a streamer, and wherein the start time and the total PK time are used to determine the countdown of the live interaction PK. The modification would serve to facilitate the management of timing and/or scheduling of live interactions.

Regarding claims 2 and 16, Sarkar further teaches further comprising, before rendering the live interaction PK view according to the data associated with the live interaction in response to pulling the co-stream of video streams of the live interaction: acquiring the data associated with the live interaction from the message (Col. 8, lines 25-60, “Once a user is selected to join as co-broadcaster, the user selection module 204 is configured to send one or more invitations to the user. In some embodiments, such invitations are provided to the user as messages sent to the user through the social networking system. … In some embodiments, when the user accepts the invitation, the user's computing device is instructed to capture and provide a separate live content stream to the content provider, as described above. Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C.”); and saving the data associated with the live interaction (Col. 2, line 63 to col. 3, line 7, “In one example, a live content stream can include content that is being captured and streamed live by a user (e.g., a broadcaster). For example, the broadcaster can capture and stream an event (e.g., a live video of the broadcaster, concert, speech, etc.) as part of a live content stream. Such events can be captured using computing devices (e.g., mobile devices with audio and video capture capabilities) and/or standalone devices (e.g., video cameras and microphones).” Col. 3, lines 47-49, “The second user can capture and provide a separate live content stream from the second user's computing device.”).

Regarding claim 3, Sarkar further teaches further comprising: before rendering the live interaction PK view according to the data associated with the live interaction in response to pulling the co-stream of video streams of the live interaction, saving the message (Col. 8, lines 25-27, “Once a user is selected to join as co-broadcaster, the user selection module 204 is configured to send one or more invitations to the user.”), wherein the rendering a live interaction PK view according to the data associated with the live interaction in response to pulling a co-stream of video streams of the live interaction comprises: in response to pulling the co-stream of video streams of the live interaction, acquiring the data associated with the live interaction from the message (Col. 8, lines 25-60, “Once a user is selected to join as co-broadcaster, the user selection module 204 is configured to send one or more invitations to the user. In some embodiments, such invitations are provided to the user as messages sent to the user through the social networking system. … In some embodiments, when the user accepts the invitation, the user's computing device is instructed to capture and provide a separate live content stream to the content provider, as described above. Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C.”), and rendering the live interaction PK view according to the data associated with the live interaction (Col. 2, line 63 to col. 3, line 35, “A user (e.g., a viewer) operating a computing device can access the live content stream through the content provider. In response, the content provider encodes and provides data corresponding to the live content stream to the user's computing device over a network (e.g., the Internet) in real-time. The computing device can decode and present the live content stream, for example, through a display screen of the computing device.” Col. 8, lines 25-60, “Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C. In some embodiments, the merged live content stream is presented to users as a split screen that is divided into a first region and a second region.” Fig. 3C).

Regarding claims 5 and 18, Sarkar further teaches further comprising: receiving a first co-stream of video streams from the server in response to accessing the live stream channel (Col. 8, lines 25-60, “Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C. In some embodiments, the merged live content stream is presented to users as a split screen that is divided into a first region and a second region.” Fig. 3C); in response to a first supplemental enhancement information (SEI) in the first co-stream of video streams indicating that the first co-stream of video streams is a co-stream of video streams of the live interaction, requesting the data associated with the live interaction from the server (Col. 10, line 28 to col. 11, line 2, “In some embodiments, avatars corresponding to users that are live broadcasting can also be shown in the media tray region 408. For example, the media tray region 408 includes a set of layered avatars 412 corresponding to users Vanessa80 and Ellen who are co-broadcasting. … The user operating the computing device 402 can select the layered avatars 412 to access the co-broadcast, as illustrated in the example of FIG. 4B.” Figs. 4A-4B); and rendering the live interaction PK view according to the data associated with the live interaction returned from the server (Col. 2, line 63 to col. 3, line 35, “A user (e.g., a viewer) operating a computing device can access the live content stream through the content provider. In response, the content provider encodes and provides data corresponding to the live content stream to the user's computing device over a network (e.g., the Internet) in real-time. The computing device can decode and present the live content stream, for example, through a display screen of the computing device.” Col. 8, lines 25-60, “Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C. In some embodiments, the merged live content stream is presented to users as a split screen that is divided into a first region and a second region.” Figs. 3C, 4B).

Regarding claim 12, Sarkar teaches an electronic device comprising: a memory and a processor, wherein the memory is used for storing a computer program; the processor is used for executing the view rendering method according to claim 1 when calling the computer program (Col. 18, line 55 to col. 19, line 32; Fig. 7).

Regarding claim 13, Sarkar teaches a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the view rendering method according to claim 1 (Col. 18, line 55 to col. 19, line 32; Fig. 7).

Regarding claim 14, Sarkar teaches a computer program product having stored thereon a computer program which, when executed by a processor, implements the view rendering method according to claim 1 (Col. 18, line 55 to col. 19, line 32; Fig. 7).
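For context, the timing limitation that the rejection maps to Roberts (a countdown determined from a start time and a total PK time set by the streamer) reduces to simple clock arithmetic. A minimal sketch; the function and parameter names are hypothetical and are not taken from the claims or the cited references:

```python
def pk_countdown_remaining(start_time: float, total_pk_time: float,
                           now: float) -> float:
    """Seconds left in a PK session: total duration minus elapsed time,
    clamped at zero once the session has ended."""
    elapsed = now - start_time
    return max(0.0, total_pk_time - elapsed)

# A 300-second PK that started 90 seconds ago has 210 seconds remaining.
print(pk_countdown_remaining(start_time=1000.0, total_pk_time=300.0, now=1090.0))
```

In practice a viewer client would pass its current clock as `now`, so every viewer derives the same countdown from the shared start time and duration.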
Regarding claim 17, Sarkar teaches an electronic device comprising: a memory and a processor, wherein the memory is used for storing a computer program; the processor is used for executing the view rendering method according to claim 4 when calling the computer program (Col. 18, line 55 to col. 19, line 32; Fig. 7).

Regarding claim 21, Sarkar teaches a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the view rendering method according to claim 4 (Col. 18, line 55 to col. 19, line 32; Fig. 7).

Regarding claim 23, Sarkar teaches a computer program product having stored thereon a computer program which, when executed by a processor, implements the view rendering method according to claim 4 (Col. 18, line 55 to col. 19, line 32; Fig. 7).

Claim(s) 6-8, 19-20, 22, and 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Sarkar, Matli (US 2023/0104026), Hallanan, and Roberts.

Regarding claim 6, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein the requesting the data associated with the live interaction from the server comprises: in response to the data associated with the live interaction returned from the server being not received and a number of times of requesting the data associated with the live interaction from the server being less than a threshold number of times, requesting the data associated with the live interaction from the server; in response to the data associated with the live interaction returned from the server being not received and the number of times of requesting the data associated with the live interaction from the server being greater than or equal to the threshold number of times, cancelling requesting the data associated with the live interaction from the server.
Matli teaches, in response to data being not received and a number of times of requesting the data being less than a threshold number of times, requesting the data; and in response to the data being not received and the number of times of requesting the data being greater than or equal to the threshold number of times, cancelling requesting the data ([0085], “The IP video platform 606 of a respective smart device 606 returns a confirmation message to the push notification service 624, in Step 672.” [0086], “If a respective confirmation message is not received by the push notification service 624 within a predetermined period of time, or if a failure message is received, for example, the push notification service may try again one or more times.” [0088], “Steps 675-680 may be repeated until a confirmation message is received from all the user devices or the process times out. … Retries may continue for a predetermined number of times, such as 3 to 10 times, for example. After the predetermined number of retries, the push notification service 624 may stop attempting to send push notifications.”).
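The claim 6 behavior mapped to Matli's retry passages is a bounded request loop: keep re-requesting while the attempt count is below a threshold, and cancel once the threshold is reached. A minimal sketch with hypothetical names (`request_fn` and `threshold` are illustrative, not from the record):

```python
def request_with_cap(request_fn, threshold: int):
    """Request data until a response arrives; cancel further requests
    once the number of attempts reaches the threshold."""
    for _ in range(threshold):
        data = request_fn()
        if data is not None:   # data returned from the server: done
            return data
    return None                # threshold reached: stop requesting

# A stand-in server that only answers on the third attempt.
responses = iter([None, None, "pk-data"])
print(request_with_cap(lambda: next(responses), threshold=5))  # pk-data
```

Returning `None` after the cap corresponds to the claimed "cancelling requesting" branch; a real client would likely surface an error or fall back to a default view at that point.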
In view of Matli’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sarkar wherein the requesting the data associated with the live interaction from the server comprises: in response to the data associated with the live interaction returned from the server being not received and a number of times of requesting the data associated with the live interaction from the server being less than a threshold number of times, requesting the data associated with the live interaction from the server; in response to the data associated with the live interaction returned from the server being not received and the number of times of requesting the data associated with the live interaction from the server being greater than or equal to the threshold number of times, cancelling requesting the data associated with the live interaction from the server. The modification would serve to enable a client device to make multiple attempts at retrieving requested content. The modification would thereby improve the user experience.

Regarding claim 7, Sarkar teaches a view rendering method, executed by a viewer device, comprising: receiving a first co-stream of video streams from a server (Col. 8, lines 25-60, “Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C. In some embodiments, the merged live content stream is presented to users as a split screen that is divided into a first region and a second region.” Fig. 3C), wherein first supplemental enhancement information SEI in the first co-stream of video streams indicates that the first co-stream of video streams is a merged stream of live interaction (Col. 10, line 28 to col.
11, line 2, “In some embodiments, avatars corresponding to users that are live broadcasting can also be shown in the media tray region 408. For example, the media tray region 408 includes a set of layered avatars 412 corresponding to users Vanessa80 and Ellen who are co-broadcasting.” Fig. 4A); requesting the data associated with a live interaction from the server, wherein the data associated with the live interaction is used to indicate a current state (Col. 2, line 63 to col. 3, line 35, “A user (e.g., a viewer) operating a computing device can access the live content stream through the content provider. In response, the content provider encodes and provides data corresponding to the live content stream to the user's computing device over a network (e.g., the Internet) in real-time. The computing device can decode and present the live content stream, for example, through a display screen of the computing device.” Col. 8, lines 25-60, “Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C. In some embodiments, the merged live content stream is presented to users as a split screen that is divided into a first region and a second region.” Fig. 3C); wherein the data associated with the live interaction comprises viewer contributions of both streamers (Col. 10, lines 20-23, “In some embodiments, a user can select (e.g., tap) the comment section 318 to reveal additional comments in a comments overlay 320, as illustrated in the example of FIG. 3D.” Figs. 3C-3D), and a current session identification (Col. 10, lines 42-51, “In some embodiments, avatars corresponding to users that are live broadcasting can also be shown in the media tray region 408. 
For example, the media tray region 408 includes a set of layered avatars 412 corresponding to users Vanessa80 and Ellen who are co-broadcasting. In some embodiments, the avatar corresponding to the primary broadcaster (e.g., Vanessa80) is shown on top. In some embodiments, subsequent avatars included in the set of layered avatars 412 can be covered partially by a preceding avatar. Many variations are possible.” Fig. 4A); and receiving the data associated with the live interaction returned from the server; and rendering a live interaction view according to the data associated with the live interaction (Col. 2, line 63 to col. 3, line 35, “A user (e.g., a viewer) operating a computing device can access the live content stream through the content provider. In response, the content provider encodes and provides data corresponding to the live content stream to the user's computing device over a network (e.g., the Internet) in real-time. The computing device can decode and present the live content stream, for example, through a display screen of the computing device.” Col. 8, lines 25-60, “Once the user's separate live content stream is active, the stream merge module 206 merges the broadcaster's live content stream and the user's live content stream so that both the broadcaster (i.e., primary broadcaster) and the user (i.e., co-broadcaster) appear in a merged live content stream, as illustrated in the example of FIG. 3C. In some embodiments, the merged live content stream is presented to users as a split screen that is divided into a first region and a second region.” Fig. 3C). Sarkar does not expressly teach in response to data associated with the live interaction returned from the server being not received and a number of times of requesting the data associated with the live interaction from the server being less than a threshold number of times, requesting the data associated with the live interaction. 
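For orientation only, the viewer-side sequence claim 7 recites (receive the co-stream, check the SEI merged-stream indication, request the interaction data, render the view) could be sketched as below. The `sei_merged_stream` flag and the field names are hypothetical: neither the claim language nor the cited portions of Sarkar specify a payload format.

```python
from dataclasses import dataclass, field

@dataclass
class CoStream:
    # Hypothetical flag decoded from the first SEI message, indicating
    # that this co-stream is a merged stream of a live interaction.
    sei_merged_stream: bool = False
    frames: list = field(default_factory=list)

def render_live_interaction_view(stream, fetch_interaction_data):
    """If the SEI marks the co-stream as a merged live-interaction
    stream, request the interaction data (contributions, session id,
    timing) and build a view description from it."""
    if not stream.sei_merged_stream:
        return None  # ordinary stream: no interaction view to render
    data = fetch_interaction_data()
    if data is None:
        return None  # server did not return the interaction data
    return {"view": "live_interaction", **data}
```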
Sarkar also does not expressly teach wherein the data associated with the live interaction comprises a start time of a live interaction PK, and a total PK time set by a streamer. Sarkar also does not expressly teach that the viewer contributions are viewer contributions of both streamers of the live interaction PK. Sarkar also does not expressly teach that the current session identification is a current PK session identification. Sarkar also does not expressly teach wherein the start time and the total PK time are used to determine a countdown of the live interaction PK. Sarkar also does not expressly teach that the live interaction view is a live interaction PK view.

Matli teaches, in response to data being not received and a number of times of requesting the data being less than a threshold number of times, requesting the data ([0085], “The IP video platform 606 of a respective smart device 606 returns a confirmation message to the push notification service 624, in Step 672.” [0086], “If a respective confirmation message is not received by the push notification service 624 within a predetermined period of time, or if a failure message is received, for example, the push notification service may try again one or more times.” [0088], “Steps 675-680 may be repeated until a confirmation message is received from all the user devices or the process times out. … Retries may continue for a predetermined number of times, such as 3 to 10 times, for example. After the predetermined number of retries, the push notification service 624 may stop attempting to send push notifications.”).
In view of Matli’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sarkar to include, in response to data associated with the live interaction returned from the server being not received and a number of times of requesting the data associated with the live interaction from the server being less than a threshold number of times, requesting the data associated with the live interaction. The modification would serve to enable a client device to make multiple attempts at retrieving requested content. The modification would thereby improve the user experience.

The combination teaches the limitations specified above; however, the combination does not expressly teach wherein the data associated with the live interaction comprises a start time of a live interaction PK, and a total PK time set by a streamer. The combination also does not expressly teach that the viewer contributions are viewer contributions of both streamers of the live interaction PK. The combination also does not expressly teach that the current session identification is a current PK session identification. The combination also does not expressly teach wherein the start time and the total PK time are used to determine a countdown of the live interaction PK. The combination also does not expressly teach that the live interaction view is a live interaction PK view.

Hallanan teaches a live interaction PK, viewer contributions of both streamers of a live interaction PK, a countdown of the live interaction PK, and a live interaction PK view (Page 2, “Depending on the platform, streamers usually have the option to battle either with a friend (someone they mutually follow) or a random stranger. Once both streamers have opted into the battle, their streams are combined such that both audiences can see both streamers at the same time — the two streams are placed side by side and the audiences are combined.
Neither one is the “host”, and each one brings their own audience.” Page 5, “When the PK starts, a little box will pop up on the streamer’s screen showing themselves, the streamer they are battling, and a countdown timer. Just like a regular PK, the streamer who earns the most gifts during that time period is the winner.”).

In view of Hallanan’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sarkar wherein the data associated with the live interaction comprises a live interaction PK, wherein the viewer contributions are viewer contributions of both streamers of the live interaction PK, wherein the current session identification is a current PK session identification, and wherein the live interaction view is a live interaction PK view. The modification would serve to enhance interactivity between viewers and streamers.

The combination teaches the limitations specified above; however, the combination does not expressly teach wherein the data associated with the live interaction comprises a start time of a live interaction PK, and a total PK time set by a streamer. The combination also does not expressly teach wherein the start time and the total PK time are used to determine the countdown of the live interaction PK.

Roberts teaches a start time of a live interaction, a total time, wherein the start time and the total time are used to determine a countdown of the live interaction (Col. 11, line 62 to col. 12, line 4, “In an embodiment, each quiz question may be required to be answered within a predetermined time period (e.g., within 10 seconds). For example, a countdown timer may be provided in a corner of the screen that indicates how long each participant has left to provide an answer to the quiz.
In an embodiment, once all participants have selected an answer to the quiz question or once the predetermined time period to answer the quiz question has expired, whichever of the two happens first, the results of the quiz may be presented to each participant.” Col. 12, lines 37-40, “The depicted quiz question 505 contains four answer choices (i.e., A, B, C, and D) from which a user must select one from before the countdown timer 510 reaches zero.” Fig. 5).

In view of Roberts’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sarkar to include wherein the data associated with the live interaction comprises a start time of a live interaction PK, a total PK time set by a streamer, and wherein the start time and the total PK time are used to determine the countdown of the live interaction PK. The modification would serve to facilitate the management of timing and/or scheduling of live interactions.
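The timing relationship the rejection attributes to Roberts, a countdown determined from a start time and a total duration, reduces to simple clamped arithmetic. A sketch with illustrative names (none of these identifiers appear in the claims or references):

```python
def pk_countdown(start_time, total_pk_time, now):
    """Seconds remaining in the PK session: the countdown runs from
    start_time for total_pk_time seconds and is clamped at zero once
    the session has expired."""
    remaining = (start_time + total_pk_time) - now
    return max(0.0, remaining)
```

A viewer client receiving the start time and total PK time in the interaction data could recompute this locally on each display tick rather than polling the server for the remaining time.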
Regarding claims 8 and 20, the combination further teaches, after receiving the first co-stream of video streams from the server, in response to the data associated with the live interaction returned from the server being not received and the number of times of requesting the data associated with the live interaction from the server being greater than or equal to the threshold number of times, cancelling requesting the data associated with the live interaction from the server (Matli: [0085], “The IP video platform 606 of a respective smart device 606 returns a confirmation message to the push notification service 624, in Step 672.” [0086], “If a respective confirmation message is not received by the push notification service 624 within a predetermined period of time, or if a failure message is received, for example, the push notification service may try again one or more times.” [0088], “Steps 675-680 may be repeated until a confirmation message is received from all the user devices or the process times out. … Retries may continue for a predetermined number of times, such as 3 to 10 times, for example. After the predetermined number of retries, the push notification service 624 may stop attempting to send push notifications.”).

Regarding claim 19, Sarkar teaches an electronic device comprising: a memory and a processor, wherein the memory is used for storing a computer program; the processor is used for executing the view rendering method according to claim 7 when calling the computer program (Col. 18, line 55 to col. 19, line 32; Fig. 7).

Regarding claim 22, the combination teaches a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the view rendering method according to claim 7 (Sarkar: Col. 18, line 55 to col. 19, line 32; Fig. 7).
Regarding claim 24, the combination teaches a computer program product having stored thereon a computer program which, when executed by a processor, implements the view rendering method according to claim 7 (Sarkar: Col. 18, line 55 to col. 19, line 32; Fig. 7).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL R TELAN, whose telephone number is (571) 270-5940. The examiner can normally be reached 9:30 AM-6:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi, can be reached at (571) 272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL R TELAN/ Primary Examiner, Art Unit 2426

Prosecution Timeline

Sep 27, 2024
Application Filed
Oct 07, 2025
Non-Final Rejection — §103
Jan 08, 2026
Response Filed
Mar 11, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604066
SYSTEMS AND METHODS FOR GENERATING NOTIFICATION INTERFACES BASED ON MEDIA BROADCAST ACCESS EVENTS
2y 5m to grant Granted Apr 14, 2026
Patent 12598361
VIDEO OPTIMIZATION PROXY SYSTEM AND METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12598352
VIDEO PRESENTATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12581137
VIDEO MANAGEMENT SYSTEM FOR VIDEO FILES AND LIVE STREAMING CONTENT
2y 5m to grant Granted Mar 17, 2026
Patent 12549801
LYRIC VIDEO DISPLAY METHOD AND DEVICE, ELECTRONIC APPARATUS AND COMPUTER-READABLE MEDIUM
2y 5m to grant Granted Feb 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
42%
Grant Probability
69%
With Interview (+27.0%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 417 resolved cases by this examiner. Grant probability derived from career allow rate.
