Prosecution Insights
Last updated: April 19, 2026
Application No. 18/332,608

METHOD AND APPARATUS FOR DISPLAYING ONLINE INTERACTION, ELECTRONIC DEVICE AND COMPUTER READABLE MEDIUM

Current Action: Non-Final OA (§103)
Filed: Jun 09, 2023
Examiner: TELAN, MICHAEL R
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 5 (Non-Final)
Grant Probability: 42% (Moderate)
Expected OA Rounds: 5-6
Median Time to Grant: 3y 6m
Grant Probability with Interview: 69%

Examiner Intelligence

Career Allow Rate: 42% (176 granted / 417 resolved; -15.8% vs TC avg)
Interview Lift: +27.0% (strong; allow rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 3y 6m (typical timeline)
Currently Pending: 36 applications
Total Applications: 453 (career history, across all art units)
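As a sanity check on these figures, here is a minimal sketch of the arithmetic the dashboard appears to use, assuming the allow rate is simply grants divided by resolved cases and that the interview lift and TC-average delta are additive percentage points (assumptions; the page does not publish its exact methodology):

```python
# Illustrative arithmetic behind the examiner stats shown above.
# Assumes allow rate = grants / resolved and that interview lift and
# TC-average deltas are additive percentage points -- assumptions, since
# the page does not publish its methodology.

granted, resolved = 176, 417

allow_rate = granted / resolved              # 0.4221... -> displayed as 42%
with_interview = allow_rate + 0.27           # +27.0 pt lift -> ~0.69 (69%)
implied_tc_avg = allow_rate + 0.158          # from "-15.8% vs TC avg" -> ~58%

print(f"career allow rate : {allow_rate:.1%}")      # 42.2%
print(f"with interview    : {with_interview:.1%}")  # 69.2%
print(f"implied TC average: {implied_tc_avg:.1%}")  # 58.0%
```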

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 417 resolved cases.
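The four deltas are internally consistent: each row implies the same Tech Center baseline. A quick check, assuming (as above) that the deltas are additive percentage points:

```python
# Back out the Tech Center baseline implied by each statute row,
# assuming "vs TC avg" is an additive percentage-point delta.

rows = {"§101": (7.2, -32.8), "§103": (65.6, +25.6),
        "§102": (13.6, -26.4), "§112": (9.6, -30.4)}

for statute, (rate, delta) in rows.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")

# All four rows print 40.0%, so the deltas appear to be measured against
# a single Tech Center estimate rather than per-statute averages.
```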

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 10, 2026 has been entered.

Response to Arguments

Applicant’s arguments with respect to claim(s) 1-2, 4-15, and 17-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The examiner must, however, address any arguments presented by the applicant which are still relevant to any references being applied.

With regard to claim 1, Applicant submits that the cited prior art does not teach “wherein the live stream interaction interface is obtained by adding, using a dynamic effect, at least one first state adjustment control to the live stream viewing interface, the dynamic effect indicates a preset manner for dynamically presenting a process of adding the at least one first co-hosting state adjustment control to the live stream viewing interface,” as recited in claim 1. Remarks, pp. 11-13. As presented in the claim rejections of claim 1 under 35 USC §103, claim 1 is rejected over a combination of Kedenburg, III (US 2018/0167427, “Kedenburg”), Hartnett et al. (US 2022/0070243, “Hartnett ‘243”), Dandu et al. (US 2016/0266781), and Sullivan et al. (US 2009/0249429).

Hartnett ‘243 teaches: in response to determining that a user participates in a live streaming interaction, switching from a live streaming viewing interface to a live streaming interaction interface ([0034], “the live video streaming system can provide dynamic user interfaces to pending, current, and past participant devices.” [0035], “Further, the live video streaming system can facilitate adding, removing, and swapping viewer devices to and from the public combined live video stream. For example, the live video streaming system can automatically determine and invite target viewer devices to become participant devices and participate in the public combined live video stream.” [0065], “In these implementations, the live video streaming system converts the viewer device into a participant device when their live video stream is added to the public combined live video stream.” That is, a user interface may transition from a viewer device 112 displaying a public combined live video stream interface 702c as depicted in Fig. 7C to a participant device 110 displaying public combined live video stream interface 702b as depicted in Fig. 7B, wherein Fig. 7B illustrates controls pertaining to audio and video), and wherein the live streaming interaction interface is obtained by adding, using a dynamic effect, at least one first state adjustment control to the live streaming viewing interface, the dynamic effect indicates a preset manner for dynamically presenting a process of adding the at least one first state adjustment control to the live stream viewing interface ([0154], “FIG. 7B illustrates a participant device 110 that includes a public combined live video stream interface 702b.” [0257], “the series of acts 1700 includes an act 1712 of the live video streaming system 106 transitioning the target viewer device from a viewer device to a participant device.” [0357], “In various implementations, the user interface manager 2212 can generate, create, update, change, replace, delete, remove, refresh, render, reveal, display, present, and/or provide user interfaces associated with the live video streaming system 106 and/or networking system 104 to client devices (e.g., a host device, a participant device, or a viewer device).” That is, a user interface may transition from a viewer device 112 displaying a public combined live video stream interface 702c as depicted in Fig. 7C to a participant device 110 displaying public combined live video stream interface 702b as depicted in Fig. 7B, wherein Fig. 7B illustrates controls pertaining to audio and video).

In view of Hartnett ‘243’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kedenburg to include, in response to determining that a user participates in the live streaming interaction, switching from a live streaming viewing interface to a live streaming interaction interface, wherein the live streaming interaction interface is obtained by adding, using a dynamic effect, at least one first state adjustment control to the live streaming viewing interface, the dynamic effect indicates a preset manner for dynamically presenting a process of adding the at least one first co-hosting state adjustment control to the live stream viewing interface. The modification would serve to facilitate user operation of the system.

In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “interface changes are tied to a dynamic enhancement of the interface through visual effects that show the adding of controls,” and “applying a dynamic effect to visually embed new controls into ongoing viewing interface,” Remarks, pp. 12-13) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 13-14, 20, and 23-24 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Kedenburg, III (US 2018/0167427, hereinafter “Kedenburg”), Hartnett et al. (US 2022/0070243, hereinafter “Hartnett ‘243”), Dandu et al. (US 2016/0266781), and Sullivan et al. (US 2009/0249429).

Regarding claim 1, Kedenburg teaches a method for live stream interaction, wherein the method comprises: in response to determining that a user participates in the live stream interaction, displaying a live stream viewing interface to a live stream interaction interface ([0041], “the broadcaster can perform one or more touch screen gestures to reveal the filter options. In this example, the broadcaster can perform a swipe gesture in the region 408 to reveal a set of options 410 for applying various filters, as illustrated in the example of FIG. 4B.” [0042], “The broadcaster can also apply filters to the co-broadcaster's live content stream. In this example, any visual modifications to the co-broadcaster's live content stream can appear in the second region 468 through which the co-broadcaster's live content stream is being presented. Similarly, the co-broadcaster can apply filters to the co-broadcaster's live content stream and/or the broadcaster's live content stream.” Fig. 4B), wherein the live stream interaction interface is obtained by adding at least one first state adjustment control ([0041], “the broadcaster can perform one or more touch screen gestures to reveal the filter options. In this example, the broadcaster can perform a swipe gesture in the region 408 to reveal a set of options 410 for applying various filters, as illustrated in the example of FIG. 4B.” [0042], “The broadcaster can also apply filters to the co-broadcaster's live content stream. In this example, any visual modifications to the co-broadcaster's live content stream can appear in the second region 468 through which the co-broadcaster's live content stream is being presented. Similarly, the co-broadcaster can apply filters to the co-broadcaster's live content stream and/or the broadcaster's live content stream.” Fig. 4B), in response to a trigger operation of the user for a target control in the at least one first state adjustment control, adjusting a live stream interaction state of the user on the live stream interaction interface ([0041], “The broadcaster can select any of the options to apply the various filters described above. Any content that is inserted into the live content stream using the filter options can appear as one or more overlay in the interface 404 through which the live content stream is being presented. As mentioned, such visual modifications to the live content stream can also be applied, or propagated, to the respective interfaces of viewers that are accessing the live content stream. In the example of FIG. 4B, the broadcaster can select an option to draw (or doodle) in the live content stream. The broadcaster can also select an option to insert text into the live content stream. Once complete, these visual modifications can be presented in the interface 404, as illustrated in the example of FIG. 4C. As shown, the interface 404 in FIG. 4C has been updated to include the broadcaster's doodle 412 and inserted text 414.” Figs. 4B-4C).
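To make the limitation in dispute concrete, here is a hypothetical sketch of the claimed behavior: switching to an interaction interface by animating the insertion of state adjustment controls. Every identifier below is invented for illustration and is not taken from the application or the cited references.

```python
# Hypothetical sketch of the claim-1 limitation in dispute: switching
# from a viewing interface to an interaction interface by animating
# ("dynamic effect") the insertion of state adjustment controls.
# All names are invented for illustration only.

from dataclasses import dataclass, field


@dataclass
class Control:
    name: str                      # e.g. a mic or camera toggle


@dataclass
class LiveStreamInterface:
    controls: list[Control] = field(default_factory=list)

    def add_with_effect(self, control: Control, effect: str) -> None:
        # The "dynamic effect" is a preset manner of presenting the
        # *process* of adding the control (slide-in, fade-in, ...),
        # not merely the control's final appearance.
        print(f"animating '{effect}' while adding {control.name}")
        self.controls.append(control)


def on_user_joins_interaction(ui: LiveStreamInterface) -> LiveStreamInterface:
    # Switching to the interaction interface: the viewing interface
    # gains animated-in state adjustment controls.
    for c in (Control("mic_toggle"), Control("camera_toggle")):
        ui.add_with_effect(c, effect="slide_in")
    return ui


on_user_joins_interaction(LiveStreamInterface())
```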
While Kedenburg teaches, in response to determining that a user participates in the live streaming interaction, displaying a live streaming viewing interface, Kedenburg does not expressly teach, in response to determining that a user participates in the live streaming interaction, switching from a live streaming viewing interface to a live streaming interaction interface. Kedenburg also does not expressly teach wherein the live streaming interaction interface is obtained by adding, using a dynamic effect, at least one first state adjustment control to the live streaming viewing interface. Kedenburg also does not expressly teach the dynamic effect indicates a preset manner for dynamically presenting a process of adding the at least one first co-hosting state adjustment control to the live stream viewing interface. Kedenburg also does not expressly teach a first quantity of the at least one first state adjustment control is determined based on an available space for deploying controls on the live streaming viewing interface. Kedenburg also does not expressly teach the at least one first state adjustment control in the determined first quantity is selected according to a theme of a live stream in which the user participates, from candidate controls based on priorities of the candidate controls. Hartnett ‘243 teaches: in response to determining that a user participates in a live streaming interaction, switching from a live streaming viewing interface to a live streaming interaction interface ([0034], “the live video streaming system can provide dynamic user interfaces to pending, current, and past participant devices.” [0035], “Further, the live video streaming system can facilitate adding, removing, and swapping viewer devices to and from the public combined live video stream. For example, the live video streaming system can automatically determine and invite target viewer devices to become participant devices and participate in the public combined live video stream.” [0065], “In these implementations, the live video streaming system converts the viewer device into a participant device when their live video stream is added to the public combined live video stream.” That is, a user interface may transition from a viewer device 112 displaying a public combined live video stream interface 702c as depicted in Fig. 7C to a participant device 110 displaying public combined live video stream interface 702b as depicted in Fig. 7B, wherein Fig. 7B illustrates controls pertaining to audio and video), and wherein the live streaming interaction interface is obtained by adding, using a dynamic effect, at least one first state adjustment control to the live streaming viewing interface, the dynamic effect indicates a preset manner for dynamically presenting a process of adding the at least one first state adjustment control to the live stream viewing interface ([0154], “FIG. 
7B illustrates a participant device 110 that includes a public combined live video stream interface 702b.” [0257], “the series of acts 1700 includes an act 1712 of the live video streaming system 106 transitioning the target viewer device from a viewer device to a participant device.” [0357], “In various implementations, the user interface manager 2212 can generate, create, update, change, replace, delete, remove, refresh, render, reveal, display, present, and/or provide user interfaces associated with the live video streaming system 106 and/or networking system 104 to client devices (e.g., a host device, a participant device, or a viewer device).” That is, a user interface may transition from a viewer device 112 displaying a public combined live video stream interface 702c as depicted in Fig. 7C to a participant device 110 displaying public combined live video stream interface 702b as depicted in Fig. 7B, wherein Fig. 7B illustrates controls pertaining to audio and video). In view of Hartnett ‘243’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kedenburg to include, in response to determining that a user participates in the live streaming interaction, switching from a live streaming viewing interface to a live streaming interaction interface, wherein the live streaming interaction interface is obtained by adding, using a dynamic effect, at least one first state adjustment control to the live streaming viewing interface, the dynamic effect indicates a preset manner for dynamically presenting a process of adding the at least one first co-hosting state adjustment control to the live stream viewing interface. The modification would serve to facilitate user operation of the system. The combination teaches the limitations specified above; however, the combination does not expressly teach a first quantity of the at least one first state adjustment control is determined based on an available space for deploying controls on the live streaming viewing interface. The combination also does not expressly teach the at least one first state adjustment control in the determined first quantity is selected according to a theme of a live stream in which the user participates, from candidate controls based on priorities of the candidate controls. Dandu teaches: a first quantity of at least one first state adjustment control is determined based on an available space for deploying controls on a viewing interface, and at least one first state adjustment control in a determined first quantity is selected from candidate controls based on priorities of the candidate controls ([0029], “For example, FIG. 6 is a table diagram showing sample contents of a custom control precedence table containing a precedence order for presenting controls specified by an application. The table 600 is made up of rows 601-607, each corresponding to a different control. In each row, a priority column 611 indicates a numerical priority for the control, while a control column 612 identifies the control. For example, row 601 indicates that a play/pause button has priority 1, i.e., the highest priority. After this precedence order for controls is specified by the application, the facility will include the controls with the highest precedence—the smallest priority values—until the space available for controls is exhausted.” [0031], “FIG. 
7 is a display diagram showing media player controls presented in accordance with the precedence established in FIG. 6 and Table 5 when a relatively large amount of display space is available for controls, such as on a large display device, where much or all of the display area of the display device available to display media player controls. It can be seen that these controls 700 include the controls that have been assigned the six highest positions in the precedence order—i.e., priority values 1-6 shown in rows 601-606 of FIG. 6: a play/pause button 702, an audio effects button 710, a volume/mute button 705, a seek bar, a full window button 704, and a zoom button 703.” [0032], “FIG. 8 is a display diagram showing the presentation of media player controls in accordance with the control precedence shown in FIG. 6 and in Table 5 in a case where a relatively small amount of display space is available, such as on a device having a small display device or a display where only a small portion of the display is allocated to display of media play controls. The controls 800 include the four highest controls in the precedence order—i.e., the controls having priority values 1-4 in rows 601-604 of FIG. 6: play/pause button 802, audio effects button 810, volume/mute button 805, and seek bar 801.” Figs. 6-8). In view of Dandu’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein a first quantity of the at least one first state adjustment control is determined based on an available space for deploying controls on the live streaming viewing interface, and the at least one first state adjustment control in the determined first quantity is selected from candidate controls based on priorities of the candidate controls. The modification would serve to enable a combined system to customize the user interface so that when the media player is constrained to a small display area, the user interface controls that are of higher precedence. The modification would serve to facilitate user operation of the system. The combination teaches the limitations specified above; however, the combination does not expressly teach the at least one first state adjustment control in the determined first quantity is selected according to a theme of a live stream in which the user participates. Sullivan teaches at least one first state adjustment control is selected according to a theme of media content ([0051], “Thematic metadata can include for example thematic GUI elements representative of a theme of the media program. The GUI elements can represent media controls that symbolically emulate a theme of the media program.” [0052], “With thematic metadata, common media controls can be replaced by the MP with thematic media controls. For example, referee calls can be used to symbolize the media controls rewind, pause, and forward. A football player initiating a kickoff can be used to symbolize a media playback button. Similarly, a track bar that indicates how much of the media program has transpired, and can also serve to fast rewind or fast forward the media program when dragged left or right respectively can be replaced with a football symbol.” Fig. 7). 
In view of Sullivan’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein the at least one first state adjustment control in the determined first quantity is selected according to a theme of a live stream in which the user participates. The modification would serve to enhance user engagement with content.

Regarding claim 13, the combination further teaches wherein the at least one first co-hosting state adjustment control is displayed in a predetermined area of the live streaming co-hosting display page in a predetermined order (Kedenburg: [0041], “the broadcaster can perform one or more touch screen gestures to reveal the filter options. In this example, the broadcaster can perform a swipe gesture in the region 408 to reveal a set of options 410 for applying various filters, as illustrated in the example of FIG. 4B.” [0042], “The broadcaster can also apply filters to the co-broadcaster's live content stream. In this example, any visual modifications to the co-broadcaster's live content stream can appear in the second region 468 through which the co-broadcaster's live content stream is being presented. Similarly, the co-broadcaster can apply filters to the co-broadcaster's live content stream and/or the broadcaster's live content stream.” Fig. 4B).

Regarding claim 14, Kedenburg teaches an electronic device, wherein the electronic device comprises a processor and a memory; the memory is configured to store instructions or computer programs; and the processor is configured to execute the instructions or computer programs in the memory to cause the method of claim 1 ([0081], [0083], Fig. 7). The rejection of claim 1 under 35 USC §103 is similarly applied to the remaining limitations of claim 14.

Regarding claim 20, Kedenburg teaches a non-transitory computer readable medium having instructions or computer programs stored thereon, wherein the instructions or computer programs, when being executed on a device, cause the method of claim 1 ([0081], [0083], Fig. 7). The rejection of claim 1 under 35 USC §103 is similarly applied to the remaining limitations of claim 20.

Claim(s) 2, 4 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Kedenburg, Hartnett ‘243, Dandu, Sullivan, and Apurvi (US 2023/0396826).

Regarding claim 2, Kedenburg teaches the limitations specified above; however, Kedenburg does not expressly teach wherein the at least one first state adjustment control comprises at least one of a first voice control and a first video control. Apurvi teaches a state adjustment control comprises at least one of a first voice control and a first video control ([0080], “The functions given to the host 120 are described as below:” [0083], “Video on/off” [0084], “Audio Mute/Unmute” [0094], “The functions given to Participants is described as below:” [0097], “Video on/off” [0098], “Audio Mute/Unmute”). In view of Apurvi’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention wherein the at least one first state adjustment control comprises at least one of a first voice control and a first video control. The modification would allow co-hosts to determine whether their voice is enabled or not. The modification would improve the experience for users.
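The Dandu passage relied on above is essentially an algorithm: rank candidate controls by priority and add them until display space runs out, with Sullivan supplying theme-specific control variants. A minimal sketch of that selection logic follows, under an invented slot-based space model; all identifiers are illustrative and nothing below is quoted from either reference.

```python
# Minimal sketch of the control-selection behavior the rejection
# attributes to Dandu (priority order, capped by available space) and
# Sullivan (theme-specific control variants). All identifiers and the
# slot-based space model are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class CandidateControl:
    name: str
    priority: int              # smaller value = higher precedence (per Dandu)
    themes: set[str]           # themes this variant suits (per Sullivan)


CANDIDATES = [
    CandidateControl("play_pause", 1, {"sports", "music"}),
    CandidateControl("mic_toggle", 2, {"sports", "music"}),
    CandidateControl("camera_toggle", 3, {"sports", "music"}),
    CandidateControl("seek_bar_football", 4, {"sports"}),   # themed variant
    CandidateControl("seek_bar_plain", 4, {"music"}),
    CandidateControl("zoom", 5, {"sports", "music"}),
]


def select_controls(theme: str, available_slots: int) -> list[str]:
    # Keep only candidates matching the live stream's theme, then take
    # the highest-precedence ones until display space runs out.
    themed = [c for c in CANDIDATES if theme in c.themes]
    themed.sort(key=lambda c: c.priority)
    return [c.name for c in themed[:available_slots]]


print(select_controls("sports", available_slots=4))
# ['play_pause', 'mic_toggle', 'camera_toggle', 'seek_bar_football']
print(select_controls("music", available_slots=2))
# ['play_pause', 'mic_toggle']
```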
Regarding claim 4, Kedenburg teaches the limitations specified above; however, Kedenburg does not expressly teach wherein the target control is a first video control; the adjusting the live stream interaction state of the target user on the live stream interaction interface, comprises: adjusting the first video control from a first usage state to a second usage state on the live stream interaction interface, wherein the first usage state is a video enabled state, and the second usage state is a video disabled state; or the second usage state is the video enabled state, and the first usage state is the video disabled state. Apurvi teaches: a first video control; adjusting a co-hosting state of a user, comprises: adjusting the first video control from a first usage state to a second usage state, wherein the first usage state is a video enabled state, and the second usage state is a video disabled state; or the second usage state is the video enabled state, and the first usage state is the video disabled state ([0080], “The functions given to the host 120 are described as below:” [0083], “Video on/off” [0084], “Audio Mute/Unmute” [0094], “The functions given to Participants is described as below:” [0097], “Video on/off” [0098], “Audio Mute/Unmute”). In view of Apurvi’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention wherein the target control is a first video control; the adjusting the co-hosting state of the target user on the live stream interaction interface, comprises: adjusting the first video control from a first usage state to a second usage state on the live stream interaction interface, wherein the first usage state is a video enabled state, and the second usage state is a video disabled state; or the second usage state is the video enabled state, and the first usage state is the video disabled state. The modification would allow co-hosts to determine whether their video is enabled or not. The modification would improve the experience for users. Regarding claim 15, Kedenburg teaches the limitations specified above; however, Kedenburg does not expressly teach wherein the target control is a first voice control; the adjusting the live stream interaction state of the target user on the live stream interaction interface, comprises: adjusting the first voice control from a first voice state to a second voice state on the live stream interaction interface, wherein the first voice state is a voice enabled state, and the second voice state is a voice disabled state; or the second voice state is the voice enabled state, and the first voice state is the voice disabled state. Apurvi teaches: a first voice control; adjusting a co-hosting state of a user, comprises: adjusting the first voice control from a first voice state to a second voice state , wherein the first voice state is a voice enabled state, and the second voice state is a voice disabled state; or the second voice state is the voice enabled state, and the first voice state is the voice disabled state ([0080], “The functions given to the host 120 are described as below:” [0083], “Video on/off” [0084], “Audio Mute/Unmute” [0094], “The functions given to Participants is described as below:” [0097], “Video on/off” [0098], “Audio Mute/Unmute”). 
In view of Apurvi’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention wherein the target control is a first voice control; the adjusting the co-hosting state of the target user on the live stream interaction interface, comprises: adjusting the first voice control from a first voice state to a second voice state on the live stream interaction interface, wherein the first voice state is a voice enabled state, and the second voice state is a voice disabled state; or the second voice state is the voice enabled state, and the first voice state is the voice disabled state. The modification would allow co-hosts to determine whether their voice is enabled or not. The modification would improve the experience for users. Claim(s) 5, 7, 9-12, and 17-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Kedenburg, Hartnett ‘243, Dandu, Sullivan, and Hartnett et al. (US 2022/0070504, hereinafter “Hartnett ‘504”). Regarding claims 5 and 17, the combination further teaches the limitations specified above, and teaches wherein the live stream interaction interface comprises a second state adjustment control (Kedenburg: [0041], “the broadcaster can perform one or more touch screen gestures to reveal the filter options. In this example, the broadcaster can perform a swipe gesture in the region 408 to reveal a set of options 410 for applying various filters, as illustrated in the example of FIG. 4B.” [0042], “The broadcaster can also apply filters to the co-broadcaster's live content stream. In this example, any visual modifications to the co-broadcaster's live content stream can appear in the second region 468 through which the co-broadcaster's live content stream is being presented. Similarly, the co-broadcaster can apply filters to the co-broadcaster's live content stream and/or the broadcaster's live content stream.” Fig. 4B); the method further comprises: receiving a configuration operation of the user for at least one to-be-adjusted live stream interaction state; and in response to a state adjustment request triggered by the user on the live stream interaction state management interface, adjusting the live stream interaction state of the user on the live stream interaction interface according to the configuration operation (Kedenburg: [0041], “The broadcaster can select any of the options to apply the various filters described above. Any content that is inserted into the live content stream using the filter options can appear as one or more overlay in the interface 404 through which the live content stream is being presented. As mentioned, such visual modifications to the live content stream can also be applied, or propagated, to the respective interfaces of viewers that are accessing the live content stream. In the example of FIG. 4B, the broadcaster can select an option to draw (or doodle) in the live content stream. The broadcaster can also select an option to insert text into the live content stream. Once complete, these visual modifications can be presented in the interface 404, as illustrated in the example of FIG. 4C. As shown, the interface 404 in FIG. 4C has been updated to include the broadcaster's doodle 412 and inserted text 414.” Figs. 4B-4C). 
The combination does not expressly teach, in response to a trigger operation of the user for the second state adjustment control, displaying a live stream interaction state management interface to the user, and the co-hosting state adjustment request is triggered by the target user on the co-hosting state management page. Hartnett ‘504 teaches: in response to a trigger operation of a target user, displaying a co-hosting state management page to the target user, and a co-hosting state adjustment request is triggered by the target user on the co-hosting state management page ([0099], “the host user interface 302 can include live stream settings elements 312 that correspond to setting up and facilitating a public combined live video stream. For example, as shown, the live stream settings elements 312 include elements corresponding to adding a title to the public combined live video stream, setting up various digital rooms, changing the setup or scheme of the public combined live video stream, and adding activities to the public combined live video stream. Many of the live stream setting elements are further described below.” [0262], “As mentioned above in connection with FIG. 3A, the live stream settings elements 312 includes elements that can correspond to adding a title to the public combined live video stream, changing the setup or scheme of the public combined live video stream, and adding activities to the public combined live video stream. As shown in FIG. 18A, the live video streaming system 106 can detect selection of the ‘Setup’ live stream settings element and, in response, update the host user interface 1802 to show a live stream setup menu 1804.” [0263], “As shown, the live stream setup menu 1804 includes various live stream setup options 1806 including options to enable, modify, and/or specify a participant lineup, eligibility requirements, digital purchase options, digital auction settings, host authorizations, room access, and comments among other live stream setup options 1806 not show.” Fig. 3A). In view of Hartnett ‘504’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention modify the combination to include, in response to a trigger operation of the user for the second state adjustment control, displaying a live stream interaction state management interface to the user, and the co-hosting state adjustment request is triggered by the target user on the co-hosting state management page. The modification would serve to facilitate user navigation and operation of the system. Regarding claim 7, the combination further teaches wherein the streaming interaction state management interface comprises at least one to-be-used control; the at least one to-be-used control is used to control the at least one to-be-adjusted live stream interaction state; the receiving the configuration operation of the user for the at least one to-be-adjusted live stream interaction state, comprises: receiving a trigger operation of the user for the at least one to-be-used control; the method further comprises: in response to the trigger operation of the user for the at least one to-be-used control, adjusting state description information of the at least one to-be-used control on the live stream interaction state management interface (Hartnett ‘504: [0099], [0262], “As mentioned above in connection with FIG. 
3A, the live stream settings elements 312 includes elements that can correspond to adding a title to the public combined live video stream, changing the setup or scheme of the public combined live video stream, and adding activities to the public combined live video stream. As shown in FIG. 18A, the live video streaming system 106 can detect selection of the ‘Setup’ live stream settings element and, in response, update the host user interface 1802 to show a live stream setup menu 1804.” [0263], “As shown, the live stream setup menu 1804 includes various live stream setup options 1806 including options to enable, modify, and/or specify a participant lineup, eligibility requirements, digital purchase options, digital auction settings, host authorizations, room access, and comments among other live stream setup options 1806 not show.” [0281], “The host device 108 can select one or more activities associated with a public combined live video stream. To illustrate, FIG. 18B shows the activity element 1810 selected within the host user interface 1802. As shown, the host user interface 1802 shows the activity element 1810 along with an activities list 1812 of example activities the live video streaming system 106 can facilitate during a public combined live video stream.” Fig. 3A). Regarding claims 9 and 18, the combination further teaches wherein the method further comprises: in response to a request of ending live stream interaction, stopping displaying the at least one first state adjustment control on the live stream interaction interface (Hartnett: [0185], “In one or more implementations, a given participant device from the participant devices 110 requests to leave the public combined live video stream. For example, the live video streaming system 106 detects selection of an exit live video stream element at the given participant device.” [0187], “As shown, the series of acts 1000 includes an act 1006 of the live video streaming system 106 generating a digital post-participation room for the leaving participant device. For example, the live video streaming system 106 can move the live video stream of the given participant device from the public combined live video stream to the digital post-participation room.” Specifically, Fig. 7B illustrates Participant Device presenting control elements at the bottom row. A user may exit via the exit live video stream element 614 to a digital room, e.g., Digital Post-Participation Room presented in Fig. 11B. As illustrated in Fig. 11B, the control elements are not present.). Regarding claim 10, the combination further teaches wherein the method further comprises: in response to the request of ending live streaming interaction, adjusting a display state of a comment control on the live streaming interaction interface (Hartnett ‘504: [0185], “In one or more implementations, a given participant device from the participant devices 110 requests to leave the public combined live video stream. For example, the live video streaming system 106 detects selection of an exit live video stream element at the given participant device.” [0187], “As shown, the series of acts 1000 includes an act 1006 of the live video streaming system 106 generating a digital post-participation room for the leaving participant device. For example, the live video streaming system 106 can move the live video stream of the given participant device from the public combined live video stream to the digital post-participation room.” Specifically, Fig. 
7B illustrates Participant Device presenting a chat/message icon on the bottom row. A user may exit via the exit live video stream element 614 to a digital room, e.g., Digital Post-Participation Room presented in Fig. 11B. As illustrated in Fig. 11B, the chat/message icon is not present.). Regarding claim 11, the combination further teaches wherein the stopping displaying the at least one first state adjustment control on the live streaming, comprises: deleting at least one first state adjustment control from the live stream interaction interface according to a predetermined first dynamic effect pattern (Hartnett ‘504: [0185], “In one or more implementations, a given participant device from the participant devices 110 requests to leave the public combined live video stream. For example, the live video streaming system 106 detects selection of an exit live video stream element at the given participant device.” [0187], “As shown, the series of acts 1000 includes an act 1006 of the live video streaming system 106 generating a digital post-participation room for the leaving participant device. For example, the live video streaming system 106 can move the live video stream of the given participant device from the public combined live video stream to the digital post-participation room.” Specifically, Fig. 7B illustrates Participant Device presenting control elements at the bottom row. A user may exit via the exit live video stream element 614 to a digital room, e.g., Digital Post-Participation Room presented in Fig. 11B. As illustrated in Fig. 11B, the control elements are not present.); and/or the adjusting the display state of the comment control, comprises: adjusting a control display state of the comment control from an icon display state to a text box display state on the live stream interaction interface page according to a predetermined second dynamic effect pattern. Regarding claims 12 and 19, the combination further teaches wherein the live stream interaction interface further comprises a lives streaming interaction state display interface of the user (Kedenburg: [0041], “the broadcaster can perform one or more touch screen gestures to reveal the filter options. In this example, the broadcaster can perform a swipe gesture in the region 408 to reveal a set of options 410 for applying various filters, as illustrated in the example of FIG. 4B.” [0042], “The broadcaster can also apply filters to the co-broadcaster's live content stream. In this example, any visual modifications to the co-broadcaster's live content stream can appear in the second region 468 through which the co-broadcaster's live content stream is being presented. Similarly, the co-broadcaster can apply filters to the co-broadcaster's live content stream and/or the broadcaster's live content stream.” Fig. 4B); the method further comprises: receiving a configuration operation of the user for at least one to-be-adjusted live stream interaction state; and in response to a state adjustment request triggered by the user, adjusting the live stream interaction state of the user on the live stream interaction interface according to the configuration operation (Kedenburg: [0041], “The broadcaster can select any of the options to apply the various filters described above. Any content that is inserted into the live content stream using the filter options can appear as one or more overlay in the interface 404 through which the live content stream is being presented. 
As mentioned, such visual modifications to the live content stream can also be applied, or propagated, to the respective interfaces of viewers that are accessing the live content stream. In the example of FIG. 4B, the broadcaster can select an option to draw (or doodle) in the live content stream. The broadcaster can also select an option to insert text into the live content stream. Once complete, these visual modifications can be presented in the interface 404, as illustrated in the example of FIG. 4C. As shown, the interface 404 in FIG. 4C has been updated to include the broadcaster's doodle 412 and inserted text 414.” Figs. 4B-4C). The combination does not expressly teach, in response to a trigger operation of the user for the live stream interaction state display interface, displaying a live stream interaction state management interface to the user, and the state adjustment request is triggered by the user on the live stream interaction interface state management. Hartnett ‘504 teaches: in response to a trigger operation of a target user, displaying a co-hosting state management page to the target user, and a co-hosting state adjustment request is triggered by the target user on the co-hosting state management page ([0099], “the host user interface 302 can include live stream settings elements 312 that correspond to setting up and facilitating a public combined live video stream. For example, as shown, the live stream settings elements 312 include elements corresponding to adding a title to the public combined live video stream, setting up various digital rooms, changing the setup or scheme of the public combined live video stream, and adding activities to the public combined live video stream. Many of the live stream setting elements are further described below.” [0262], “As mentioned above in connection with FIG. 3A, the live stream settings elements 312 includes elements that can correspond to adding a title to the public combined live video stream, changing the setup or scheme of the public combined live video stream, and adding activities to the public combined live video stream. As shown in FIG. 18A, the live video streaming system 106 can detect selection of the ‘Setup’ live stream settings element and, in response, update the host user interface 1802 to show a live stream setup menu 1804.” [0263], “As shown, the live stream setup menu 1804 includes various live stream setup options 1806 including options to enable, modify, and/or specify a participant lineup, eligibility requirements, digital purchase options, digital auction settings, host authorizations, room access, and comments among other live stream setup options 1806 not show.” Fig. 3A). In view of Hartnett ‘504’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in response to a trigger operation of the user for the live stream interaction state display interface, displaying a live stream interaction state management interface to the user, and the state adjustment request is triggered by the user on the live stream interaction interface state management. The modification would serve to facilitate user navigation and operation of the system. Claim(s) 6 and 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Kedenburg, Hartnett ‘243, Dandu, Sullivan, Hartnett ‘504, and Apurvi. 
Regarding claim 6, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein the live stream interaction state management interface comprises at least one of a second voice control, a second video control, a third video control, an image processing control, and a special effect configuration control. Apurvi teaches a voice control and a video control ([0080], “The functions given to the host 120 are described as below:” [0083], “Video on/off” [0084], “Audio Mute/Unmute” [0094], “The functions given to Participants is described as below:” [0097], “Video on/off” [0098], “Audio Mute/Unmute”). In view of Apurvi’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention wherein the live stream interaction state management interface comprises at least one of a second voice control, a second video control, a third video control, an image processing control, and a special effect configuration control. The modification would allow co-hosts to determine whether their voice is enabled or not. The modification would improve the experience for users.

Regarding claim 8, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein the method further comprises: in response to the trigger operation of the user for the second state adjustment control, controlling both a voice state and a video state of the user to be in a disabled state. Apurvi teaches controlling both a voice state and a video state of a user to be in a disabled state ([0080], “The functions given to the host 120 are described as below:” [0083], “Video on/off” [0084], “Audio Mute/Unmute” [0094], “The functions given to Participants is described as below:” [0097], “Video on/off” [0098], “Audio Mute/Unmute”). In view of Apurvi’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention wherein the method further comprises: in response to the trigger operation of the user for the second state adjustment control, controlling both a voice state and a video state of the user to be in a disabled state. The modification would allow co-hosts to disable video and voice presentation. The modification would improve the experience for users.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL R TELAN whose telephone number is (571)270-5940. The examiner can normally be reached 9:30AM-6:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi can be reached at (571) 272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL R TELAN/Primary Examiner, Art Unit 2426
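Several of the dependent-claim rejections above (claims 2, 4, 8, and 15) reduce to Apurvi's “Video on/off” and “Audio Mute/Unmute” functions, i.e. a control that flips between an enabled and a disabled usage state in either direction. A toy sketch of that state adjustment, with invented names, just to pin down the enable/disable symmetry the claims recite:

```python
# Toy model of the enabled/disabled state adjustment recited in claims
# 4 and 15 (video and voice controls). Names are illustrative only.

from enum import Enum


class UsageState(Enum):
    ENABLED = "enabled"
    DISABLED = "disabled"


class StateAdjustmentControl:
    def __init__(self, kind: str, state: UsageState = UsageState.ENABLED):
        self.kind = kind            # "video" or "voice"
        self.state = state

    def trigger(self) -> UsageState:
        # A trigger operation adjusts the control from its current
        # usage state to the other one, in either direction -- matching
        # the "first state ... second state; or ..." claim phrasing.
        self.state = (UsageState.DISABLED if self.state is UsageState.ENABLED
                      else UsageState.ENABLED)
        return self.state


video = StateAdjustmentControl("video")
print(video.trigger())   # UsageState.DISABLED (video off)
print(video.trigger())   # UsageState.ENABLED  (video back on)
```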

Prosecution Timeline

Jun 09, 2023
Application Filed
Aug 23, 2024
Non-Final Rejection — §103
Nov 27, 2024
Response Filed
Feb 07, 2025
Final Rejection — §103
May 12, 2025
Request for Continued Examination
May 16, 2025
Response after Non-Final Action
Jul 14, 2025
Non-Final Rejection — §103
Oct 16, 2025
Response Filed
Dec 08, 2025
Final Rejection — §103
Feb 12, 2026
Response after Non-Final Action
Mar 10, 2026
Request for Continued Examination
Mar 19, 2026
Response after Non-Final Action
Mar 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this examiner involving similar technology

Patent 12604066
SYSTEMS AND METHODS FOR GENERATING NOTIFICATION INTERFACES BASED ON MEDIA BROADCAST ACCESS EVENTS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12598361
VIDEO OPTIMIZATION PROXY SYSTEM AND METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12598352
VIDEO PRESENTATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12581137
VIDEO MANAGEMENT SYSTEM FOR VIDEO FILES AND LIVE STREAMING CONTENT
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12549801
LYRIC VIDEO DISPLAY METHOD AND DEVICE, ELECTRONIC APPARATUS AND COMPUTER-READABLE MEDIUM
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 42%
With Interview: 69% (+27.0%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 417 resolved cases by this examiner. Grant probability derived from career allow rate.
