Prosecution Insights
Last updated: April 19, 2026
Application No. 18/666,737

SYSTEM FOR PROVIDING A COMMUNITY LIVE STREAMING AND METHOD THEREOF

Non-Final OA §103

Filed: May 16, 2024
Examiner: JONES, CARISSA ANNE
Art Unit: 2691
Tech Center: 2600 — Communications
Assignee: Jjaann Company
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (20 granted / 24 resolved; +21.3% vs TC avg), above average
Interview Lift: strong, +25.0% among resolved cases with interview
Typical Timeline: 2y 10m avg prosecution; 30 applications currently pending
Career History: 54 total applications across all art units

Statute-Specific Performance

§101: 3.1% (-36.9% vs TC avg)
§103: 76.0% (+36.0% vs TC avg)
§102: 11.6% (-28.4% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)

Tech Center averages are estimates • Based on career data from 24 resolved cases
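The deltas above are presumably computed as the examiner's per-statute rate minus the Tech Center average. A quick arithmetic check on only the numbers shown (plain Python, no external data) shows every statute implies the same 40.0% baseline:

```python
# Examiner's statute-specific rates and stated deltas vs the TC average,
# copied from the figures above: {statute: (rate %, delta %)}
stats = {
    "101": (3.1, -36.9),
    "103": (76.0, +36.0),
    "102": (11.6, -28.4),
    "112": (4.9, -35.1),
}

# If delta = rate - tc_avg, then the implied TC baseline is rate - delta.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```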

Office Action

§103
DETAILED ACTION

This action is in response to the application filed 05/16/2024. Claims 1 – 20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 4 is objected to because of the following informalities:

Claim 4 recites: “The community live streaming system of claim 1, further comprising: a management unit configured to store and manage user information party room information, and participant activity history, the user information including personal and interest information…” Examiner believes there was a typographical error and a comma should be placed between “user information” and “party room information” to distinguish the two types of information.

Claim 20 recites “The method of claim 19, wherein the step (d) comprises outputting the camera output images the camera output images of the corresponding participant terminal associated with said one of the candidate streamers at the split screen area of the corresponding seat number when the host streamer terminal selects one of the candidate streamers on the main frame area.” Examiner believes there was a typographical error and “the camera output images” has been repeated twice.

Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 – 3 and 11 - 13 are rejected under 35 U.S.C. 103 as being unpatentable over Hartnett et al. (U.S. Pub. No. 2022/0070243, hereinafter “Hartnett”) in view of Anderson et al. (U.S. Pub. No. 2012/0182384, hereinafter “Anderson”) and Sirivara et al. (U.S. Pub. No. 2003/0046384, hereinafter “Sirivara”).

Regarding Claim 1, Hartnett teaches A community live streaming system (see Hartnett Abstract, live video streaming system), comprising: a plurality of user terminals (see Hartnett Paragraph [0003], multiple participant devices); and a server providing a live streaming service (see Hartnett Paragraph [0072], The server device 102 can include one or more computing devices to implement the networking system 104 and/or the live video streaming system 106. In one or more implementations, the server device 102 can implement all or a portion of the networking system 104 and/or the live video streaming system 106, and Figure 1, server device 102), wherein the plurality of user terminals (see Hartnett Figure 1, participant devices) comprises: a host streamer terminal for creating a party room (see Hartnett Paragraph [0036], Participant devices can include a host device or another participant device that can control the public combined live video stream, and Paragraph [0110], As shown, FIG. 4 includes the digital preparation room 404.
Generally, a digital preparation room enables participant devices to privately converse and prepare before broadcasting to live to a more public audience. As also shown, the digital preparation room 404 includes the first participant device 412 (e.g., a host device), a second participant device 414, and a third participant device 416. For example, the host device (i.e., the first participant device 412) creates and joins the digital preparation room 404. The second participant device 414 then joins the digital preparation room 404 followed by the third participant device 416); and at least one participant terminal for participating in the party room (see Hartnett Paragraph [0110], As shown, FIG. 4 includes the digital preparation room 404. Generally, a digital preparation room enables participant devices to privately converse and prepare before broadcasting to live to a more public audience. As also shown, the digital preparation room 404 includes the first participant device 412 (e.g., a host device), a second participant device 414, and a third participant device 416. For example, the host device (i.e., the first participant device 412) creates and joins the digital preparation room 404. The second participant device 414 then joins the digital preparation room 404 followed by the third participant device 416), wherein the server (see Hartnett Paragraph [0054], live video streaming system resides at a server device) comprises: a candidate streamer list generation unit for providing a list of candidate streamers to the host streamer terminal (see Hartnett Paragraph [0004], the disclosed systems can generate a digital waiting room during the public combined live video stream where a pending participant device can prepare to join the public combined live video stream. 
For example, the disclosed systems can generate a digital waiting room that includes an additional video stream from a pending participant device, Paragraph [0034], Additionally, the live video streaming system can provide dynamic user interfaces to pending, current, and past participant devices. For example, for a pending participant device that is waiting in a digital waiting room before joining the public combined live video stream, the live video streaming system can provide a user interface to the pending participant device that includes a live video stream of the devices in the digital waiting room as well as includes the public combined live video stream. Additionally, the live video streaming system can provide a similar user interface to a participant device (e.g., a host device) currently participating in the public combined live video stream such that the participant device can view which pending participant devices are in the digital waiting room without having to leave the public combined live video stream. In this manner, the participant device can visually confirm that the pending participant devices in the digital waiting room are prepared and ready before the live video streaming system adds them to the public combined live video stream, Paragraph [0142], the live video streaming system 106 allows the host device 108 to remove participant devices using other approaches, such as from a drop-down list or from a live stream settings element, and Paragraph [0152], the live video streaming system 106 can provide additional information regarding a target digital room. 
For example, the live video streaming system 106 indicates the number and/or names of each participant device in occupied digital rooms), the list being based on a name of a participant of the at least one participant terminal (see Hartnett Paragraph [0142], the live video streaming system 106 allows the host device 108 to remove participant devices using other approaches, such as from a drop-down list or from a live stream settings element, and Paragraph [0152], the live video streaming system 106 can provide additional information regarding a target digital room. For example, the live video streaming system 106 indicates the number and/or names of each participant device in occupied digital rooms); and a service support unit for outputting camera images from the host streamer terminal and a selected terminal among the at least one participant terminal to predetermined areas on a live streaming screen and providing said images to the plurality of user terminals participating in the party room (see Hartnett Paragraph [0092], In addition, the live video streaming system 106 can arrange the received live video streams into an arrangement and dynamically update the layout based on activity metrics associated with the participant devices 110 (e.g., detecting when speakers change), as participant devices 110 enter and exit the public combined live video stream, and/or based on input provided by a participant device (e.g., the host device), Paragraph [0112], In many implementations, the live video streaming system 106 utilizes the digital preparation room 404 to generate the public combined live video stream 402. For example, based on detecting user input from a participant device within the digital preparation room 404, the live video streaming system 106 can transfer the live video streams of the participant devices in the digital preparation room 404 (e.g., the combined live video stream) to the public combined live video stream 402. 
As a result, the live video streaming system 106 can start broadcasting a combined live video stream that includes multiple participant devices, which are ready to participate, in an efficient and seamless manner, Paragraph [0134], To illustrate, the series of acts 500 includes an act 510 of the live video streaming system 106 receiving input to broadcast the public combined live video stream to the viewer devices 112. In one or more implementations, the live video streaming system 106 detects user input at the host device requesting that the digital preparation room be broadcast to the viewer devices 112, Paragraph [0137], In one or more implementations, the act 512 includes the live video streaming system 106 first broadcasting the live video stream of a host device (e.g., of the participant devices 110) to the viewer devices 112. Then, the live video streaming system 106 adds the remaining participant devices 110 to the live video stream. For example, the public combined live video stream initially includes the live video stream of the host device, who first appears to introduce the additional participant devices. Then, the live video streaming system 106 adds the remaining participant devices (e.g., one-by-one, as a group, or in multiple groups). For instance, the host device provides input to the live video streaming system 106 as to who and/or when to add each of the participant devices to the public combined live video stream, and Paragraph [0047], In one or more implementations, the live video streaming system can facilitate and provide uniform layouts to the viewer devices. For example, the live video streaming system can generate a uniform visual arrangement (e.g., layout) of the public combined live video stream where each live video stream is correctly synchronized in time. The live video streaming system can provide this uniform layout to the viewer devices. 
In this manner, each viewer device shares the same experience of the public combined live video stream. In some implementations, however, the live video streaming system generates one or more alternative visual arrangements, which are also correctly synchronized in time, and provides a viewer device with a requested visual arrangement. In any case, the live video streaming system generates and provides a handful of layouts of the public combined live video stream rather than each viewer device trying to generate a separate layout), wherein the selected terminal is selected as a fellow streamer by the host streamer terminal (see Hartnett Paragraph [0134], To illustrate, the series of acts 500 includes an act 510 of the live video streaming system 106 receiving input to broadcast the public combined live video stream to the viewer devices 112. In one or more implementations, the live video streaming system 106 detects user input at the host device requesting that the digital preparation room be broadcast to the viewer devices 112, Paragraph [0137], In one or more implementations, the act 512 includes the live video streaming system 106 first broadcasting the live video stream of a host device (e.g., of the participant devices 110) to the viewer devices 112. Then, the live video streaming system 106 adds the remaining participant devices 110 to the live video stream. For example, the public combined live video stream initially includes the live video stream of the host device, who first appears to introduce the additional participant devices. Then, the live video streaming system 106 adds the remaining participant devices (e.g., one-by-one, as a group, or in multiple groups). For instance, the host device provides input to the live video streaming system 106 as to who and/or when to add each of the participant devices to the public combined live video stream). 
Hartnett does not expressly teach a setting checking unit for checking on/off status of a camera and microphone of the at least one participant terminal upon receiving a request to enter the party room from the at least one participant terminal; a network quality measurement unit for accessing the network quality of the at least one participant terminal; the list being based on the on/off status of the camera and microphone and the network quality of the at least one participant terminal.

However, Anderson teaches a setting checking unit for checking on/off status of a camera and microphone of the at least one participant terminal upon receiving a request to enter the party room from the at least one participant terminal (see Anderson Paragraph [0398], FIG. 18 is an embodiment of the client connecting to the streaming server. In step 1801 the client makes a connection request with the streaming server passing it the following information: user ID, used as the unique identifier for clients; conference ID, the unique identifier for a conference; user name, used for display purposes in the user interface; facilitator; a variable designating if the client has facilitator privileges in both the client side program and the streaming server; record, used to determine if the conference has the ability to be recorded; hardware (hw) setup, used to determine the device configuration of the client (e.g. microphone, camera). Proceed to step 1802); the list being based on the on/off status of the camera and microphone of the at least one participant terminal (see Anderson Figure 5 and Paragraph [0181], The interface 500 also includes a participant panel 510 showing a list of the current conference participants, along participant type icons and participant status icons 509, and Paragraph [0184], In a preferred embodiment, type [icons] refers to the type of connection from the participant to the server: a) video (including audio and text, also known as full video, indicated by video camera icon), audio (including text, indicated by a music note and speaker icon)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a community live streaming system where a host creates a party room, the server provides a list of participant candidates to the host, and the host selects participant(s) to display alongside the host’s stream and broadcast to the room (as taught in Hartnett), with determining a participant’s device configuration, such as the use of a camera and microphone, upon request to join a stream, and generate a list that portrays the device configuration of participants (as taught in Anderson), the motivation being to display the capabilities and readiness of a participant to a host in order to reduce delays and confusion upon entering the streaming room (see Anderson Paragraph [0142]).
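As an editorial illustration of the claim 1 feature discussed above (a candidate list gated on each participant's camera/microphone status and ordered by network quality), the gist can be sketched in a few lines of Python. The data model and names below are hypothetical and appear in neither Hartnett nor Anderson:

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    camera_on: bool
    mic_on: bool
    network_quality: float  # hypothetical rating, 0.0 (worst) to 1.0 (best)

def candidate_streamers(participants):
    """Keep participants with camera and mic on, ordered by increasing
    network quality rating (the ordering claim 3 recites)."""
    eligible = [p for p in participants if p.camera_on and p.mic_on]
    return sorted(eligible, key=lambda p: p.network_quality)

room = [
    Participant("alice", True, True, 0.9),
    Participant("bob", True, False, 0.8),   # mic off: excluded from the list
    Participant("carol", True, True, 0.4),
]
print([p.name for p in candidate_streamers(room)])  # ['carol', 'alice']
```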
Hartnett in view of Anderson does not expressly teach a network quality measurement unit for accessing the network quality of the at least one participant terminal; the list being based on the network quality of the at least one participant terminal.

However, Sirivara teaches a network quality measurement unit for accessing the network quality of the at least one participant terminal (see Sirivara Paragraph [0020], The server 106 includes a Quality of Service (QOS) determinator 108 responsible for determining quality of service received by the client); the list being based on the network quality of the at least one participant terminal (see Sirivara Paragraph [0025], transmission statistics are logged 206, e.g., received and stored, by the sending server or other machine).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a community live streaming system where a host creates a party room, the server provides a list of participant candidates to the host, and the host selects participant(s) to display alongside the host’s stream and broadcast to the room (as taught in Hartnett), with determining a participant’s device configuration, such as the use of a camera and microphone, upon request to join a stream, and generate a list that portrays the device configuration of participants (as taught in Anderson), the motivation being to display the capabilities and readiness of a participant to a host in order to reduce delays and confusion upon entering the streaming room (see Anderson Paragraph [0142]).
It would have been further obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a community live streaming system where a host creates a party room, the server provides a list of participant candidates to the host that includes the candidate(s) device configuration, and the host selects participant(s) to display alongside the host’s stream and broadcast to the room (as taught in Hartnett in view of Anderson), with determining a participant’s network quality, and generate a list that portrays the network quality of participant devices (as taught in Sirivara), the motivation being to improve reception of video and audio output in a live stream and to ensure the live stream meets certain quality of service requirements (see Sirivara Paragraph [0028] and [0033]).

Regarding Claim 2, Hartnett in view of Anderson and Sirivara teaches The community live streaming system of claim 1, wherein the network quality measurement unit is further configured to determine a network quality rating based on bandwidth, latency, packet loss, and playback buffer latency of the at least one participant terminal (see Sirivara Paragraph [0018], the media player is also responsible for signaling transmission characteristics in accord with the RTCP protocol, however it will be appreciated that a separate hardware and/or software component may be responsible for the signaling. In one embodiment, transmission statistics minimally include an indication of packets not received by the client (e.g., lost packets), and network latency. In an alternate embodiment, transmission statistics also include an indication of average transmission bandwidth.
In a further embodiment, transmission statistics may include one or more of peak bandwidth, low bandwidth, high/low latency, jitter, client video-rendering frame rate, client audio and/or video hardware specifications, client central processing unit (CPU) specification, and client CPU usage, as such statistics may be used to enhance evaluation of the content received by a client), and wherein the service support unit is further configured to support differentiated video qualities depending on the network quality rating of the at least one participant terminal (see Hartnett Paragraph [0083], the live video stream is sent via a real-time messaging protocol (RTMP) or another protocol that prioritizes quality connections, Paragraph [0311], the live video streaming system 106 can enable a host device to establish eligibility requirements for participating in the public combined live video stream (e.g., a verified account, minimum device streaming quality, granted authorization to utilizing their live video stream, etc.), Paragraph [0054], the live video streaming system can provide several technical advantages and benefits over conventional systems. More particularly, in many implementations, the live video streaming system resides at a server device that provides increases in efficiency, flexibility, and accuracy. As one example of improved efficiency, the live video streaming system can easily scale as the number of live video streams increases and/or the quality of live video stream improves without becoming overburdened. 
Indeed, the live video streaming system can utilize a server device that provides additional hardware capabilities over most client devices, which enables the live video streaming system to efficiently join a large number of live video streams into a public combined live video stream, Paragraph [0056], the live video streaming system can dynamically adjust the video quality being provided to each participant device, including broadcasting the public combined live video stream to viewer devices at a higher resolution).

Regarding Claim 3, Hartnett in view of Anderson and Sirivara teaches The community live streaming system of claim 2, wherein the candidate streamer list generation unit is configured to provide a list of candidate streamers to the host streamer terminal comprising participant information (see Hartnett Paragraph [0004], the disclosed systems can generate a digital waiting room during the public combined live video stream where a pending participant device can prepare to join the public combined live video stream. For example, the disclosed systems can generate a digital waiting room that includes an additional video stream from a pending participant device, Paragraph [0034], Additionally, the live video streaming system can provide dynamic user interfaces to pending, current, and past participant devices. For example, for a pending participant device that is waiting in a digital waiting room before joining the public combined live video stream, the live video streaming system can provide a user interface to the pending participant device that includes a live video stream of the devices in the digital waiting room as well as includes the public combined live video stream.
Additionally, the live video streaming system can provide a similar user interface to a participant device (e.g., a host device) currently participating in the public combined live video stream such that the participant device can view which pending participant devices are in the digital waiting room without having to leave the public combined live video stream. In this manner, the participant device can visually confirm that the pending participant devices in the digital waiting room are prepared and ready before the live video streaming system adds them to the public combined live video stream, Paragraph [0142], the live video streaming system 106 allows the host device 108 to remove participant devices using other approaches, such as from a drop-down list or from a live stream settings element, and Paragraph [0152], the live video streaming system 106 can provide additional information regarding a target digital room. For example, the live video streaming system 106 indicates the number and/or names of each participant device in occupied digital rooms) corresponding to the at least one participant terminal with the camera and microphone turned on (see Anderson Figure 5 and Paragraph [0181], The interface 500 also includes a participant panel 510 showing a list of the current conference participants, along participant type icons and participant status icons 509, and Paragraph [0184], In a preferred embodiment, type [icons] refers to the type of connection from the participant to the server: a) video (including audio and text, also known as full video, indicated by video camera icon), audio (including text, indicated by a music note and speaker icon)) and arranged in order of increasing network quality rating (see Hartnett Paragraph [0083], the live video stream is sent via a real-time messaging protocol (RTMP) or another protocol that prioritizes quality connections, Paragraph [0311], the live video streaming system 106 can enable a host device to establish 
eligibility requirements for participating in the public combined live video stream (e.g., a verified account, minimum device streaming quality, granted authorization to utilizing their live video stream, etc.)).

Regarding Claims 11 - 13, they are rejected similarly as Claims 1 - 3, respectively. The method can be found in Hartnett (Abstract, method).

Claims 4 – 6 and 14 - 16 are rejected under 35 U.S.C. 103 as being unpatentable over Hartnett et al. (U.S. Pub. No. 2022/0070243, hereinafter “Hartnett”) in view of Anderson et al. (U.S. Pub. No. 2012/0182384, hereinafter “Anderson”), Sirivara et al. (U.S. Pub. No. 2003/0046384, hereinafter “Sirivara”) and Torstensen et al. (U.S. Pub. No. 2023/0325735, hereinafter “Torstensen”).

Regarding Claim 4, Hartnett in view of Anderson and Sirivara teaches The community live streaming system of claim 1, further comprising: a management unit configured to store and manage user information party room information, and participant activity history, the user information including personal and interest information (see Hartnett Paragraph [0379], As mentioned above, the live video streaming system 106 can operate within a networking system 104, which may be a social networking system in various implementations. In addition to the description given above, a social networking system may enable its users (such as persons or organizations) to interact with the system and with each other. The social networking system may, with input from a user, create and store in the social networking system a user profile associated with the user. The user profile may include demographic information, communication-channel information, and information on personal interests of the user. The social networking system may also, with input from a user, create and store a record of relationships of the user with other users of the social networking system, as well as provide services (e.g.
wall posts, photo-sharing, online calendars and event organization, messaging, games, or advertisements) to facilitate social interaction between or among users, Paragraph [0388], The networking system 104 may generate, store, receive, and send networking data, such as user-profile data, concept-profile data, graph information (e.g., social-graph information), or other suitable data related to the online network of users, Paragraph [0396], A user profile may include, for example, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, or location. Interest information may include interests related to one or more categories. Categories may be general or specific. For example, if a user “likes” an article about a brand of shoes the category may be the brand, or the general category of “shoes” or “clothing ” A connection store may be used for storing connection information about users. The connection information may indicate users who have similar or common work experience, group memberships, hobbies, educational history, or are in any way related or share common attributes. The connection information may also include user-defined connections between different users and content (both internal and external), Paragraph [0416], history of user’s actions).

Hartnett in view of Anderson and Sirivara does not expressly teach a recommendation unit configured to compare the user information of one of the user terminals with the party room information and user information of other user terminals including the host streamer terminal, respectively, and recommend party rooms and host streamers in order of taste similarity to said one of the user terminals.
However, Torstensen teaches a recommendation unit configured to compare the user information of one of the user terminals with the party room information and user information of other user terminals including the host streamer terminal, respectively, and recommend party rooms and host streamers in order of taste similarity to said one of the user terminals (see Torstensen Paragraph [0067], Some embodiments of group-determination logic 230 include logic for querying or processing user data for a potential group member to detect one or more characteristics for defining group membership. For instance, in an embodiment, group-determination logic 230 includes logic for performing comparison of user data to determine similarities among potential group members to determine that they are part of a group and/or specifying explicit or predefined user data to evaluate and determine group membership. For example, group-determination logic 230 may include a set of rules for comparing user-data feature values (which may be determined by features determiner 256), corresponding to a characteristic of a group or group members, for determining identical data feature types, and/or determining a level of similarity for feature values of similar feature types. For instance, some embodiments may utilize classification models such as statistical clustering (e.g., k-means or nearest neighbor) or a proximity of features to each other on a semantic knowledge graph, neural network, or other classification techniques to determine similarity, Paragraph [0097], an insight may be determined based on comparing information about a user with information about group members (using user data associated with the user and with the group or group members) to identify one or more similarities, and then providing the similar information as an insight). 
It would have been further obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a community live streaming system where a host creates a party room, the server provides a list of participant candidates to the host that includes the candidate(s) device configuration and network quality, and the host selects participant(s) to display alongside the host’s stream and broadcast to the room (as taught in Hartnett in view of Anderson and Sirivara), with comparing a participant’s information with party room data and other participants, to recommend relevant party rooms and streamers (as taught in Torstensen), the motivation being to improve engagement and discoverability by helping participants or viewers find and view information about a group that has shared interests (see Torstensen Paragraph [0001]).

Regarding Claim 5, Hartnett in view of Anderson, Sirivara and Torstensen teaches The community live streaming system of claim 4, further comprising: a similarity calculator configured to calculate similarity of a party room by comparing the party room information with the user information and activity history of said one of the user terminals (see Torstensen Paragraph [0067], Some embodiments of group-determination logic 230 include logic for querying or processing user data for a potential group member to detect one or more characteristics for defining group membership. For instance, in an embodiment, group-determination logic 230 includes logic for performing comparison of user data to determine similarities among potential group members to determine that they are part of a group and/or specifying explicit or predefined user data to evaluate and determine group membership.
For example, group-determination logic 230 may include a set of rules for comparing user-data feature values (which may be determined by features determiner 256), corresponding to a characteristic of a group or group members, for determining identical data feature types, and/or determining a level of similarity for feature values of similar feature types. For instance, some embodiments may utilize classification models such as statistical clustering (e.g., k-means or nearest neighbor) or a proximity of features to each other on a semantic knowledge graph, neural network, or other classification techniques to determine similarity, Paragraph [0097], an insight may be determined based on comparing information about a user with information about group members (using user data associated with the user and with the group or group members) to identify one or more similarities, and then providing the similar information as an insight, and Paragraph [0043], user data received via user-data collection component 210 may be obtained from a data source (such as data source 104(a) in FIG. 1, which may be a social networking site, a professional networking site, a corporate network, an organization intranet or file share, or other data source containing user or group data) or determined via one or more sensors (such as sensors 103a and 107 of FIG. 1), which may be on or associated with one or more user devices (such as user device 102a), servers (such as server 106), and/or other computing devices. As used herein, a sensor may include a function, routine, component, or combination thereof for sensing, detecting, or otherwise obtaining information such as user data from a data source 104a, and may be embodied as hardware, software, or both. 
By way of example and not limitation, user data may include data that is sensed, detected, or determined from one or more sensors (referred to herein as sensor data), such as location information of mobile device(s), properties or characteristics of the user device(s), user-activity information (for example: app usage; online activity; searches; voice data such as automatic speech recognition; activity logs; communications data, including calls, texts, chats, messages, and emails; document comments; website posts; other user data associated with communication events, including user history, session logs, application data, contacts data, calendar and schedule data, notification data, social-network data, ecommerce activity, user-account(s) data (which may include data from user preferences or settings associated with a personalization-related application, a personal assistant application or service, an online service or cloud-based account such as Microsoft 365, an entertainment or streaming media account, a purchasing club or services); global positioning system (GPS) data; other user device data (which may include device settings, profiles, network-related information, payment or credit card usage data, or purchase history data); other sensor data that may be sensed or otherwise detected by a sensor (or other detector) component(s), including data derived from a sensor component associated with the user (including location, motion, orientation, position, user-access, user-activity, network-access, user-device charging, or other data that is capable of being provided by one or more sensor component); data derived based on other data (for example, location data that can be derived from Wi-Fi, cellular network, or IP address data), and nearly any other source of data that may be sensed, detected, or determined as described herein), wherein the candidate streamer list generation unit is further configured to generate a list of candidate streamers (see Hartnett 
Paragraph [0004], the disclosed systems can generate a digital waiting room during the public combined live video stream where a pending participant device can prepare to join the public combined live video stream. For example, the disclosed systems can generate a digital waiting room that includes an additional video stream from a pending participant device, Paragraph [0034], Additionally, the live video streaming system can provide dynamic user interfaces to pending, current, and past participant devices. For example, for a pending participant device that is waiting in a digital waiting room before joining the public combined live video stream, the live video streaming system can provide a user interface to the pending participant device that includes a live video stream of the devices in the digital waiting room as well as includes the public combined live video stream. Additionally, the live video streaming system can provide a similar user interface to a participant device (e.g., a host device) currently participating in the public combined live video stream such that the participant device can view which pending participant devices are in the digital waiting room without having to leave the public combined live video stream. In this manner, the participant device can visually confirm that the pending participant devices in the digital waiting room are prepared and ready before the live video streaming system adds them to the public combined live video stream, Paragraph [0142], the live video streaming system 106 allows the host device 108 to remove participant devices using other approaches, such as from a drop-down list or from a live stream settings element, and Paragraph [0152], the live video streaming system 106 can provide additional information regarding a target digital room. 
For example, the live video streaming system 106 indicates the number and/or names of each participant device in occupied digital rooms) based on the on/off status of cameras and microphones on the at least one participant terminal (see Anderson Figure 5 and Paragraph [0181], The interface 500 also includes a participant panel 510 showing a list of the current conference participants, along participant type icons and participant status icons 509, and Paragraph [0184], In a preferred embodiment, type [icons] refers to the type of connection from the participant to the server: a) video (including audio and text, also known as full video, indicated by video camera icon), audio (including text, indicated by a music note and speaker icon)), network quality (see Sirivara Paragraph [0020], The server 106 includes a Quality of Service (QOS) determinator 108 responsible for determining quality of service received by the client, and Paragraph [0025], transmission statistics are logged 206, e.g., received and stored, by the sending server or other machine), and similarity to the party room (see Torstensen Paragraph [0067], Some embodiments of group-determination logic 230 include logic for querying or processing user data for a potential group member to detect one or more characteristics for defining group membership. For instance, in an embodiment, group-determination logic 230 includes logic for performing comparison of user data to determine similarities among potential group members to determine that they are part of a group and/or specifying explicit or predefined user data to evaluate and determine group membership. 
For example, group-determination logic 230 may include a set of rules for comparing user-data feature values (which may be determined by features determiner 256), corresponding to a characteristic of a group or group members, for determining identical data feature types, and/or determining a level of similarity for feature values of similar feature types, Paragraph [0097], an insight may be determined based on comparing information about a user with information about group members (using user data associated with the user and with the group or group members) to identify one or more similarities, and then providing the similar information as an insight). Regarding Claim 6, Hartnett in view of Anderson, Sirivara and Torstensen teaches The community live streaming system of claim 5, wherein the user information includes at least one of age, gender, country/region, and language, and categorical interest information, the party room information includes at least one of category, party room title, party room description, related tags, and capacity, and the participant activity history information includes at least one of number and frequency of party room participation, donation information, party evaluation score, number of fellow streamer requests, and number of fellow streamer selections (see Hartnett Paragraph [0379], As mentioned above, the live video streaming system 106 can operate within a networking system 104, which may be a social networking system in various implementations. In addition to the description given above, a social networking system may enable its users (such as persons or organizations) to interact with the system and with each other. The social networking system may, with input from a user, create and store in the social networking system a user profile associated with the user. The user profile may include demographic information, communication-channel information, and information on personal interests of the user. 
The social networking system may also, with input from a user, create and store a record of relationships of the user with other users of the social networking system, as well as provide services (e.g. wall posts, photo-sharing, online calendars and event organization, messaging, games, or advertisements) to facilitate social interaction between or among users, Paragraph [0388], The networking system 104 may generate, store, receive, and send networking data, such as user-profile data, concept-profile data, graph information (e.g., social-graph information), or other suitable data related to the online network of users, Paragraph [0396], A user profile may include, for example, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, or location. Interest information may include interests related to one or more categories. Categories may be general or specific. For example, if a user “likes” an article about a brand of shoes the category may be the brand, or the general category of “shoes” or “clothing.” A connection store may be used for storing connection information about users. The connection information may indicate users who have similar or common work experience, group memberships, hobbies, educational history, or are in any way related or share common attributes. The connection information may also include user-defined connections between different users and content (both internal and external), Paragraph [0416], history of user’s actions, and Figure 6A and Figure 6B, digital preparation room is the title of the party room). Regarding Claims 14 - 16, they are rejected similarly as Claims 4 - 6, respectively. The method can be found in Hartnett (Abstract, method). Claims 7 – 10 and 17 - 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hartnett et al. (U.S. Pub. No.
2022/0070243, hereinafter “Hartnett”) in view of Anderson et al. (U.S. Pub. No. 2012/0182384, hereinafter “Anderson”), Sirivara et al. (U.S. Pub. No. 2003/0046384, hereinafter “Sirivara”) and Powell et al. (U.S. Pub. No. 2023/0028265, hereinafter “Powell”). Regarding Claim 7, Hartnett in view of Anderson and Sirivara teaches The community live streaming system of claim 2, further comprising: a UI generation unit configured to provide a user interface (UI) to the user terminal when the party room is created, wherein the UI comprises a background area, multiple split screen areas nested within the background area, and multiple function icons (see Hartnett Figure 6A and 6B, in which the digital preparation room is created and multiple live streamers are connected, in which the screen is split into three areas for each video output, and there are function icons on the user interface (such as the side and bottom), and Figure 9B showing a background area in the digital waiting room, in which the video is nested within the background area), wherein the multiple split screen areas include multiple camera output areas of fellow streamers centered on the camera output area of the host streamer terminal (see Hartnett Figure 6A and 7A, a user interface of the host in which the digital preparation room and public live stream is created and multiple live streamers are connected, in which the screen is split into three areas for each video output in an arrangement centered around the middle of the interface), Hartnett in view of Anderson and Sirivara does not expressly teach each camera output area of the fellow streamers having a set seat number. However, Powell teaches each camera output area of the fellow streamers having a set seat number (see Powell Paragraph [0054], FIG. 8 illustrates a user interface providing for a selection of a preconfigured virtual seating chart in accordance with an example embodiment.
In some embodiments, a host selects that participants be assigned seats in a random order, or based on a number of participants and/or grouping preferences associated with each of the participants). It would have been further obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a community live streaming system where a host creates a party room, the server provides a list of participant candidates to the host that includes the candidate(s) device configuration and network quality, and the host selects participant(s) to display alongside the host’s stream and broadcast to the room (as taught in Hartnett in view of Anderson and Sirivara), with an output area of streamers having a set seat number (as taught in Powell), the motivation being to mimic a physical world experience and improve an ability of a stream host to control and manage the environment (see Powell Paragraph [0035]). Regarding Claim 8, Hartnett in view of Anderson, Sirivara and Powell teaches The system of claim 7, further comprising: a pre-screening screen, wherein when a camera output area representing a fellow streamer position is selected at the host streamer terminal, a list of candidate streamers including participant information is provided to the pre-screening screen and the pre-screening screen allows the host streamer to select a fellow streamer for the corresponding seat number of the fellow streamer position (see Hartnett Paragraph [0092], In addition, the live video streaming system 106 can arrange the received live video streams into an arrangement and dynamically update the layout based on activity metrics associated with the participant devices 110 (e.g., detecting when speakers change), as participant devices 110 enter and exit the public combined live video stream, and/or based on input provided by a participant device (e.g., the host device), Paragraph [0112], In many implementations, the live video streaming 
system 106 utilizes the digital preparation room 404 to generate the public combined live video stream 402. For example, based on detecting user input from a participant device within the digital preparation room 404, the live video streaming system 106 can transfer the live video streams of the participant devices in the digital preparation room 404 (e.g., the combined live video stream) to the public combined live video stream 402. As a result, the live video streaming system 106 can start broadcasting a combined live video stream that includes multiple participant devices, which are ready to participate, in an efficient and seamless manner, Paragraph [0134], To illustrate, the series of acts 500 includes an act 510 of the live video streaming system 106 receiving input to broadcast the public combined live video stream to the viewer devices 112. In one or more implementations, the live video streaming system 106 detects user input at the host device requesting that the digital preparation room be broadcast to the viewer devices 112, Paragraph [0137], In one or more implementations, the act 512 includes the live video streaming system 106 first broadcasting the live video stream of a host device (e.g., of the participant devices 110) to the viewer devices 112. Then, the live video streaming system 106 adds the remaining participant devices 110 to the live video stream. For example, the public combined live video stream initially includes the live video stream of the host device, who first appears to introduce the additional participant devices. Then, the live video streaming system 106 adds the remaining participant devices (e.g., one-by-one, as a group, or in multiple groups). 
For instance, the host device provides input to the live video streaming system 106 as to who and/or when to add each of the participant devices to the public combined live video stream, and Paragraph [0047], In one or more implementations, the live video streaming system can facilitate and provide uniform layouts to the viewer devices. For example, the live video streaming system can generate a uniform visual arrangement (e.g., layout) of the public combined live video stream where each live video stream is correctly synchronized in time. The live video streaming system can provide this uniform layout to the viewer devices. In this manner, each viewer device shares the same experience of the public combined live video stream. In some implementations, however, the live video streaming system generates one or more alternative visual arrangements, which are also correctly synchronized in time, and provides a viewer device with a requested visual arrangement. In any case, the live video streaming system generates and provides a handful of layouts of the public combined live video stream rather than each viewer device trying to generate a separate layout, in view of preconfigured virtual seating chart in Powell). 
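Powell's cited seat-assignment teaching (seats assigned in a random order, or based on the number of participants and grouping preferences) could be sketched as follows. This is a hypothetical illustration only; the function name, inputs, and the adjacent-seats rule for grouped participants are assumptions, not taken from Powell.

```python
import random

def assign_seats(participants, seat_count, grouping=None, rng=None):
    """Assign each participant a seat number (1..seat_count).

    With no grouping preference, the seat order is shuffled randomly;
    with a grouping key, participants sharing a group are sorted
    together so they receive adjacent seat numbers.
    """
    rng = rng or random.Random()
    if grouping:
        ordered = sorted(participants, key=grouping)  # stable: groups stay adjacent
    else:
        ordered = list(participants)
        rng.shuffle(ordered)
    return {p: seat for seat, p in enumerate(ordered[:seat_count], start=1)}

# Hypothetical participants tagged with a group suffix ("a" or "b")
print(assign_seats(["carol-b", "alice-a", "bob-a"], seat_count=3,
                   grouping=lambda p: p.split("-")[1]))
# → {'alice-a': 1, 'bob-a': 2, 'carol-b': 3}
```

A host-side selection flow like the one recited in Claims 8-10 would then map the chosen candidate streamer to the seat number returned here.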
Regarding Claim 9, Hartnett in view of Anderson, Sirivara and Powell teaches The system of claim 7, wherein the UI generation unit provides a pre-screening screen which is associated with the corresponding seat number for a fellow streamer position when a camera output area representing the fellow streamer position is selected at the host streamer terminal (see Hartnett Paragraph [0092], In addition, the live video streaming system 106 can arrange the received live video streams into an arrangement and dynamically update the layout based on activity metrics associated with the participant devices 110 (e.g., detecting when speakers change), as participant devices 110 enter and exit the public combined live video stream, and/or based on input provided by a participant device (e.g., the host device), Paragraph [0112], In many implementations, the live video streaming system 106 utilizes the digital preparation room 404 to generate the public combined live video stream 402. For example, based on detecting user input from a participant device within the digital preparation room 404, the live video streaming system 106 can transfer the live video streams of the participant devices in the digital preparation room 404 (e.g., the combined live video stream) to the public combined live video stream 402. As a result, the live video streaming system 106 can start broadcasting a combined live video stream that includes multiple participant devices, which are ready to participate, in an efficient and seamless manner, Paragraph [0134], To illustrate, the series of acts 500 includes an act 510 of the live video streaming system 106 receiving input to broadcast the public combined live video stream to the viewer devices 112. 
In one or more implementations, the live video streaming system 106 detects user input at the host device requesting that the digital preparation room be broadcast to the viewer devices 112, Paragraph [0137], In one or more implementations, the act 512 includes the live video streaming system 106 first broadcasting the live video stream of a host device (e.g., of the participant devices 110) to the viewer devices 112. Then, the live video streaming system 106 adds the remaining participant devices 110 to the live video stream. For example, the public combined live video stream initially includes the live video stream of the host device, who first appears to introduce the additional participant devices. Then, the live video streaming system 106 adds the remaining participant devices (e.g., one-by-one, as a group, or in multiple groups). For instance, the host device provides input to the live video streaming system 106 as to who and/or when to add each of the participant devices to the public combined live video stream, and Paragraph [0047], In one or more implementations, the live video streaming system can facilitate and provide uniform layouts to the viewer devices. For example, the live video streaming system can generate a uniform visual arrangement (e.g., layout) of the public combined live video stream where each live video stream is correctly synchronized in time. The live video streaming system can provide this uniform layout to the viewer devices. In this manner, each viewer device shares the same experience of the public combined live video stream. In some implementations, however, the live video streaming system generates one or more alternative visual arrangements, which are also correctly synchronized in time, and provides a viewer device with a requested visual arrangement. 
In any case, the live video streaming system generates and provides a handful of layouts of the public combined live video stream rather than each viewer device trying to generate a separate layout, in view of preconfigured virtual seating chart in Powell), wherein the prescreening screen includes: a candidate preview having participant information and camera output images of one of the participant terminals, the candidate preview being sorted by priority of the candidate streamers; and a main frame area providing the highest prioritized candidate streamer overlapped on the preview (see Hartnett Paragraph [0092], In addition, the live video streaming system 106 can arrange the received live video streams into an arrangement and dynamically update the layout based on activity metrics associated with the participant devices 110 (e.g., detecting when speakers change), as participant devices 110 enter and exit the public combined live video stream, and/or based on input provided by a participant device (e.g., the host device), Paragraph [0112], In many implementations, the live video streaming system 106 utilizes the digital preparation room 404 to generate the public combined live video stream 402. For example, based on detecting user input from a participant device within the digital preparation room 404, the live video streaming system 106 can transfer the live video streams of the participant devices in the digital preparation room 404 (e.g., the combined live video stream) to the public combined live video stream 402. As a result, the live video streaming system 106 can start broadcasting a combined live video stream that includes multiple participant devices, which are ready to participate, in an efficient and seamless manner, Paragraph [0134], To illustrate, the series of acts 500 includes an act 510 of the live video streaming system 106 receiving input to broadcast the public combined live video stream to the viewer devices 112. 
In one or more implementations, the live video streaming system 106 detects user input at the host device requesting that the digital preparation room be broadcast to the viewer devices 112, Paragraph [0137], In one or more implementations, the act 512 includes the live video streaming system 106 first broadcasting the live video stream of a host device (e.g., of the participant devices 110) to the viewer devices 112. Then, the live video streaming system 106 adds the remaining participant devices 110 to the live video stream. For example, the public combined live video stream initially includes the live video stream of the host device, who first appears to introduce the additional participant devices. Then, the live video streaming system 106 adds the remaining participant devices (e.g., one-by-one, as a group, or in multiple groups). For instance, the host device provides input to the live video streaming system 106 as to who and/or when to add each of the participant devices to the public combined live video stream, and Paragraph [0047], In one or more implementations, the live video streaming system can facilitate and provide uniform layouts to the viewer devices. For example, the live video streaming system can generate a uniform visual arrangement (e.g., layout) of the public combined live video stream where each live video stream is correctly synchronized in time. The live video streaming system can provide this uniform layout to the viewer devices. In this manner, each viewer device shares the same experience of the public combined live video stream. In some implementations, however, the live video streaming system generates one or more alternative visual arrangements, which are also correctly synchronized in time, and provides a viewer device with a requested visual arrangement. 
In any case, the live video streaming system generates and provides a handful of layouts of the public combined live video stream rather than each viewer device trying to generate a separate layout, Paragraph [0253], for each activity metric, the live video streaming system 106 can numerically rank, order, prioritize, arrange, and/or score the viewer devices 112 and Figures 6A, 6B, 7A, 7B, 7C, and Figure 9A, in which videos can be arranged to have larger areas, and host may see the video output of a user in the digital waiting room as a preview that overlaps that UI of the host in the public live stream). Regarding Claim 10, Hartnett in view of Anderson, Sirivara and Powell teaches The system of claim 9, wherein, when the host streamer terminal selects one of the candidate streamers on the main frame area, the service support unit outputs, at the split screen area of the corresponding seat number, the camera output images of the corresponding participant terminal associated with said one of the candidate streamers (see Hartnett Paragraph [0092], In addition, the live video streaming system 106 can arrange the received live video streams into an arrangement and dynamically update the layout based on activity metrics associated with the participant devices 110 (e.g., detecting when speakers change), as participant devices 110 enter and exit the public combined live video stream, and/or based on input provided by a participant device (e.g., the host device), Paragraph [0112], In many implementations, the live video streaming system 106 utilizes the digital preparation room 404 to generate the public combined live video stream 402. For example, based on detecting user input from a participant device within the digital preparation room 404, the live video streaming system 106 can transfer the live video streams of the participant devices in the digital preparation room 404 (e.g., the combined live video stream) to the public combined live video stream 402. 
As a result, the live video streaming system 106 can start broadcasting a combined live video stream that includes multiple participant devices, which are ready to participate, in an efficient and seamless manner, Paragraph [0134], To illustrate, the series of acts 500 includes an act 510 of the live video streaming system 106 receiving input to broadcast the public combined live video stream to the viewer devices 112. In one or more implementations, the live video streaming system 106 detects user input at the host device requesting that the digital preparation room be broadcast to the viewer devices 112, Paragraph [0137], In one or more implementations, the act 512 includes the live video streaming system 106 first broadcasting the live video stream of a host device (e.g., of the participant devices 110) to the viewer devices 112. Then, the live video streaming system 106 adds the remaining participant devices 110 to the live video stream. For example, the public combined live video stream initially includes the live video stream of the host device, who first appears to introduce the additional participant devices. Then, the live video streaming system 106 adds the remaining participant devices (e.g., one-by-one, as a group, or in multiple groups). For instance, the host device provides input to the live video streaming system 106 as to who and/or when to add each of the participant devices to the public combined live video stream, and Paragraph [0047], In one or more implementations, the live video streaming system can facilitate and provide uniform layouts to the viewer devices. For example, the live video streaming system can generate a uniform visual arrangement (e.g., layout) of the public combined live video stream where each live video stream is correctly synchronized in time. The live video streaming system can provide this uniform layout to the viewer devices. 
In this manner, each viewer device shares the same experience of the public combined live video stream. In some implementations, however, the live video streaming system generates one or more alternative visual arrangements, which are also correctly synchronized in time, and provides a viewer device with a requested visual arrangement. In any case, the live video streaming system generates and provides a handful of layouts of the public combined live video stream rather than each viewer device trying to generate a separate layout, in view of preconfigured virtual seating chart in Powell). Regarding Claims 17 - 20, they are rejected similarly as Claims 7 - 10, respectively. The method can be found in Hartnett (Abstract, method). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to PTO-892, Notice of References Cited for a listing of analogous art. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARISSA A JONES whose telephone number is (703)756-1677. The examiner can normally be reached Telework M-F 6:30 AM - 4:00 PM CT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen can be reached at 5712727503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CARISSA A JONES/Examiner, Art Unit 2691 /DUC NGUYEN/Supervisory Patent Examiner, Art Unit 2691

Prosecution Timeline

May 16, 2024
Application Filed
Mar 17, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598267
IMAGE CAPTURE APPARATUS AND CONTROL METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12598354
INFORMATION PROCESSING SERVER, RECORD CREATION SYSTEM, DISPLAY CONTROL METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12593004
DISPLAY METHOD, DISPLAY SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORING PROGRAM
2y 5m to grant Granted Mar 31, 2026
Patent 12556468
QUALITY TESTING OF COMMUNICATIONS FOR CONFERENCE CALL ENDPOINTS
2y 5m to grant Granted Feb 17, 2026
Patent 12556655
Efficient Detection of Co-Located Participant Devices in Teleconferencing Sessions
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+25.0%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
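For reference, the headline figures can be reproduced from the stated basis: the 83% grant probability is the career allow rate of 20 granted out of 24 resolved cases. One plausible reading of the 99% with-interview figure is the base rate plus the 25-point interview lift, capped below 100%; the cap value is an assumption chosen to match the displayed number, not a documented formula.

```python
def career_allow_rate(granted, resolved):
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = career_allow_rate(20, 24)   # 20 granted of 24 resolved cases
print(round(base))                  # → 83

# Assumed model: interview lift added in percentage points, capped at 99%.
with_interview = min(base + 25.0, 99.0)
print(round(with_interview))        # → 99
```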
