DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 5-8, 11, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 9-10, and 12-19 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. (hereinafter Choi), US Patent Publication 20200092562 A1 (publication date December 2013), in view of Good et al. (hereinafter Good), US Patent Publication 20140376623 A1 (publication date December 2014).
As per claims 1, 14, and 16, Choi discloses substantial features of the claimed invention, such as a method for processing a media stream, performed by a computer device, the method comprising:
determining a cloud application and an interaction room (Choi: e.g., a ‘Viewing Engagement’ session {application} allowing a number of connecting Clients 150A–150N to view ‘broadcasted’ media content {i.e., ‘video content’ of an event / show such as a sporting event, award show, news show, game show, television shows, or a ‘live’ broadcast, etc.}) [Abstract, 0029; Fig. 1] created in the cloud application (Choi: e.g., Cloud platform) [0084], and obtaining, during running of the cloud application, media-stream processing capability information of terminals joining the interaction room for interaction (Choi: e.g., Clients 150A, 150B, 150C…150N connected to System 100 / cloud platform request content / video stream according to a particular / selected content ‘profile’ {encoding} from among a plurality of ‘available profiles’ of a manifest provided to the clients… For example, Client 150A may select the ‘second profile’ {i.e., ‘resolution of 720p’ (1280×720) and ‘bitrate of about 0.9-3 Mbps’}, read the manifest for the second profile and request from the Encoder 140A the corresponding segments for viewing at the client 150A) [0039; Fig. 1] (e.g., In one aspect, the ‘profile’ (e.g., resolution and/or bitrate) may be selected solely by the Client 150A-N. In another aspect, if system 100 conditions change, such as the ‘Client's 150A-N capabilities’, the ‘load on the system 100’, and/or the ‘available network bandwidth’, the Client 150A-N may switch to a ‘higher or lower profile’ as required) [0043; Fig. 1] (e.g., By way of non-limiting example, the ‘Viewer analytics’ 155 provided by each Client 150A-N may include at least one of a ‘session start / end time’, ‘client type’, ‘operating system’, a ‘network connection type’, ‘geographic location’, ‘available bandwidth’, ‘network protocol’, ‘screen size’, ‘device type’, ‘display capabilities’, and/or ‘CODEC capabilities’) [0044; Fig. 1];
performing adaptive encoding on media data to be delivered in the interaction room based on the media-stream processing capability information, to obtain at least one type of media stream for the terminals in the interaction room (Choi: e.g., For example, as discussed further below, the Encoder control module 130 may utilize data representing viewership demographics 165 {i.e., viewer count, geography, paid service tier, device type} received from the viewer data module 160 to accomplish ‘adaptive encoding’ based on detected or real-time changes in viewership demographics) [0029; Fig. 1];
determining a media stream to be delivered matching a subset of the terminals in the interaction room (Choi: e.g., For example, referring to FIG. 1, the Client 150A may request content or a video stream from the Encoder 140A. In response to the request for content, the Encoder 140A may provide to the Client 150A the encoded video data 145A that includes a ‘manifest’ identifying the ‘profiles’ available to the client 150A {i.e., a ‘first profile’ having a resolution of 1080p (1920×1080) and bitrate of about 2-6 Mbps, a ‘second profile’ having a resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps, a ‘third profile’ having a resolution of 576p (1024×576) and bitrate of about 0.6-2 Mbps, a ‘fourth profile’ having a resolution of 480p (848×480) and bitrate of about 0.4-1.5 Mbps, a ‘fifth profile’ having a resolution of 432p (768×432) and bitrate of about 0.3-1.3 Mbps, a ‘sixth profile’ having a resolution of 360p (640×360) and bitrate of about 0.2-1 Mbps, and a ‘seventh profile’ having a resolution of 240p (424×240) and bitrate of about 0.1-0.7 Mbps}. The Client 150A may ‘select’ the ‘second profile’ (e.g., resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps), read the manifest for the second profile and request from the Encoder 140A the corresponding segments for viewing at the Client 150A…. Continuing with the example shown in FIG. 1, the Client 150B may request content or a video stream from the encoder 140A. In response to the request for content, the encoder 140A may provide to the client 150B the encoded video data 145A that includes the ‘manifest’ identifying the ‘profiles available’ to the client 150B. The Client 150B may ‘select’ the first profile (e.g., resolution of 1080p (1920×1080) and bitrate of about 2-6 Mbps), read the manifest for the first profile and request from the encoder 140A the corresponding segments for ‘viewing’ at the client 150B {processing / decoding, etc.}) [0039-0042; Fig. 1]; and
delivering the media stream to the subset of the terminals in the interaction room (Choi: e.g., expressly discloses in one aspect that Video data 135N provided to existing Encoders may be redirected to the newly instantiated Encoder(s) 140N to provide encoded Video data 145N {media stream} to an increasing ‘subset of clients 150N’) [0046; Fig. 1].
While Choi discloses substantial features of the invention as noted above, Choi does not expressly disclose the additional recited feature of the method further comprising determining a cloud application and an ‘interaction room’.
However, in a related endeavor, Good particularly discloses the additional recited feature of the method further comprising determining a cloud application and an ‘interaction room’ (Good: e.g., The system 200 of FIG. 2 may thus enable ‘distributed encoding of adaptive streaming renditions’, including dynamic generation of an ‘adaptive streaming manifest’ and ‘simultaneous streaming to multiple playback devices 270’ and/or servers 280. In one embodiment, the distributed encoding techniques described with reference to FIGS. 1-2 may be ‘cloud-hosted’ and may be provided to customers/subscribers (e.g., video content providers) for a fee {i.e., a per-hour or per-byte fee}) [0034; Fig. 2] (e.g., Figs. 1-5 illustrate distributed encoding of ‘live’ streams….For example, a ‘live’ stream may be used to transmit audio and/or video content corresponding to an ‘event’ as the event is being captured {i.e., in real-time or near-real time}. Examples of such ‘events’ may include, but are not limited to, in-progress ‘sporting events’, ‘musical performances’, ‘video-conferences’ {interaction room}, and webcam feeds, etc.) [0050].
It would thus have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify and/or combine Choi’s invention with the above-said additional feature, as expressly disclosed by Good, for the motivation of providing methods and systems of configuring and performing distributed encoding of a video stream, in particular for streaming live content {i.e., sporting events, video conferences, webcam feeds, etc.} and/or video-on-demand [VOD] content {i.e., TV shows, movies, etc.} to client / personal devices {i.e., computers, mobile phones, and internet-enabled televisions} [Good: Abstract, 0001; Fig. 1].
Claim 14 recites substantially the same limitations / features as claim 1, and is accordingly rejected on the same basis.
Claim 16 recites substantially the same limitations / features as claim 1, is distinguishable only by its statutory category (non-transitory CRSM), and is accordingly rejected on the same basis.
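For illustration of the profile-selection mechanism relied upon above for claims 1, 14, and 16 (Choi [0039], [0043]), the following minimal sketch is the examiner's own hypothetical example; the identifiers and thresholds are assumptions made for illustration and do not reproduce any code from Choi or Good.

    # Hypothetical sketch (examiner's own): a client selecting an encoding "profile"
    # from a manifest based on its display capability and available bandwidth,
    # in the manner Choi describes at [0039] and [0043].
    from dataclasses import dataclass

    @dataclass
    class Profile:
        name: str
        height: int        # vertical resolution, e.g., 720 for 720p
        max_mbps: float    # approximate upper bitrate of the profile

    # Profiles paraphrased from Choi's example manifest [0039].
    MANIFEST = [
        Profile("first", 1080, 6.0), Profile("second", 720, 3.0),
        Profile("third", 576, 2.0), Profile("fourth", 480, 1.5),
        Profile("fifth", 432, 1.3), Profile("sixth", 360, 1.0),
        Profile("seventh", 240, 0.7),
    ]

    def select_profile(available_mbps: float, max_display_height: int) -> Profile:
        """Pick the highest profile the device can decode and the network can sustain."""
        candidates = [p for p in MANIFEST
                      if p.height <= max_display_height and p.max_mbps <= available_mbps]
        # Fall back to the lowest profile if nothing fits.
        return max(candidates, key=lambda p: p.height) if candidates else MANIFEST[-1]

    # A 720p-capable client with ~3 Mbps available selects the "second" profile.
    print(select_profile(available_mbps=3.0, max_display_height=720).name)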
As per claims 2 and 17, Choi discloses the method wherein the performing adaptive encoding on media data to be delivered in the interaction room based on the media-stream processing capability information, to obtain at least one type of media stream for the terminals in the interaction room comprises: determining at least one type of media-stream encoding condition for the terminals in the interaction room based on the media-stream processing capability information; obtaining the media data in the interaction room (Choi: e.g., In one aspect, the profile (e.g., resolution and/or bitrate) may be selected solely by the client 150A-N. In another aspect, if system 100 ‘conditions’ change, such as the client's 150A-N ‘capabilities’, the ‘load’ on the system 100, and/or the ‘available network bandwidth’, the client 150A-N may switch to a higher or lower ‘profile’ as required. The clients 150A-N may base selection of a particular profile on various parameters and/or observations, including the current (observed/available) bandwidth and the amount of data currently residing in a client buffer. Throughout the duration of a given viewing experience, the client 150A-N may upshift or downshift (e.g., switch to a profile having a higher or lower bitrate) or stay at the same bitrate based on the available bandwidth and buffer conditions, among other factors) [0043]; and performing adaptive encoding on the media data based on the at least one type of media-stream encoding condition, to obtain the at least one type of media stream, wherein the media-stream parameter of the at least one type of media stream satisfies the media-stream encoding condition (Choi: e.g., For example, as discussed further below, the Encoder control module 130 may utilize data representing viewership demographics 165 {i.e., viewer count, geography, paid service tier, device type} received from the viewer data module 160 to accomplish ‘adaptive encoding’ based on detected or real-time changes in viewership demographics) [0029; Fig. 1].
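For claims 2 and 17, the upshift / downshift behavior cited from Choi [0043] may be illustrated by the following hypothetical sketch (the examiner's own; the two-second and eight-second buffer thresholds are assumptions, not values taken from the reference).

    # Hypothetical sketch: switching to a higher or lower profile based on observed
    # bandwidth and client buffer occupancy, as Choi describes at [0043].
    def next_profile_index(current: int, observed_mbps: float, buffer_seconds: float,
                           profile_max_mbps: list) -> int:
        """Return the index of the profile to request next (index 0 = highest quality)."""
        if buffer_seconds < 2.0 or observed_mbps < profile_max_mbps[current]:
            # Buffer running dry or bandwidth below the current profile: downshift.
            return min(current + 1, len(profile_max_mbps) - 1)
        if current > 0 and observed_mbps >= profile_max_mbps[current - 1] and buffer_seconds > 8.0:
            # Comfortable bandwidth and buffer headroom: upshift one step.
            return current - 1
        return current  # otherwise stay at the same bitrate

    # Upper bitrates (Mbps) of the seven profiles in Choi's example manifest [0039].
    ceilings = [6.0, 3.0, 2.0, 1.5, 1.3, 1.0, 0.7]
    print(next_profile_index(current=1, observed_mbps=6.5, buffer_seconds=10.0,
                             profile_max_mbps=ceilings))  # upshifts to index 0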
As per claims 3 and 18, Choi discloses the method wherein the media-stream processing capability information comprises network resource information and device decoding information (Choi: e.g., the viewer analytics 155 provided by each client 150A-N may include at least one of a session start time, session end time, client type, operating system, a network connection type, geographic location, ‘available bandwidth’, network protocol, screen size, device type, ‘display capabilities’, and/or ‘codec capabilities’) [0044], and the determining at least one type of media-stream encoding condition for the terminals in the interaction room based on the media-stream processing capability information comprises:
determining a bit rate based on the network resource information; determining an encoding format (Choi: e.g., encoding / compression ‘format’ {i.e., H.265, H.264, MPEG-2, MPEG-4, etc.}) [0026], a frame rate (Choi: e.g., ‘2-10 seconds’ of plurality of video ‘frames’ requested by the client) [0026], and a resolution based on the device decoding information (Choi: e.g., ‘resolution’ of 240, 360, 480, 720, 1080) [0026]; and determining the at least one type of media-stream encoding condition for the terminals in the interaction room based on the bit rate, the encoding format, the frame rate, and the resolution (Choi: e.g., For example, referring to FIG. 1, the Client 150A may request content or a video stream from the Encoder 140A. In response to the request for content, the Encoder 140A may provide to the Client 150A the encoded video data 145A that includes a ‘manifest’ identifying the ‘profiles’ available to the client 150A {i.e., a ‘first profile’ having a ‘resolution’ of 1080p (1920×1080) and ‘bitrate’ of about 2-6 Mbps, a ‘second profile’ having a resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps, a ‘third profile’ having a resolution of 576p (1024×576) and bitrate of about 0.6-2 Mbps, a ‘fourth profile’ having a resolution of 480p (848×480) and bitrate of about 0.4-1.5 Mbps, a ‘fifth profile’ having a resolution of 432p (768×432) and bitrate of about 0.3-1.3 Mbps, a ‘sixth profile’ having a resolution of 360p (640×360) and bitrate of about 0.2-1 Mbps, and a ‘seventh profile’ having a resolution of 240p (424×240) and bitrate of about 0.1-0.7 Mbps}. The Client 150A may ‘select’ the ‘second profile’ (e.g., resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps), read the manifest for the second profile and request from the Encoder 140A the corresponding segments for viewing at the Client 150A…. Continuing with the example shown in FIG. 1, the Client 150B may request content or a video stream from the encoder 140A. In response to the request for content, the encoder 140A may provide to the client 150B the encoded video data 145A that includes the ‘manifest’ identifying the ‘profiles available’ to the client 150B. The Client 150B may ‘select’ the first profile (e.g., resolution of 1080p (1920×1080) and bitrate of about 2-6 Mbps), read the manifest for the first profile and request from the encoder 140A the corresponding segments for ‘viewing’ at the client 150B {processing / decoding, etc.}) [0039-0042; Fig. 1].
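As a further illustration of claims 3 and 18, deriving a single media-stream encoding condition from network resource information and device decoding information may be sketched as follows; this is a hypothetical example by the examiner, and the 0.8 bandwidth headroom factor and field names are assumptions, not values taken from Choi.

    # Hypothetical sketch: an encoding condition (format, bit rate, frame rate,
    # resolution) derived from network resource and device decoding information.
    def encoding_condition(network: dict, device: dict) -> dict:
        # Bit rate follows the detected available bandwidth, with modest headroom.
        bit_rate_mbps = round(0.8 * network["available_mbps"], 2)
        # Format, frame rate, and resolution are capped by what the device can decode.
        fmt = "H.265" if "H.265" in device["codecs"] else "H.264"
        frame_rate = min(60, device["max_fps"])
        resolution = min(1080, device["max_height"])
        return {"format": fmt, "bit_rate_mbps": bit_rate_mbps,
                "frame_rate": frame_rate, "resolution": resolution}

    print(encoding_condition({"available_mbps": 4.0},
                             {"codecs": ["H.264"], "max_fps": 30, "max_height": 720}))
    # -> {'format': 'H.264', 'bit_rate_mbps': 3.2, 'frame_rate': 30, 'resolution': 720}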
As per claims 4 and 19, Choi discloses the method wherein the media-stream processing capability information comprises network resource information and device decoding information (Choi: e.g., the viewer analytics 155 provided by each client 150A-N may include at least one of a session start time, session end time, client type, operating system, a network connection type, geographic location, ‘available bandwidth’, network protocol, screen size, device type, ‘display capabilities’, and/or ‘codec capabilities’) [0044], and the at least one type of media-stream encoding condition comprises at least one of an encoding format, a bit rate, a frame rate, or a resolution (Choi: e.g., For example, referring to FIG. 1, the Client 150A may request content or a video stream from the Encoder 140A. In response to the request for content, the Encoder 140A may provide to the Client 150A the encoded video data 145A that includes a ‘manifest’ identifying the ‘profiles’ available to the client 150A {i.e., a ‘first profile’ having a ‘resolution’ of 1080p (1920×1080) and ‘bitrate’ of about 2-6 Mbps, a ‘second profile’ having a resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps, a ‘third profile’ having a resolution of 576p (1024×576) and bitrate of about 0.6-2 Mbps, a ‘fourth profile’ having a resolution of 480p (848×480) and bitrate of about 0.4-1.5 Mbps, a ‘fifth profile’ having a resolution of 432p (768×432) and bitrate of about 0.3-1.3 Mbps, a ‘sixth profile’ having a resolution of 360p (640×360) and bitrate of about 0.2-1 Mbps, and a ‘seventh profile’ having a resolution of 240p (424×240) and bitrate of about 0.1-0.7 Mbps}. The Client 150A may ‘select’ the ‘second profile’ (e.g., resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps), read the manifest for the second profile and request from the Encoder 140A the corresponding segments for viewing at the Client 150A…. Continuing with the example shown in FIG. 1, the Client 150B may request content or a video stream from the encoder 140A. In response to the request for content, the encoder 140A may provide to the client 150B the encoded video data 145A that includes the ‘manifest’ identifying the ‘profiles available’ to the client 150B. The Client 150B may ‘select’ the first profile (e.g., resolution of 1080p (1920×1080) and bitrate of about 2-6 Mbps), read the manifest for the first profile and request from the encoder 140A the corresponding segments for ‘viewing’ at the client 150B {processing / decoding, etc.}) [0039-0042; Fig. 1].
As per claim 9, Choi discloses the method further comprising: when the terminal joining the interaction room for interaction triggers an update, obtaining information about a media-stream processing capability of an updated terminal; and when the information about the media-stream processing capability of the updated terminal satisfies a media-stream update condition, updating the at least one type of media stream based on the information about the media-stream processing capability of the updated terminal (Choi: e.g., expressly discloses / illustrates in one aspect wherein a Client 150B, for example, may be additionally connected to the system and the Client 150B ‘requests’ content or a video stream from the encoder 140A. In response to the ‘request’ for content, the Encoder 140A may provide to the Client 150B the encoded video data 145A that includes the manifest identifying the ‘profiles’ available to the Client 150B. The Client 150B may ‘select’ the ‘first profile’ {i.e., ‘resolution’ of 1080p (1920×1080) and ‘bitrate’ of about 2-6 Mbps}, read the manifest for the first profile and request from the Encoder 140A the corresponding segments for viewing at the Client 150B) [0041].
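The update behavior mapped for claim 9 may be illustrated by the following hypothetical sketch (the examiner's own; representing a stream as a (resolution, codec) tuple is an assumption made purely for illustration): a new or changed terminal capability is re-read, and the set of encoded streams is extended only when no existing stream already satisfies it.

    # Hypothetical sketch: updating the set of encoded streams when a terminal
    # joining the interaction room reports a new or changed capability.
    def update_streams(streams: set, updated_capability: dict) -> set:
        wanted = (updated_capability["max_height"], updated_capability["codec"])
        if wanted not in streams:            # the media-stream update condition
            streams = streams | {wanted}     # add a stream matching the new capability
        return streams

    streams = {(720, "H.264"), (480, "H.264")}
    print(update_streams(streams, {"max_height": 1080, "codec": "H.264"}))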
As per claim 10, Choi discloses the method wherein the performing adaptive encoding on media data in the interaction room based on the media-stream processing capability information, to obtain at least one type of media stream for the terminals in the interaction room comprises: performing adaptive encoding on the media data in the interaction room based on the media-stream processing capability information and information about an encoding processing capability of the computer device (Choi: e.g., For example, as discussed further below, the Encoder control module 130 may utilize data representing viewership demographics 165 {i.e., viewer count, geography, paid service tier, device type} received from the viewer data module 160 to accomplish ‘adaptive encoding’ based on detected or real-time changes in viewership demographics) [0029; Fig. 1], to obtain the at least one type of media stream for the terminals in the interaction room, wherein an entirety of the media-stream parameter of the at least one type of media stream matches the information about the encoding processing capability (Choi: e.g., For example, referring to FIG. 1, the Client 150A may request content or a video stream from the Encoder 140A. In response to the request for content, the Encoder 140A may provide to the Client 150A the encoded video data 145A that includes a ‘manifest’ identifying the ‘profiles’ available to the client 150A {i.e., a ‘first profile’ having a ‘resolution’ of 1080p (1920×1080) and ‘bitrate’ of about 2-6 Mbps, a ‘second profile’ having a resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps, a ‘third profile’ having a resolution of 576p (1024×576) and bitrate of about 0.6-2 Mbps, a ‘fourth profile’ having a resolution of 480p (848×480) and bitrate of about 0.4-1.5 Mbps, a ‘fifth profile’ having a resolution of 432p (768×432) and bitrate of about 0.3-1.3 Mbps, a ‘sixth profile’ having a resolution of 360p (640×360) and bitrate of about 0.2-1 Mbps, and a ‘seventh profile’ having a resolution of 240p (424×240) and bitrate of about 0.1-0.7 Mbps}. The Client 150A may ‘select’ the ‘second profile’ (e.g., resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps), read the manifest for the second profile and request from the Encoder 140A the corresponding segments for viewing at the Client 150A…. Continuing with the example shown in FIG. 1, the Client 150B may request content or a video stream from the encoder 140A. In response to the request for content, the encoder 140A may provide to the client 150B the encoded video data 145A that includes the ‘manifest’ identifying the ‘profiles available’ to the client 150B. The Client 150B may ‘select’ the first profile (e.g., resolution of 1080p (1920×1080) and bitrate of about 2-6 Mbps), read the manifest for the first profile and request from the encoder 140A the corresponding segments for ‘viewing’ at the client 150B {processing / decoding, etc.}) [0039-0042; Fig. 1].
As per claim 12, Choi discloses the method wherein the obtaining, during running of the cloud application, media-stream processing capability information of terminals joining the interaction room for interaction comprises: in response to detecting that the terminals join, via a node server, the interaction room for interaction, performing network resource detection on the terminals, to obtain the network resource information; and obtaining the device decoding information of the terminals (Choi: e.g., the viewer analytics 155 provided by each client 150A-N may include at least one of a session start time, session end time, client type, operating system, a network connection type, geographic location, ‘available bandwidth’, network protocol, screen size, device type, ‘display capabilities’, and/or ‘codec capabilities’) [0044]; and obtaining the media-stream processing capability information of the terminals based on the network resource information and the device decoding information (Choi: e.g., expressly discloses / illustrates in one aspect wherein a Client 150B, for example, may be additionally connected to the system and the Client 150B ‘requests’ content or a video stream from the encoder 140A. In response to the ‘request’ for content, the Encoder 140A may provide to the Client 150B the encoded video data 145A that includes the manifest identifying the ‘profiles’ available to the Client 150B. The Client 150B may ‘select’ the ‘first profile’ {i.e., ‘resolution’ of 1080p (1920×1080) and ‘bitrate’ of about 2-6 Mbps}, read the manifest for the first profile and request from the Encoder 140A the corresponding segments for viewing at the Client 150B) [0041] (e.g., In one aspect, the ‘profile’ (e.g., resolution and/or bitrate) may be selected solely by the Client 150A-N. In another aspect, if system 100 conditions change, such as the ‘Client's 150A-N capabilities’, the ‘load on the system 100’, and/or the ‘available network bandwidth’, the Client 150A-N may switch to a ‘higher or lower profile’ as required) [0043; Fig. 1].
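For claim 12, assembling the media-stream processing capability information of a joining terminal from detected network resource information and reported device decoding information (cf. the viewer analytics Choi lists at [0044]) may be sketched as follows; this is a hypothetical example by the examiner, and the field names are assumptions rather than terms from the reference.

    # Hypothetical sketch: combining detected network resource information with the
    # device decoding information reported by a terminal that joins the room.
    def capability_info(detected_bandwidth_mbps: float, reported: dict) -> dict:
        network_resource = {"available_mbps": detected_bandwidth_mbps,
                            "connection_type": reported.get("network_connection_type", "unknown")}
        device_decoding = {"codecs": reported.get("codec_capabilities", []),
                           "screen_size": reported.get("screen_size"),
                           "display": reported.get("display_capabilities")}
        return {"network": network_resource, "device": device_decoding}

    print(capability_info(2.5, {"codec_capabilities": ["H.264"], "screen_size": "1280x720",
                                "network_connection_type": "wifi",
                                "display_capabilities": "720p"}))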
As per claim 13, Choi discloses the method wherein the determining a media stream matching a subset of the terminals in the interaction room comprises at least one of the following: determining the media stream from the at least one type of media stream based on a media-stream selection request transmitted by the subset of the terminals in the interaction room, or determining, from the at least one type of media stream, the media stream whose media-stream parameter matches the media-stream processing capability information of the subset of the terminals in the interaction room (Choi: e.g., expressly discloses / illustrates in one aspect wherein a Client 150B, for example, may be additionally connected to the system and the Client 150B ‘requests’ content or a video stream from the encoder 140A. In response to the ‘request’ for content, the Encoder 140A may provide to the Client 150B the encoded video data 145A that includes the manifest identifying the ‘profiles’ available to the Client 150B. The Client 150B may ‘select’ the ‘first profile’ {i.e., ‘resolution’ of 1080p (1920×1080) and ‘bitrate’ of about 2-6 Mbps}, read the manifest for the first profile and request from the Encoder 140A the corresponding segments for viewing at the Client 150B) [0041].
As per claim 15, Choi discloses the method wherein the determining a media stream matching a subset of the terminals in the interaction room comprises: generating a media-stream selection request based on the media-stream processing capability information of the subset of the terminals; and transmitting the media-stream selection request to the server, wherein the media-stream selection request is configured for indicating the server to determine, from the at least one type of media stream, the media stream matching the subset of the terminals (Choi: e.g., For example, referring to FIG. 1, the Client 150A may request content or a video stream from the Encoder 140A. In response to the request for content, the Encoder 140A may provide to the Client 150A the encoded video data 145A that includes a ‘manifest’ identifying the ‘profiles’ available to the client 150A {i.e., a ‘first profile’ having a ‘resolution’ of 1080p (1920×1080) and ‘bitrate’ of about 2-6 Mbps, a ‘second profile’ having a resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps, a ‘third profile’ having a resolution of 576p (1024×576) and bitrate of about 0.6-2 Mbps, a ‘fourth profile’ having a resolution of 480p (848×480) and bitrate of about 0.4-1.5 Mbps, a ‘fifth profile’ having a resolution of 432p (768×432) and bitrate of about 0.3-1.3 Mbps, a ‘sixth profile’ having a resolution of 360p (640×360) and bitrate of about 0.2-1 Mbps, and a ‘seventh profile’ having a resolution of 240p (424×240) and bitrate of about 0.1-0.7 Mbps}. The Client 150A may ‘select’ the ‘second profile’ (e.g., resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps), read the manifest for the second profile and request from the Encoder 140A the corresponding segments for viewing at the Client 150A… Continuing with the example shown in FIG. 1, the Client 150B may request content or a video stream from the encoder 140A. In response to the request for content, the encoder 140A may provide to the client 150B the encoded video data 145A that includes the ‘manifest’ identifying the ‘profiles available’ to the client 150B. The Client 150B may ‘select’ the first profile (e.g., resolution of 1080p (1920×1080) and bitrate of about 2-6 Mbps), read the manifest for the first profile and request from the encoder 140A the corresponding segments for ‘viewing’ at the client 150B {processing / decoding, etc.}) [0039-0042; Fig. 1] (e.g., expressly discloses / illustrates in one aspect wherein a Client 150B, for example, may be additionally connected to the system and the Client 150B ‘requests’ content or a video stream from the encoder 140A. In response to the ‘request’ for content, the Encoder 140A may provide to the Client 150B the encoded video data 145A that includes the manifest identifying the ‘profiles’ available to the Client 150B. The Client 150B may ‘select’ the ‘first profile’ {i.e., ‘resolution’ of 1080p (1920×1080) and ‘bitrate’ of about 2-6 Mbps}, read the manifest for the first profile and request from the Encoder 140A the corresponding segments for viewing at the Client 150B) [0041].
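For claim 15, generating and transmitting a media-stream selection request on behalf of a subset of terminals may be sketched as follows (a hypothetical example by the examiner; no actual request format or server protocol of Choi is implied, and the JSON fields are assumptions).

    # Hypothetical sketch: a terminal-side helper that builds a media-stream
    # selection request from the capability information of a subset of terminals
    # and transmits it to the server for stream selection.
    import json

    def build_selection_request(room_id: str, capabilities: list) -> str:
        # The server is expected to pick, from the available streams, the one whose
        # parameters match the most constrained terminal in the subset.
        max_height = min(c["max_height"] for c in capabilities)
        max_mbps = min(c["available_mbps"] for c in capabilities)
        return json.dumps({"room": room_id, "max_height": max_height, "max_mbps": max_mbps})

    def transmit(request: str, send) -> None:
        send(request)  # e.g., over an existing signaling channel to the server

    transmit(build_selection_request("room-1",
                                     [{"max_height": 720, "available_mbps": 3.0},
                                      {"max_height": 1080, "available_mbps": 6.0}]),
             print)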
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GLENFORD J MADAMBA whose telephone number is (571) 272-7989. The examiner can normally be reached Monday through Friday, 9:00 am to 5:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Parry, can be reached at telephone number 571-272-7989. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications is available in Patent Center; status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/GLENFORD J MADAMBA/Primary Examiner, Art Unit 2451