Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9, 14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al (10,440,367) in view of Singh (2017/0085420).
Consider claims 1, 14 and 18, Choi et al teach a method of encoding an operating room video feed into a video over internet protocol (VOIP) stream, a system, and a non-transitory computer readable memory having stored thereon instructions that, when executed by a system comprising a video encoder, cause the video encoder to perform (col. 3 lines 59-62; “system 100 for providing adaptive video encoding, in accordance with various aspects of the subject technology”) a method comprising: receiving a video feed (col. 4 lines 52-66; col. 5 lines 8-35; “The incoming signals 115 are received at the signal extractor 120… The signal extractor 120 may alternatively route or forward the incoming signal 115 to the encoder control module 130”); splitting the video feed into a first video stream and a second video stream (col. 17 line 59 – col. 18 line 17; “routing a video stream from an encoder control module (e.g., encoder control module 130 from FIG. 1) to a first virtual encoder… routing the video stream from the encoder control module to the second virtual encoder”; thus implying splitting of the video stream between two encoders); and encoding the first video stream into a first VOIP feed and the second video stream into a second VOIP feed (“The first virtual encoder may be instantiated on a cloud platform and configured to provide a first video output at a first bitrate… The second virtual encoder may be configured to provide a second video output at a second bitrate. The second bitrate may be different from the first bitrate”; col. 13 lines 38-54; “the encoder 140A generates higher resolution and higher bitrate encoded video data simultaneously with other active encoders 140B that are providing encoded video data at lower resolutions and bitrates (e.g., video streams having resolution of 480p, 360p, and 240p)”).
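The mapped operation — one incoming feed routed to two encoders that produce outputs at different bitrates — can be sketched as follows. This is an illustrative sketch only; the function names and frame/bitrate representation are hypothetical and are not drawn from Choi et al or the claimed application.

```python
# Hypothetical sketch: split one video feed into two streams and encode
# each at a different target bitrate (cf. the two virtual encoders in Choi et al).

def split_feed(feed):
    """Duplicate the incoming feed so each copy can be routed to its own encoder."""
    return list(feed), list(feed)

def encode(stream, bitrate_mbps):
    """Stand-in encoder: tag each frame with the target bitrate of its profile."""
    return [{"frame": frame, "bitrate_mbps": bitrate_mbps} for frame in stream]

feed = ["frame0", "frame1", "frame2"]
first_stream, second_stream = split_feed(feed)
first_voip_feed = encode(first_stream, bitrate_mbps=6.0)    # higher-bitrate output
second_voip_feed = encode(second_stream, bitrate_mbps=0.2)  # lower-bitrate output
```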
Choi et al disclose various types of networks for the transmission of video streams, which can be configured to support the transmission of data formatted using any number of protocols (col. 3 line 59 – col. 4 line 11). Choi et al do not explicitly suggest utilizing VOIP. Singh teaches the transmission of streaming content, including streaming audio/video content such as a voice-over-Internet-Protocol (VoIP) communication exchange (par. 0002; 0061; “The stream encoding routine 440 may encode the media content for transmission according to one or more audio and/or video standards. The stream encoding routine 440 may encode the media content according to a target bitrate”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teaching of Singh into Choi et al in order to utilize various types of communications technology for transmission of the content.
Consider claim 2, Choi et al teach wherein the second video stream is compressed using one of HEVC, H.264, and AV1 compression (col. 4 lines 59-64; “The video encoder may compress the data into one or more video compression formats, such as H.265 (i.e., HEVC), H.264, MPEG-H, MPEG-4, and/or MPEG-2”).
Consider claim 3, the combination teaches wherein the first video stream is compressed when it is encoded into the first VOIP feed (col. 4 lines 59-64 of Choi et al; “The video encoder may compress the data into one or more video compression formats”).
Consider claim 4, Choi et al teach wherein the first stream is encoded using a visually lossless compression (col. 4 lines 59-64; “The video encoder may compress the data into one or more video compression formats, such as H.265 (i.e., HEVC), H.264, MPEG-H, MPEG-4, and/or MPEG-2”; noting that H.265 supports a lossless coding mode).
Consider claim 5, the combination teaches wherein the first video stream is encoded such that the first VoIP feed has a bit rate of between 5 and 6 gigabits per second and/or wherein the second video stream is encoded such that the second VOIP feed has a bit rate of less than 0.2 gigabits per second (col. 7 lines 1-14 of Choi et al; “Encoded video data 145A may comprise a plurality of profiles that include a first profile having a resolution of 1080p (1920×1080) and bitrate of about 2-6 Mbps, a second profile having a resolution of 720p (1280×720) and bitrate of about 0.9-3 Mbps, a third profile having a resolution of 576p (1024×576) and bitrate of about 0.6-2 Mbps, a fourth profile having a resolution of 480p (848×480) and bitrate of about 0.4-1.5 Mbps, a fifth profile having a resolution of 432p (768×432) and bitrate of about 0.3-1.3 Mbps, a sixth profile having a resolution of 360p (640×360) and bitrate of about 0.2-1 Mbps, and a seventh profile having a resolution of 240p (424×240) and bitrate of about 0.1-0.7 Mbps, using the H.264 standard”).
Consider claim 6, Choi et al teach wherein the method further comprises: determining at least one property of the video feed; wherein the encoding of the first video stream and/or the encoding of the second video stream is based upon the determined at least one property of the video feed (col. 17 lines 11-20; “the determination may be based on viewership (actual or predicted), schedule of the content associated with the video stream, or one or more characteristics associated with content in the video stream”).
Consider claim 7, Choi et al teach wherein the at least one determined property of the video feed comprises a resolution of the video feed, and wherein the encoding of the first video stream and/or the encoding of the second video stream is based upon the resolution of the video feed (col. 17 lines 11-30; “As described above, the determination may be based on viewership (actual or predicted), schedule of the content associated with the video stream, or one or more characteristics associated with content in the video stream. An operation 406 may include instantiating cloud-based encoders to satisfy encoding requirements and/or viewer demand determined at operation 404. As described above, a declining viewer count may lead to overcapacity in encoding capacity, thereby causing a system (e.g., system 100 of FIG. 1) to instantiate encoders with reduced-resolution video streams to realize computational processing and cost savings based on the reduced computing consumption required by the newly instantiated encoders. Alternatively, an increasing viewer count may lead to under capacity in encoding capacity, thereby causing a system (e.g., system 100 of FIG. 1) to instantiate encoders with higher-resolution video streams to satisfy viewer demand”).
Consider claim 8, Choi et al teach wherein the encoding of the first video stream and/or the encoding of the second video stream comprises scaling the first video stream and/or scaling the second video stream based upon the resolution of the video feed (col. 15 lines 12-23; “The cloud-based encoders 140A-N convert or transcode the received video data 135A-N to a required format using a cloud-based facility. …. In another aspect, by using cloud-based encoders 140A-N to transcode the video data 135A-N, an encoding capacity of the system 100 may be dynamically scaled based on demand, content schedule, or one or more content characteristics”).
Consider claim 9, Choi et al teach wherein the method further comprises receiving a resolution of a display, and wherein scaling the first video stream and/or scaling the second video stream is further based upon the resolution of the display (col. 15 lines 12-23; col. 9 lines 20-43; “In one aspect, the profile (e.g., resolution and/or bitrate) may be selected solely by the client 150A-N. In another aspect, if system 100 conditions change, such as the client's 150A-N capabilities, the load on the system 100, and/or the available network bandwidth, the client 150A-N may switch to a higher or lower profile as required…the viewer analytics 155 provided by each client 150A-N may include at least one of a session start time, session end time, client type, operating system, a network connection type, geographic location, available bandwidth, network protocol, screen size, device type, display capabilities, and/or codec capabilities”).
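The cited profile-switching behavior — choosing an encoding profile from the client's display resolution and available bandwidth — can be illustrated with a short sketch. The profile table is adapted from the Choi et al citation at col. 7 lines 1-14; the selection logic itself is hypothetical, not quoted from the reference.

```python
# Hypothetical sketch: pick the highest encoding profile that both the
# client's display and the available network bandwidth can support.

PROFILES = [  # (vertical resolution, approx. max bitrate in Mbps), highest first
    (1080, 6.0), (720, 3.0), (576, 2.0), (480, 1.5),
    (432, 1.3), (360, 1.0), (240, 0.7),
]

def select_profile(display_height, bandwidth_mbps):
    """Return the first (highest) profile the display and network both allow."""
    for height, bitrate in PROFILES:
        if height <= display_height and bitrate <= bandwidth_mbps:
            return height, bitrate
    return PROFILES[-1]  # fall back to the lowest profile

# A 720p display on a 2.5 Mbps link falls back past the 3.0 Mbps 720p
# profile to the 576p profile.
```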
Consider claims 16 and 19, the combination teaches wherein the video encoder is further configured to perform a method comprising: dividing the second VOIP feed into a plurality of fragments; and sending the plurality of fragments in an asynchronous manner over a network (col. 7 line 60 – col. 8 line 6 of Choi et al; “In some aspects, the cloud-based encoders 140A-N may each be configured to fragment, segment, or divide each respective profile into individual files (e.g., segments). Each segment may be configured to be individually decodable without requiring data from a previous or subsequent segment to start decoding a particular segment. For example, for encoded video data 145A comprising an H.264 video stream generated by encoder 140A, the segments for each profile of the plurality of profiles (e.g., first, second, third, fourth, fifth, sixth, and seventh profiles generated by encoder 140A) may comprise a few seconds of the content. In this example, such segments may each have a duration of about 2-30 seconds, and may typically have a duration of about 4-10 seconds”).
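The segmentation the citation describes — dividing the encoded feed into individually decodable fragments of a few seconds each that can be sent independently — reduces to simple chunking, sketched below. The helper name and data representation are hypothetical, not from Choi et al.

```python
# Hypothetical sketch: divide an encoded feed into fragments, each
# independently sendable and decodable (cf. the 2-30 second segments
# described in Choi et al).

def fragment(encoded_feed, segment_len):
    """Split a sequence of encoded samples into fixed-size fragments."""
    return [encoded_feed[i:i + segment_len]
            for i in range(0, len(encoded_feed), segment_len)]

fragments = fragment(list(range(10)), segment_len=4)
# Each fragment can then be transmitted asynchronously over the network.
```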
Consider claims 17 and 20, as suggested above, Choi et al teach further comprising a server, the system further configured to perform the steps of: receiving, at a server, the plurality of fragments; and storing the plurality of fragments on the server (col. 9 lines 44-55; “The viewer data module 160 may comprise servers, routers, switches, or other network devices for collecting, compiling, and sending data relating to the clients 150A-N to the encoder control module 130. In one aspect, the viewer analytics 155 collected at the viewer data module 160 may be processed and/or reformatted to generate viewership demographics 165”).
Allowable Subject Matter
Claims 10-13 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any response to this action should be mailed to:
Mail Stop ____ (explanation, e.g., Amendment or After-Final, etc.)
Commissioner for Patents
P.O. Box 1450
Alexandria, VA 22313-1450
Facsimile responses should be faxed to:
(571) 273-8300
Hand-delivered responses should be brought to:
Customer Service Window
Randolph Building
401 Dulany Street
Alexandria, VA 22314
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC DUC TRAN whose telephone number is (571) 272-7511. The examiner can normally be reached Monday-Friday 8:30am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Quoc D Tran/
Primary Examiner, Art Unit 2691
March 19, 2026