Prosecution Insights
Last updated: April 19, 2026
Application No. 18/395,726

PICTURE ENCODING/DECODING METHOD AND RELATED APPARATUS

Final Rejection (§103, §112, Double Patenting)

Filed: Dec 25, 2023
Examiner: WONG, ALLEN C
Art Unit: 2488
Tech Center: 2400 (Computer Networks)
Assignee: Huawei Technologies Co., Ltd.
OA Round: 4 (Final)

Grant Probability: 83% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 2y 11m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 83% (above average; 669 granted / 805 resolved; +25.1% vs TC avg)
Interview Lift: +11.8% (moderate), across resolved cases with interview
Typical Timeline: 2y 11m average prosecution
Career History: 832 total applications across all art units; 27 currently pending

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 805 resolved cases.

Office Action

Grounds: §103, §112, nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 11/14/25 have been fully read and considered, but they are not persuasive.

Based on the amendment to the claims filed 11/14/25, claims 11-13 are now rejected under 35 U.S.C. 112(b), since claims 11-12 depend on canceled claim 10. It appears that Applicant inadvertently forgot to amend claims 11 and 12 to depend on pending claim 9 after canceling claim 10. Appropriate correction is required. As to how claims 11-13 will be treated with respect to the prior art rejections, claims 11-12 are rejected as being dependent on pending claim 9. However, Applicant still needs to amend claims 11-12 to depend on claim 9 in the next communication with the Office.

With regards to lines 11-12 on page 7 of Applicant's remarks, Applicant asserts that Tao does not disclose selecting, from a knowledge base and based on a reference picture index of the current picture, K reference pictures of the current picture. The Examiner respectfully disagrees. In paragraph [97], Tao discloses that the encoded video bitstream comprises a group of pictures that includes a current picture. In paragraph [119], Tao's figure 3 discloses that in video decoder 30B, one or more reference pictures can potentially be selected from a group of reference pictures in the reference picture list for better coding efficiency, and the selected pictures can originate from element 64 of encoder 20A in figure 2 and from element 92 of decoder 30B in figure 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e., the knowledge base), that at least one picture does not belong to a random access segment, and that there will be no unexpected reference pictures.
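Read mechanically, the disputed limitation describes an index-driven lookup into a store of previously decoded pictures, at least one of which lies outside the current random access segment. A minimal sketch of that mechanic follows; the names, data structures, and values are purely illustrative assumptions, not drawn from Tao, Aggarwal, or the claims:

```python
# Hypothetical sketch of the claimed selection step; all names and data
# are illustrative, not taken from the cited references or the claims.
from dataclasses import dataclass


@dataclass
class Picture:
    poc: int         # picture order count (display-order key)
    segment_id: int  # random access segment the picture belongs to


def select_reference_pictures(knowledge_base, ref_indices, k):
    """Select K reference pictures from the knowledge base using the
    reference picture indices decoded from the bitstream."""
    return [knowledge_base[i] for i in ref_indices[:k]]


# A knowledge base holding pictures from several random access segments.
kb = [Picture(poc=0, segment_id=0), Picture(poc=8, segment_id=0),
      Picture(poc=16, segment_id=1), Picture(poc=24, segment_id=1)]

# Hypothetical indices signaled for a current picture in segment 1.
refs = select_reference_pictures(kb, ref_indices=[0, 2], k=2)

# At least one selected reference (poc=0) lies outside segment 1.
assert any(p.segment_id != 1 for p in refs)
```

The sketch only illustrates the claim language as the Examiner maps it; the references themselves describe this mechanism in terms of reference picture lists and POC values rather than an explicit "knowledge base" structure.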
Moreover, in paragraph [145], Tao discloses that a unique identifier (i.e., an index) for identifying the reference picture(s) is utilized for the decompression of the current picture, by notifying video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A. Tao further discloses that a picture order count (POC) value is utilized for identifying the order in which the current picture is displayed, wherein a picture with a smaller POC value is displayed earlier than a picture with a higher POC value. Thus, Tao discloses that the picture order count (POC) functions as a reference picture index of the current picture, and therefore Tao discloses selecting, from a knowledge base and based on a reference picture index of the current picture, K reference pictures of the current picture.

The examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so, found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal and Tao together as a whole for efficiently transmitting video data in real time for streaming live events and video broadcasts (Tao's paragraph [20]).

With regards to lines 13-14 on page 7 of Applicant's remarks, Applicant asserts that Tao does not disclose the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs. The Examiner respectfully disagrees.
In paragraph [145], Tao discloses that a unique identifier (i.e., an index) for identifying the reference picture(s) is utilized for the decompression of the current picture, by notifying video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A. Tao further discloses that a picture order count (POC) value is utilized for identifying the order in which the current picture is displayed, wherein a picture with a smaller POC value is displayed earlier than a picture with a higher POC value. Thus, Tao discloses that the picture order count (POC) functions as a reference picture index of the current picture, and therefore Tao discloses the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs.

The examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so, found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal and Tao together as a whole for efficiently transmitting video data in real time for streaming live events and video broadcasts (Tao's paragraph [20]).

Dependent claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508) in view of Tao (US 2015/0156487). Peruse the rejection below. Thus, claims 1, 9, 12 and 14 are rejected under 35 U.S.C.
103 as being unpatentable over Aggarwal (US 2009/0257508) in view of Tao (US 2015/0156487). Dependent claims 3-4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508) and Tao (US 2015/0156487) in view of Hannuksela (US 2015/0103921). Peruse the rejection below. Dependent claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508), Tao (US 2015/0156487) and Hannuksela (US 2015/0103921) in view of Soroushian (US 2014/0003799). Peruse the rejection below. Dependent claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508) and Tao (US 2015/0156487) in view of Soroushian (US 2014/0003799). Peruse the rejection below.

With regards to lines 18-20 on page 8 of Applicant's remarks, Applicant asserts that Aggarwal and Tao do not disclose selecting, from a knowledge base and based on a reference picture index of the current picture, K reference pictures of the current picture. The Examiner respectfully disagrees. In paragraph [97], Tao discloses that the encoded video bitstream comprises a group of pictures that includes a current picture. In paragraph [119], Tao's figure 3 discloses that in video decoder 30B, one or more reference pictures can potentially be selected from a group of reference pictures in the reference picture list for better coding efficiency, and the selected pictures can originate from element 64 of encoder 20A in figure 2 and from element 92 of decoder 30B in figure 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e., the knowledge base), that at least one picture does not belong to a random access segment, and that there will be no unexpected reference pictures. Moreover, in paragraph [145], Tao discloses that a unique identifier (i.e.,
index) for identifying the reference picture(s) is utilized for the decompression of the current picture, by notifying video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A. Tao further discloses that a picture order count (POC) value is utilized for identifying the order in which the current picture is displayed, wherein a picture with a smaller POC value is displayed earlier than a picture with a higher POC value. Thus, the combination of Aggarwal and Tao discloses that the picture order count (POC) functions as a reference picture index of the current picture, and therefore Tao discloses selecting, from a knowledge base and based on a reference picture index of the current picture, K reference pictures of the current picture.

The examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so, found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal and Tao together as a whole for efficiently transmitting video data in real time for streaming live events and video broadcasts (Tao's paragraph [20]).

With regards to lines 21-22 on page 8 of Applicant's remarks, Applicant asserts that Aggarwal and Tao do not disclose the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs. The Examiner respectfully disagrees.
In paragraph [145], Tao discloses that a unique identifier (i.e., an index) for identifying the reference picture(s) is utilized for the decompression of the current picture, by notifying video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A. Tao further discloses that a picture order count (POC) value is utilized for identifying the order in which the current picture is displayed, wherein a picture with a smaller POC value is displayed earlier than a picture with a higher POC value. Thus, Tao discloses that the picture order count (POC) functions as a reference picture index of the current picture, and therefore the combination of Aggarwal and Tao discloses the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs.

The examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so, found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal and Tao together as a whole for efficiently transmitting video data in real time for streaming live events and video broadcasts (Tao's paragraph [20]).
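The display-order rule relied on throughout the responses above (a picture with a smaller POC value is displayed earlier than one with a larger POC value) amounts to using POC as a sort key. A minimal illustration, with hypothetical picture names and POC values that are not taken from any cited reference:

```python
# Illustrative only: POC as a display-order key (hypothetical data).
pictures = {"A": 2, "B": 0, "C": 1}  # picture name -> picture order count

# A picture with a smaller POC value is displayed before one with a
# larger POC value, so sorting by POC yields the display order.
display_order = sorted(pictures, key=pictures.get)
```

Here `display_order` comes out as ["B", "C", "A"], matching the rule that the smallest POC is displayed first.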
With regards to lines 1-5 on page 9 of Applicant's remarks about claim 6, Applicant asserts that Campbell does not disclose "selecting, from a knowledge base and based on a reference picture index of the current picture, K reference pictures of the current picture" and "the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs". Campbell, however, is not relied on to meet these limitations: Tao already discloses both, for the reasons previously stated above for claims 1, 9 and 14 and in the rejection below. Thus, Tao discloses both disputed limitations of claim 6. Peruse the above paragraphs and the rejection below for elaboration.

Dependent claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508), Tao (US 2015/0156487) and Campbell (US 2015/0092076) in view of Yu (US 2014/0086557). Peruse the rejection below.

With regards to the double patenting rejections, Applicant has stated that the terminal disclaimer requirement be held in abeyance. Claims 1, 3-5, 9 and 11-13 are still rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5 and 9-13 of U.S. Patent No.
11,889,058 in view of Aggarwal (US 2009/0257508). Claims 1 and 3-5 are still rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5 of U.S. Patent No. 11,303,888 in view of Aggarwal (US 2009/0257508). Claims 6-8 are still rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6-8 of U.S. Patent No. 11,303,888 in view of Aggarwal (US 2009/0257508) and further in view of Tao (US 2015/0156487). Claims 1 and 3-5 are still rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 and 5-6 of U.S. Patent No. 10,917,638 in view of Aggarwal (US 2009/0257508). Thus, the rejection is maintained.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 11-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 11-12 depend on canceled claim 10; thus, the scope of claims 11-13 is unknown. However, it appears that Applicant inadvertently forgot to amend claims 11 and 12 to depend on pending claim 9 after canceling claim 10. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, 12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508) in view of Tao (US 2015/0156487).

Regarding claim 1, Aggarwal discloses a picture decoding method for implementing a random access capability in a video (paragraph [29], Aggarwal discloses decoding pictures so as to be outputted to a video display 208; paragraph [35], Aggarwal discloses that video data can be accessed for decoding, jumping to frame segments (i.e., random access segments) as delineated by random access points in order to display the selected frame segment for viewing; and paragraph [36], Aggarwal discloses performing video playback and trick mode operations for displaying video images by utilizing random access capability), comprising: determining a random access segment in the video (paragraph [39], Aggarwal discloses an encoded bitstream 302 comprising a frame stream (i.e.,
sequence of frames) with random access points 308a-308d, wherein fig. 3A illustrates that random access segments 306a, 306b and 306c are formed between sequential random access points, in that random access segment 306a (Frame Segment 1) comprises the frames between random access points 308a-308b, random access segment 306b (Frame Segment 2) comprises the frames between random access points 308b-308c, and random access segment 306c (Frame Segment 3) comprises the frames between random access points 308c-308d, etc.); obtaining a current picture from the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized), the current picture belonging to the random access segment of the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP at step 414, and paragraph [62], Aggarwal discloses that in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized); selecting K reference pictures of the current picture (paragraph [60], Aggarwal discloses that reference frames along with current frames are selected to be decoded and queued in preparation for video display output, wherein paragraph [61], Aggarwal discloses that the decoded and queued frames are utilized for video display output according to the determined order, in that a current picture of video can be obtained from the present RAP at step 414, and paragraph [62], Aggarwal discloses that in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized); decoding the current picture according to the K reference pictures (paragraph [61], Aggarwal discloses that the decoded and
queued frames are utilized for video display output according to the determined order, in that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized).

Aggarwal does not disclose selecting, from a knowledge base and based on a reference picture index of the current picture, the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs, K reference pictures of the current picture, K being an integer greater than or equal to 1, wherein at least one reference picture in the knowledge base does not belong to the random access segment in which the current picture is located.

However, Tao teaches selecting, from a knowledge base (paragraph [97], Tao discloses that the encoded video bitstream comprises a group of pictures that includes a current picture, wherein paragraph [119], Tao discloses that in fig. 3, video decoder 30B, one or more reference pictures can potentially be selected from a group of reference pictures in the reference picture list for providing better coding efficiency, and the selected pictures can be from element 64 of encoder 20A in fig. 2 and also element 92 of decoder 30B of fig. 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e., knowledge base), wherein at least one picture does not belong to a random access segment and there will be no unexpected reference pictures) and based on a reference picture index of the current picture (paragraph [145], Tao discloses that a unique identifier (i.e.,
index) for identifying the reference picture(s) is utilized for the decoding of the current picture by informing video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value is utilized for identifying the order in which the current picture is displayed, in that a picture with a smaller POC value is displayed earlier than a picture with a higher POC value; thus, the POC functions as a reference picture index of the current picture), the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs (paragraph [145], Tao discloses that a unique identifier (i.e., index) for identifying the reference picture(s) is utilized for the decoding of the current picture by informing video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value is utilized for identifying the order in which the current picture is displayed during the decoding process of the first bitstream, in that a picture with a smaller POC value is displayed earlier than a picture with a higher POC value; thus, the POC functions as a reference picture index of the current picture), K reference pictures of the current picture (paragraph [97], Tao discloses that the encoded video bitstream comprises a group of pictures that includes a current picture, wherein paragraph [119], Tao discloses that in fig. 3, video decoder 30B, one or more reference pictures can potentially be selected from a group of reference pictures in the reference picture list for providing better coding efficiency, and the selected pictures can be from element 64 of encoder 20A in fig. 2 and also element 92 of decoder 30B of fig. 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e.,
knowledge base), wherein at least one picture does not belong to a random access segment and there will be no unexpected reference pictures), K being an integer greater than or equal to 1 (paragraph [119], Tao discloses that in fig. 3, video decoder 30B, one or more reference pictures can potentially be selected from a group of reference pictures in the reference picture list for providing better coding efficiency, and the selected pictures can be from element 64 of encoder 20A in fig. 2 and also element 92 of decoder 30B of fig. 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e., knowledge base), wherein at least one picture does not belong to a random access segment and there will be no unexpected reference pictures; clearly, there are one or more reference pictures that can be selected), wherein at least one reference picture in the knowledge base does not belong to the random access segment in which the current picture is located (paragraph [97], Tao discloses that the encoded video bitstream comprises a group of pictures that includes a current picture, wherein paragraph [119], Tao discloses that in fig. 3, video decoder 30B, one or more reference pictures can potentially be selected from a group of reference pictures in the reference picture list for providing better coding efficiency, and the selected pictures can be from element 64 of encoder 20A in fig. 2 and also element 92 of decoder 30B of fig. 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e.,
knowledge base), wherein at least one picture does not belong to a random access segment and there will be no unexpected reference pictures; and paragraph [124], Tao discloses that in the HEVC video coding standard, reference picture lists are constructed to form a knowledge base that incorporates the reference pictures from video encoder 20A of fig. 2 and from video decoder 30B of fig. 3, and based on the reference pictures accumulated from video encoder 20A and video decoder 30B, the inter-prediction decoding process takes place for decoding the current picture based on the selected reference picture or pictures as ascertained from the reference picture lists accumulated from video encoder 20A and video decoder 30B; further, paragraph [125], Tao discloses that multiple reference picture subsets and sets are utilized for High Efficiency Video Coding (HEVC), and paragraph [126], Tao discloses reference picture subsets and sets that include RefPicSetStCurrBefore, RefPicSetStCurrAfter, RefPicSetStFoll, RefPicSetLtCurr, and RefPicSetLtFoll, wherein RefPicSetStCurrBefore, RefPicSetStCurrAfter and RefPicSetLtCurr include all the pictures that can be used for decoding the particular picture; RefPicSetStCurrBefore can include any reference pictures determined to be short-term reference pictures that are displayed before the particular picture, and RefPicSetStCurrAfter may include any reference pictures determined to be short-term reference pictures that are displayed after the particular picture, wherein RefPicSetLtCurr can include any long-term reference pictures, and RefPicSetStFoll and RefPicSetLtFoll can include any reference pictures that are not used for encoding or decoding the particular picture but may be used for the pictures that follow the particular picture in decoding order; also, Tao discloses that RefPicSetStFoll can include any reference pictures determined to be short-term reference pictures, and RefPicSetLtFoll can include any
reference pictures determined to be long-term reference pictures, and that in some instances, the pictures in the sets can be exclusive (e.g., a picture in one of the sets may not be in any other set)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal and Tao together as a whole for efficiently transmitting video data in real time for streaming live events and video broadcasts (Tao's paragraph [20]).

Regarding claim 9, Aggarwal discloses a picture decoding apparatus for implementing a random access capability in a video (paragraph [29], Aggarwal discloses decoding pictures so as to be outputted to a video display 208; paragraph [35], Aggarwal discloses that video data can be accessed for decoding, jumping to frame segments (i.e., random access segments) as delineated by random access points in order to display the selected frame segment for viewing; and paragraph [36], Aggarwal discloses performing video playback and trick mode operations for displaying video images by utilizing random access capability), comprising: a memory storing instructions (paragraph [66], Aggarwal discloses a computer-readable storage medium that stores a computer program to be executed by a computer); and at least one processor in communication with the memory (paragraph [66], Aggarwal discloses a computer-readable storage medium that stores a computer program to be executed by a computer; paragraph [29], Aggarwal discloses that processing system 202 comprises main processor 216, video processing unit 210 and display queue trick processing unit 212, which are in communication with memory 214), the at least one processor configured (paragraph [66], Aggarwal discloses a computer-readable storage medium that stores a computer program to be executed by a computer; paragraph [29], Aggarwal discloses that processing system 202 comprises main
processor 216, video processing unit 210 and display queue trick processing unit 212), upon execution of the instructions, to perform the following steps: determine a random access segment in the video (paragraph [39], Aggarwal discloses an encoded bitstream 302 comprising a frame stream (i.e., sequence of frames) with random access points 308a-308d, wherein fig. 3A illustrates that random access segments 306a, 306b and 306c are formed between sequential random access points, in that random access segment 306a (Frame Segment 1) comprises the frames between random access points 308a-308b, random access segment 306b (Frame Segment 2) comprises the frames between random access points 308b-308c, and random access segment 306c (Frame Segment 3) comprises the frames between random access points 308c-308d, etc.); obtain a current picture from the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized), the current picture belonging to the random access segment of the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP at step 414, and paragraph [62], Aggarwal discloses that in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized); select K reference pictures of the current picture (paragraph [60], Aggarwal discloses that reference frames along with current frames are selected to be decoded and queued in preparation for video display output, wherein paragraph [61], Aggarwal discloses that the decoded and queued frames are utilized for video display output according to the determined order, in that a current picture of video can be obtained from the present RAP (random access point) at step
414, and paragraph [62], Aggarwal discloses that in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized); decode the current picture according to the K reference pictures (paragraph [61], Aggarwal discloses that the decoded and queued frames are utilized for video display output according to the determined order, in that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized).

Aggarwal does not disclose select, from a knowledge base and based on a reference picture index of the current picture, the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs, K reference pictures of the current picture, K being an integer greater than or equal to 1, wherein at least one reference picture in the knowledge base does not belong to the random access segment in which the current picture is located.

However, Tao teaches select, from a knowledge base (paragraph [97], Tao discloses that the encoded video bitstream comprises a group of pictures that includes a current picture, wherein paragraph [119], Tao discloses that in fig. 3, video decoder 30B, one or more reference pictures can potentially be selected from a group of reference pictures in the reference picture list for providing better coding efficiency, and the selected pictures can be from element 64 of encoder 20A in fig. 2 and also element 92 of decoder 30B of fig. 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e.,
knowledge base) wherein at least one picture does not belong to a random access segment, and there will be no unexpected reference pictures) and based on a reference picture index of the current picture (paragraph [145], Tao discloses that a unique identifier (ie. index) for identifying the reference picture(s) is utilized for the decoding of the current picture by informing the video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value is utilized for identifying the order the current picture is displayed, in that a picture with a smaller valued POC is displayed earlier than a picture with a higher valued POC, thus, the POC functions as a reference picture index of the current picture), the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs (paragraph [145], Tao discloses that a unique identifier (ie. index) for identifying the reference picture(s) is utilized for the decoding of the current picture by informing the video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value is utilized for identifying the order the current picture is displayed during the decoding process of the first bitstream, in that a picture with a smaller valued POC is displayed earlier than a picture with a higher valued POC, thus, the POC functions as a reference picture index of the current picture), K reference pictures of the current picture (paragraph [97], Tao discloses that encoded video bitstream comprises a group of pictures that includes a current picture, wherein paragraph [119], Tao discloses that in fig.3, video decoder 30B, there can be one or more reference picture to be potentially selected from a group of reference pictures in the reference picture list for providing better coding efficiency, 
and the pictures selected can be from element 64 of encoder 20A in fig.2 and also element 92 of decoder 30B of fig.3, thus ensuring that there is at least one a reference picture is selected from the reference picture list (ie. knowledge base) wherein at least one picture does not belong to a random access segment, and there will be no unexpected reference pictures), K being an integer greater than or equal to 1 (paragraph [119], Tao discloses that in fig.3, video decoder 30B, there can be one or more reference picture to be potentially selected from a group of reference pictures in the reference picture list for providing better coding efficiency, and the pictures selected can be from element 64 of encoder 20A in fig.2 and also element 92 of decoder 30B of fig.3, thus ensuring that there is at least one a reference picture is selected from the reference picture list (ie. knowledge base) wherein at least one picture does not belong to a random access segment, and there will be no unexpected reference pictures, clearly there are one or more reference pictures that can be selected), wherein at least one reference picture in the knowledge base does not belong to the random access segment in which the current picture is located (paragraph [97], Tao discloses that encoded video bitstream comprises a group of pictures that includes a current picture, wherein paragraph [119], Tao discloses that in fig.3, video decoder 30B, there can be one or more reference picture to be potentially selected from a group of reference pictures in the reference picture list for providing better coding efficiency, and the pictures selected can be from element 64 of encoder 20A in fig.2 and also element 92 of decoder 30B of fig.3, thus ensuring that there is at least one a reference picture is selected from the reference picture list (ie. 
knowledge base) wherein at least one picture does not belong to a random access segment, and there will be no unexpected reference pictures; and paragraph [124], Tao discloses that in HEVC video encoding standard, that reference picture lists are constructed to form knowledge base that incorporates the reference pictures from video encoder 20A of fig.2 and from video decoder 30B of fig.3, and based on the reference pictures accumulated from video encoder 20A and video decoder 30B, inter prediction decoding process takes place for decoding the current picture based on the selected reference picture or pictures as ascertained from reference picture lists as accumulated from video encoder 20A and video decoder 30B, and further, paragraph [125], Tao discloses implementation of multiple reference picture subsets and sets are utilized for High Efficiency Video Coding (HEVC), and paragraph [126], Tao discloses reference picture subsets and sets that include RefPicSetStCurrBefore, RefPicSetStCurrAfter, RefPicSetStFoll, RefPicSetLtCurr, and RefPicSetLtFoll, wherein RefPicSetStCurrBefore, RefPicSetStCurrAfter and RefPicSetLtCurr include all the pictures that can be used for decoding the particular picture, and that RefPicSetStCurrBefore can include any reference pictures determined to be short-term reference pictures that are displayed before the particular picture, and RefPicSetStCurrAfter may include any reference pictures determined to be short-term reference pictures that are displayed after the particular picture, wherein RefPicSetLtCurr can include any long-term reference pictures, and RefPicSetStFoll and RefPicSetLtFoll can include any reference pictures that are not used for encoding or decoding the particular picture, but may be used for the pictures that follow the particular picture in decoding order, and also, Tao discloses RefPicSetStFoll can include any reference pictures determined to be short-term reference pictures, and RefPicSetLtFoll can include any 
reference pictures determined to be long-term reference pictures, and that in some instances, the pictures in the sets can be exclusive (e.g., a picture in one of the sets may not be in any other set)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal and Tao together as a whole for efficiently transmitting video data in real-time for streaming live events and video broadcasts (Tao’s paragraph [20]). Regarding claim 12, Aggarwal does not disclose wherein the reference picture index of the current picture indicates at least one of: a number of a reference picture, a picture feature of a reference picture, or a picture feature of the current picture. However, Tao discloses wherein the reference picture index of the current picture indicates at least one of: a number of a reference picture, a picture feature of a reference picture, or a picture feature of the current picture (paragraph [145], Tao discloses that a unique identifier for identifying the reference picture(s) is utilized for the decoding of the current picture by informing the video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value is utilized for identifying the order the current picture is displayed during the decoding process of the first bitstream, in that a picture with a smaller valued POC is displayed earlier than a picture with a higher valued POC, thus, the POC functions as a reference picture index of the current picture). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal and Tao together as a whole for efficiently transmitting video data in real-time for streaming live events and video broadcasts (Tao’s paragraph [20]). 
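The POC-based mapping discussed above can be illustrated with a minimal sketch, assuming a simplified model in which the "knowledge base" is a list of decoded pictures keyed by POC and the K references nearest the current picture's POC are chosen (all names and the nearest-POC heuristic are hypothetical illustrations, not Tao's or the claims' actual algorithm):

```python
from dataclasses import dataclass, field

@dataclass
class Picture:
    poc: int                      # picture order count (display order)
    data: bytes = field(default=b"", repr=False)

def select_reference_pictures(knowledge_base, current_poc, k):
    """Pick the K reference pictures whose POC is closest to the current
    picture's POC; a smaller POC means the picture is displayed earlier."""
    if k < 1:
        raise ValueError("K must be an integer greater than or equal to 1")
    ranked = sorted(knowledge_base, key=lambda p: abs(p.poc - current_poc))
    return ranked[:k]

# Hypothetical knowledge base of previously decoded reference pictures.
knowledge_base = [Picture(poc) for poc in (0, 4, 8, 16, 32)]
refs = select_reference_pictures(knowledge_base, current_poc=9, k=2)
assert [p.poc for p in refs] == [8, 4]  # POC 8 is nearest, then POC 4
```

The sketch only shows how a POC can act as a selection index; a real HEVC decoder derives its reference lists from syntax elements signaled in the bitstream.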
Regarding claim 14, Aggarwal discloses a non-transitory computer-readable storage media storing a video (paragraph [43], Aggarwal discloses memory 214 for storing video data) and storing computer instructions that configure at least one processor (paragraph [66], Aggarwal discloses a computer-readable storage medium that stores a computer program to be executed by a computer; paragraph [29], Aggarwal discloses processing system 202 comprises main processor 216, video processing unit 210 and display queue trick processing unit 212), upon execution of the instructions, to perform the following steps: determine a random access segment in a video (paragraph [39], Aggarwal discloses that an encoded bitstream 302 comprises a frame stream (i.e., a sequence of frames) with random access points 308a-308d, wherein fig. 3A illustrates that random access segments 306a, 306b and 306c are formed between sequential random access points: random access segment 306a (Frame Segment 1) comprises the frames between random access points 308a-308b, random access segment 306b (Frame Segment 2) comprises the frames between random access points 308b-308c, and random access segment 306c (Frame Segment 3) comprises the frames between random access points 308c-308d, etc.); obtain a current picture from the video (paragraph [61], Aggarwal discloses that a current picture of the video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that when the present RAP is not skipped, the frames within the present frame segment can be utilized), the current picture belonging to the random access segment of the video (paragraph [61], Aggarwal discloses that a current picture of the video can be obtained from the present RAP at step 414, and paragraph [62], Aggarwal discloses that when the present RAP is not skipped, the frames within the present frame segment can be utilized); select K reference pictures of the current picture (paragraph [60], Aggarwal discloses that reference frames along with current frames are selected to be decoded and queued in preparation for video display output, wherein paragraph [61], Aggarwal discloses that the decoded and queued frames are utilized for video display output according to the determined order, in that a current picture of the video can be obtained from the present RAP at step 414, and paragraph [62], Aggarwal discloses that when the present RAP is not skipped, the frames within the present frame segment can be utilized); decode the current picture according to the K reference pictures (paragraph [61], Aggarwal discloses that the decoded and queued frames are utilized for video display output according to the determined order, in that a current picture of the video can be obtained from the present RAP at step 414, and paragraph [62], Aggarwal discloses that when the present RAP is not skipped, the frames within the present frame segment can be utilized). Aggarwal does not disclose select, from a knowledge base and based on a reference picture index of the current picture, the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs, K reference pictures of the current picture, K being an integer greater than or equal to 1, wherein at least one reference picture in the knowledge base does not belong to the random access segment in which the current picture is located.
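The partitioning of a frame stream into random access segments between sequential random access points, as mapped above to Aggarwal's paragraph [39], can be sketched as follows (frame indices and function names are hypothetical; Aggarwal provides no code):

```python
def random_access_segments(rap_indices):
    """Each segment spans the frames between two consecutive random
    access points (cf. segments 306a-306c between points 308a-308d)."""
    raps = sorted(rap_indices)
    return [range(start, end) for start, end in zip(raps, raps[1:])]

def segment_of(frame, segments):
    """Return the index of the random access segment containing `frame`,
    or None if the frame lies outside every segment."""
    for i, seg in enumerate(segments):
        if frame in seg:
            return i
    return None

# Hypothetical stream with random access points at frames 0, 30, 60, 90.
segs = random_access_segments([0, 30, 60, 90])
assert len(segs) == 3                 # three segments, as in fig. 3A
assert segment_of(45, segs) == 1      # frame 45 lies in the second segment
```

Decoding then starts at the RAP that opens the segment containing the requested frame, which is what gives the bitstream its random access capability.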
However, Tao teaches select, from a knowledge base (paragraph [97], Tao discloses that the encoded video bitstream comprises a group of pictures that includes a current picture, wherein paragraph [119], Tao discloses that, in video decoder 30B of fig. 3, one or more reference pictures can be selected from a group of reference pictures in the reference picture list to provide better coding efficiency, and the selected pictures can originate from element 64 of encoder 20A in fig. 2 and from element 92 of decoder 30B in fig. 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e., the knowledge base), wherein at least one picture does not belong to a random access segment and there are no unexpected reference pictures) and based on a reference picture index of the current picture (paragraph [145], Tao discloses that a unique identifier (i.e., an index) identifying the reference picture(s) is utilized in decoding the current picture, by informing video encoder 20A of the preferred set of reference pictures held in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value identifies the order in which the current picture is displayed, in that a picture with a smaller POC value is displayed earlier than a picture with a larger POC value; thus, the POC functions as a reference picture index of the current picture), the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs (paragraph [145], Tao discloses that a unique identifier (i.e., an index) identifying the reference picture(s) is utilized in decoding the current picture, by informing video encoder 20A of the preferred set of reference pictures held in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value identifies the order in which the current picture is displayed during the decoding process of the first bitstream, in that a picture with a smaller POC value is displayed earlier than a picture with a larger POC value; thus, the POC functions as a reference picture index of the current picture), K reference pictures of the current picture (paragraph [97], Tao discloses that the encoded video bitstream comprises a group of pictures that includes a current picture, wherein paragraph [119], Tao discloses that, in video decoder 30B of fig. 3, one or more reference pictures can be selected from a group of reference pictures in the reference picture list to provide better coding efficiency, and the selected pictures can originate from element 64 of encoder 20A in fig. 2 and from element 92 of decoder 30B in fig. 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e., the knowledge base), wherein at least one picture does not belong to a random access segment and there are no unexpected reference pictures), K being an integer greater than or equal to 1 (paragraph [119], Tao discloses that, in video decoder 30B of fig. 3, one or more reference pictures can be selected from a group of reference pictures in the reference picture list to provide better coding efficiency, and the selected pictures can originate from element 64 of encoder 20A in fig. 2 and from element 92 of decoder 30B in fig. 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e., the knowledge base), wherein at least one picture does not belong to a random access segment and there are no unexpected reference pictures; clearly, one or more reference pictures can be selected), wherein at least one reference picture in the knowledge base does not belong to the random access segment in which the current picture is located (paragraph [97], Tao discloses that the encoded video bitstream comprises a group of pictures that includes a current picture, wherein paragraph [119], Tao discloses that, in video decoder 30B of fig. 3, one or more reference pictures can be selected from a group of reference pictures in the reference picture list to provide better coding efficiency, and the selected pictures can originate from element 64 of encoder 20A in fig. 2 and from element 92 of decoder 30B in fig. 3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e., the knowledge base), wherein at least one picture does not belong to a random access segment and there are no unexpected reference pictures; and paragraph [124], Tao discloses that, in the HEVC video encoding standard, reference picture lists are constructed to form a knowledge base that incorporates the reference pictures from video encoder 20A of fig. 2 and from video decoder 30B of fig. 3, and based on the reference pictures accumulated from video encoder 20A and video decoder 30B, the inter-prediction decoding process decodes the current picture based on the selected reference picture or pictures ascertained from those reference picture lists; further, paragraph [125], Tao discloses that multiple reference picture subsets and sets are utilized for High Efficiency Video Coding (HEVC), and paragraph [126], Tao discloses reference picture subsets and sets that include RefPicSetStCurrBefore, RefPicSetStCurrAfter, RefPicSetStFoll, RefPicSetLtCurr, and RefPicSetLtFoll, wherein RefPicSetStCurrBefore, RefPicSetStCurrAfter and RefPicSetLtCurr include all the pictures that can be used for decoding the particular picture; RefPicSetStCurrBefore can include any reference pictures determined to be short-term reference pictures that are displayed before the particular picture, and RefPicSetStCurrAfter may include any reference pictures determined to be short-term reference pictures that are displayed after the particular picture, wherein RefPicSetLtCurr can include any long-term reference pictures; RefPicSetStFoll and RefPicSetLtFoll can include any reference pictures that are not used for encoding or decoding the particular picture but may be used for the pictures that follow the particular picture in decoding order, in that RefPicSetStFoll can include any reference pictures determined to be short-term reference pictures and RefPicSetLtFoll can include any reference pictures determined to be long-term reference pictures; and, in some instances, the pictures in the sets can be exclusive (e.g., a picture in one of the sets may not be in any other set)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal and Tao together as a whole for efficiently transmitting video data in real time for streaming live events and video broadcasts (Tao's paragraph [20]).

Claims 3-4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508) and Tao (US 2015/0156487) in view of Hannuksela (US 2015/0103921). Regarding claim 3, Aggarwal and Tao do not disclose wherein a reference picture that matches the reference picture index comprises a reconstructed picture located before the random access segment in which the current picture is located.
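The five HEVC reference picture subsets cited above from Tao's paragraph [126] can be modeled with a simplified classification sketch (the tuple shape and flags are hypothetical simplifications; the actual HEVC derivation works from POC deltas and flags signaled in the bitstream):

```python
def classify_reference_pictures(current_poc, refs):
    """Sort reference pictures into the five HEVC reference picture
    subsets. Each ref is a tuple (poc, is_long_term, used_by_current)."""
    subsets = {name: [] for name in (
        "RefPicSetStCurrBefore", "RefPicSetStCurrAfter",
        "RefPicSetStFoll", "RefPicSetLtCurr", "RefPicSetLtFoll")}
    for poc, is_long_term, used_by_current in refs:
        if is_long_term:
            # Long-term pictures: used by the current picture or only by
            # following pictures in decoding order.
            key = "RefPicSetLtCurr" if used_by_current else "RefPicSetLtFoll"
        elif not used_by_current:
            key = "RefPicSetStFoll"       # short-term, for following pictures
        elif poc < current_poc:
            key = "RefPicSetStCurrBefore" # short-term, displayed before
        else:
            key = "RefPicSetStCurrAfter"  # short-term, displayed after
        subsets[key].append(poc)
    return subsets

s = classify_reference_pictures(8, [(4, False, True), (12, False, True),
                                    (0, True, True), (2, False, False)])
assert s["RefPicSetStCurrBefore"] == [4]
assert s["RefPicSetStCurrAfter"] == [12]
assert s["RefPicSetLtCurr"] == [0]
assert s["RefPicSetStFoll"] == [2]
```

The sketch mirrors the property noted above: the subsets are mutually exclusive, since each picture is assigned to exactly one set.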
However, Hannuksela teaches wherein a reference picture that matches the reference picture index comprises a reconstructed picture located before the random access segment in which the current picture is located (paragraph [232], Hannuksela discloses that a reference index is utilized for identifying reference picture(s) relevant to the HEVC compression standard, in that the index matches a certain reference picture used for decoding the current picture before the random access segment occurs, wherein the indication specifically identifies situations signaled prior to reaching a RAP (random access point) picture, I (intra) picture, P (predictive) picture or B (bidirectional) picture in a sequence of pictures). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal, Tao and Hannuksela together as a whole for efficient compression of video data and to produce a high-quality image display with accuracy.

Regarding claim 4, Aggarwal does not disclose wherein the reference picture index of the current picture indicates at least one of: a number of a reference picture, a picture feature of the reference picture, or a picture feature of the current picture. However, Tao teaches wherein the reference picture index of the current picture indicates at least one of: a number of a reference picture, a picture feature of the reference picture, or a picture feature of the current picture (paragraph [145], Tao discloses that a unique identifier identifying the reference picture(s) is utilized in decoding the current picture, by informing video encoder 20A of the preferred set of reference pictures held in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value identifies the order in which the current picture is displayed during the decoding process of the first bitstream, in that a picture with a smaller POC value is displayed earlier than a picture with a larger POC value; thus, the POC functions as a reference picture index of the current picture). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal and Tao together as a whole for efficiently transmitting video data in real time for streaming live events and video broadcasts (Tao's paragraph [20]).

Regarding claim 11, Aggarwal and Tao do not disclose wherein a reference picture that matches the reference picture index comprises a reconstructed picture located before the random access segment in which the current picture is located. However, Hannuksela teaches wherein a reference picture that matches the reference picture index comprises a reconstructed picture located before the random access segment in which the current picture is located (paragraph [232], Hannuksela discloses that a reference index is utilized for identifying reference picture(s) relevant to the HEVC compression standard, in that the index matches a certain reference picture used for decoding the current picture before the random access segment occurs, wherein the indication specifically identifies situations signaled prior to reaching a RAP (random access point) picture, I (intra) picture, P (predictive) picture or B (bidirectional) picture in a sequence of pictures). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal, Tao and Hannuksela together as a whole for efficient compression of video data and to produce a high-quality image display with accuracy.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508), Tao (US 2015/0156487) and Hannuksela (US 2015/0103921) in view of Soroushian (US 2014/0003799). Regarding claim 5, Aggarwal, Tao and Hannuksela do not disclose wherein if the reference picture index indicates the picture feature of the current picture, then picture features of the K reference pictures match the picture feature of the current picture.
However, Soroushian teaches wherein if the reference picture index indicates the picture feature of the current picture (paragraph [14], Soroushian discloses that "geotags" are an index or indices that can be associated with a video sequence of frames and/or with individual reference frames within the video sequence to identify features within a certain picture, which can include latitude and longitude coordinates as well as altitude, bearing, distance, tilt, accuracy data, and place name, and paragraph [77], Soroushian discloses that picture features of reference pictures match the picture feature of the current picture to establish correlation between reference image data and current image data for video compression/decompression, and that a scale-invariant feature transform (SIFT) feature detector is utilized to match feature points between reference images and the current image during image processing, thereby determining similarities between reference image data and current image data), then picture features of the K reference pictures match the picture feature of the current picture (paragraph [77], Soroushian discloses that picture features of reference pictures match the picture feature of the current picture to establish correlation between reference image data and current image data for video compression/decompression, and that a scale-invariant feature transform (SIFT) feature detector is utilized to match feature points between reference images and the current image during image processing, thereby determining similarities between reference image data and current image data). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal, Tao, Hannuksela and Soroushian together as a whole for precise monitoring of common picture features between reference image data and current image data so as to ensure robust video compression.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508) and Tao (US 2015/0156487) in view of Soroushian (US 2014/0003799). Regarding claim 13, Aggarwal and Tao do not disclose wherein if the reference picture index indicates the picture feature of the current picture, then picture features of the K reference pictures match the picture feature of the current picture. However, Soroushian teaches wherein if the reference picture index indicates the picture feature of the current picture (paragraph [14], Soroushian discloses that "geotags" are an index or indices that can be associated with a video sequence of frames and/or with individual reference frames within the video sequence to identify features within a certain picture, which can include latitude and longitude coordinates as well as altitude, bearing, distance, tilt, accuracy data, and place name, and paragraph [77], Soroushian discloses that picture features of reference pictures match the picture feature of the current picture to establish correlation between reference image data and current image data for video compression/decompression, and that a scale-invariant feature transform (SIFT) feature detector is utilized to match feature points between reference images and the current image during image processing, thereby determining similarities between reference image data and current image data), then picture features of the K reference pictures match the picture feature of the current picture (paragraph [77], Soroushian discloses that picture features of reference pictures match the picture feature of the current picture to establish correlation between reference image data and current image data for video compression/decompression, and that a scale-invariant feature transform (SIFT) feature detector is utilized to match feature points between reference images and the current image during image processing, thereby determining similarities between reference image data and current image data). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal, Tao and Soroushian together as a whole for precise monitoring of common picture features between reference image data and current image data so as to ensure robust video compression.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508) and Tao (US 2015/0156487) in view of Campbell (US 2015/0092076). Regarding claim 6, Aggarwal discloses a picture encoding method for implementing a random access capability in a video (paragraph [20], Aggarwal discloses encoding video streams with MPEG1/2, AVC (H.263) and/or VC1; paragraph [23], Aggarwal discloses that terrestrial TV head end 104 can broadcast or send out digitally encoded terrestrial TV signals, thus disclosing that video images are encoded at head end 104 and transmitted to users via reception over TV antenna 108; wherein paragraph [35], Aggarwal discloses that video data can be accessed for decoding with jumps to frame segments (i.e., random access segments) as delineated by random access points, in order to display the selected frame segment for viewing, and paragraph [36], Aggarwal discloses performing video playback and trick-mode operations for displaying video images by utilizing the random access capability), comprising: determining a random access segment in the video (paragraph [39], Aggarwal discloses that an encoded bitstream 302 comprises a frame stream (i.e., a 
sequence of frames) with random access points 308a-308d, wherein fig.3A illustrates that between sequential random access points, the random access segments are formed 306a, 306b and 306c, in that random access segment 306a (Frame Segment 1) is comprised of frames between random access points 308a-308b, random access segment 306b (Frame Segment 2) is comprised of frames between random access points 308b-308c, and random access segment 306c (Frame Segment 3) is comprised of frames between random access points 308c-308d, etc.); obtaining a current picture from the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that in cases where the present RAP is not skipped, then the frames within the present frame segment can be utilized), the current picture belonging to the random access segment of the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that in cases where the present RAP is not skipped, then the frames within the present frame segment can be utilized); encoding the current picture with reference pictures (paragraph [20], Aggarwal discloses encoding video streams with MPEG1/2, AVC (H.263)and /or VC1, in that frames in the MPEG video encoding standard comprises of I (intra frame), P (predictive) and B (bidirectional) frames; paragraph [23], Aggarwal discloses that terrestrial TV head end 104 can broadcast or send out digitally encoded terrestrial TV signals, thus Aggarwal discloses video images are encoded at the head end 104 and transmitted to users via reception over TV antenna 108). 
Aggarwal does not disclose selecting, from a knowledge base and based on a reference picture index of the current picture, the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs, K reference pictures of the current picture, K being an integer greater than or equal to 1, wherein at least one reference picture in the knowledge base does not belong to the random access segment in which the current picture is located; and encoding the current picture into a first video bitstream according to the K reference pictures. However, Tao teaches selecting, from a knowledge base (paragraph [119], Tao discloses that in fig.3, video encoder 20A, there can be one or more reference picture to be potentially selected from a group of reference pictures in the reference picture list for providing better coding efficiency, and the pictures selected can be from element 64 of encoder 20A in fig.2 and also element 92 of decoder 30B of fig.3, thus ensuring that there is at least one a reference picture is selected from the reference picture list (ie. knowledge base) wherein at least one picture does not belong to a random access segment, and there will be no unexpected reference pictures) and based on a reference picture index of the current picture (paragraph [145], Tao discloses that a unique identifier (ie. 
index) for identifying the reference picture(s) is utilized for the decoding of the current picture by informing the video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value is utilized for identifying the order the current picture is displayed, in that a picture with a smaller valued POC is displayed earlier than a picture with a higher valued POC, thus, the POC functions as a reference picture index of the current picture), the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs (paragraph [145], Tao discloses that a unique identifier (ie. index) for identifying the reference picture(s) is utilized for the decoding of the current picture by informing the video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value is utilized for identifying the order the current picture is displayed during the decoding process of the first bitstream, in that a picture with a smaller valued POC is displayed earlier than a picture with a higher valued POC, thus, the POC functions as a reference picture index of the current picture), K reference pictures of the current picture, K being an integer greater than or equal to 1 (paragraph [119], Tao discloses that in fig.3, video encoder 20A, there can be one or more reference picture to be potentially selected from a group of reference pictures in the reference picture list for providing better coding efficiency, and the pictures selected can be from element 64 of encoder 20A in fig.2 and also element 92 of decoder 30B of fig.3, thus ensuring that there is at least one a reference picture is selected from the reference picture list (ie. 
knowledge base), wherein at least one picture does not belong to a random access segment, and there will be no unexpected reference pictures), wherein at least one reference picture in the knowledge base does not belong to the random access segment in which the current picture is located (paragraph [76], Tao discloses element 39 obtains a current picture to be encoded; paragraph [119], Tao discloses that in fig.3, video encoder 20A, one or more reference pictures can be selected from a group of reference pictures in the reference picture list for better coding efficiency, and the pictures selected can be from element 64 of encoder 20A in fig.2 and also element 92 of decoder 30B of fig.3, thus ensuring that at least one reference picture is selected from the reference picture list (i.e., knowledge base), wherein at least one picture does not belong to a random access segment, and there will be no unexpected reference pictures; and paragraph [124], Tao discloses that, in the HEVC video coding standard, reference picture lists are constructed to form a knowledge base that incorporates the reference pictures from video encoder 20A of fig.2 and from video decoder 30B of fig.3, and, based on the reference pictures accumulated from video encoder 20A and video decoder 30B, an inter-prediction encoding process takes place for encoding the current picture based on the selected reference picture or pictures as ascertained from the reference picture lists; further, paragraph [125], Tao discloses that multiple reference picture subsets and sets are utilized for High Efficiency Video Coding (HEVC), and paragraph [126], Tao discloses reference picture subsets and sets that include RefPicSetStCurrBefore, RefPicSetStCurrAfter, RefPicSetStFoll, RefPicSetLtCurr, and RefPicSetLtFoll, wherein RefPicSetStCurrBefore, RefPicSetStCurrAfter and RefPicSetLtCurr include all the
pictures that can be used for decoding the particular picture, and that RefPicSetStCurrBefore can include any reference pictures determined to be short-term reference pictures that are displayed before the particular picture, and RefPicSetStCurrAfter may include any reference pictures determined to be short-term reference pictures that are displayed after the particular picture, wherein RefPicSetLtCurr can include any long-term reference pictures, and RefPicSetStFoll and RefPicSetLtFoll can include any reference pictures that are not used for encoding or decoding the particular picture but may be used for pictures that follow the particular picture in decoding order; also, Tao discloses that RefPicSetStFoll can include any reference pictures determined to be short-term reference pictures, and RefPicSetLtFoll can include any reference pictures determined to be long-term reference pictures, and that in some instances the pictures in the sets can be exclusive (e.g., a picture in one of the sets may not be in any other set)); encoding the current picture into a first video bitstream according to the K reference pictures (paragraph [75], Tao discloses a video encoder 20A with an output for outputting the encoded video data at element 56, and paragraph [76], Tao discloses that the selected reference picture identified in video decoder 30B is used for inter-predicting a current picture; paragraph [124], Tao discloses that, in the HEVC video coding standard, reference picture lists are constructed to form a knowledge base that incorporates the reference pictures from video encoder 20A of fig.2 and from video decoder 30B of fig.3, and, based on the reference pictures accumulated from video encoder 20A and video decoder 30B, an inter-prediction decoding process takes place for decoding the current picture based on the selected reference picture or pictures as ascertained from the reference picture lists).
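For illustration only (not part of the Office Action record), the five HEVC reference picture subsets described above can be sketched in Python. The `RefPic` record and the POC values are hypothetical simplifications, not Tao's actual data structures; the partition rule follows the paragraph [126] description: short-term pictures usable for the current picture split by POC into "before"/"after", long-term usable pictures go to LtCurr, and unused pictures fall into the "Foll" subsets.

```python
from dataclasses import dataclass

@dataclass
class RefPic:
    poc: int            # picture order count (display order)
    long_term: bool     # long-term vs. short-term reference
    used_by_curr: bool  # usable for decoding the current picture

def partition_rps(curr_poc, refs):
    """Partition reference pictures into the five HEVC RPS subsets
    described in Tao's paragraph [126] (simplified sketch)."""
    rps = {"StCurrBefore": [], "StCurrAfter": [], "LtCurr": [],
           "StFoll": [], "LtFoll": []}
    for r in refs:
        if r.used_by_curr:
            if r.long_term:
                rps["LtCurr"].append(r.poc)
            elif r.poc < curr_poc:
                rps["StCurrBefore"].append(r.poc)
            else:
                rps["StCurrAfter"].append(r.poc)
        else:
            rps["LtFoll" if r.long_term else "StFoll"].append(r.poc)
    return rps
```

A picture appears in exactly one subset, consistent with Tao's note that the sets can be mutually exclusive.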
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal and Tao together as a whole for efficiently transmitting video data in real time for streaming live events and video broadcasts (Tao’s paragraph [20]). Aggarwal and Tao do not disclose encoding the at least one reference picture into a second video bitstream. However, Campbell teaches encoding the current picture into a first video bitstream (paragraph [83], fig.10, Campbell discloses that H.264 encoder 1002 encodes the current picture into a first video bitstream); and encoding the at least one reference picture into a second video bitstream (paragraph [83], fig.10, Campbell discloses that after exiting element 1002, the video stream enters frame parser 1004, which splits the encoded I-frames 1006 from the other frames; the encoded I-frames 1006 then enter VC-5 encoder 1010 for further processing that compresses the reference I-frames into a second bitstream in which all of the represented pictures are coded in the intra coding mode; thus, all of the frames that enter encoder 1010 are compressed only as I pictures, i.e., intra-coded pictures). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal, Tao and Campbell together as a whole for permitting high-quality video image display while saving costs by implementing less expensive hardware for image capturing and processing tasks (Campbell’s paragraph [38]). Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2009/0257508), Tao (US 2015/0156487) and Campbell (US 2015/0092076) in view of Yu (US 2014/0086557).
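As a rough sketch only, the Campbell-style split mapped above (frame parser 1004 routing I-frames to a second encoder 1010) can be modeled as follows. The frame records and the `"type"` labels are hypothetical simplifications introduced for illustration, not Campbell's actual data structures.

```python
def split_bitstream(frames):
    """Route intra-coded I-frames to a second stream (cf. Campbell's
    frame parser 1004 feeding VC-5 encoder 1010); all other frame
    types remain in the first stream. Hypothetical simplified model."""
    first, second = [], []
    for frame in frames:
        (second if frame["type"] == "I" else first).append(frame)
    return first, second
```

The second stream contains only intra-coded pictures, matching the examiner's characterization that every picture entering encoder 1010 is an I picture.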
Regarding claim 7, Aggarwal, Tao and Campbell do not disclose wherein the knowledge base comprises a key picture in a video sequence to which the current picture belongs, and the key picture comprises at least one of a scene cut picture or a background picture. However, Yu teaches wherein the knowledge base comprises a key picture in a video sequence to which the current picture belongs, and the key picture comprises at least one of a scene cut picture or a background picture (paragraph [55], Yu discloses detecting a scene change or scene cut, after which the key frame is retrieved and stored in the knowledge base for later use in ascertaining the key frame or key picture to which the current picture belongs in the sequence of video images at the display terminal). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal, Tao, Campbell and Yu together as a whole for generating a clear distinction between scenes in order to efficiently produce a smooth display of a television program for viewing at the display terminal. Regarding claim 8, Aggarwal, Tao and Campbell do not disclose wherein the scene cut picture is obtained by performing scene cut detection on the video sequence to which the current picture belongs, or by performing background modeling on the video sequence to which the current picture belongs.
However, Yu teaches wherein the scene cut picture is obtained by performing scene cut detection on the video sequence to which the current picture belongs, or by performing background modeling on the video sequence to which the current picture belongs (paragraph [55], Yu discloses detecting a scene change or scene cut, after which the key frame is retrieved and stored in the knowledge base for later use in ascertaining the key frame or key picture to which the current picture belongs in the sequence of video images at the display terminal). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Aggarwal, Tao, Campbell and Yu together as a whole for generating a clear distinction between scenes in order to efficiently produce a smooth display of a television program for viewing at the display terminal.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claims 1, 3-5, 9 and 11-13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5 and 9-13 of U.S. Patent No. 11,889,058 in view of Aggarwal (US 2009/0257508). Claim 1 of present Application ’726 is similar to the combination of claims 1 and 2 of Patent ’058 in that the combination of claims 1 and 2 of Patent ’058 discloses most of the limitations of claim 1 of present Application ’726. The combination of claims 1 and 2 of Patent ’058 does not disclose “A picture decoding method for implementing a random access capability in a video, comprising: determining a random access segment in the video; obtaining a current picture from the video, the current picture belonging to the random access segment of the video…” However, Aggarwal teaches a picture decoding method for implementing a random access capability in a video (paragraph [29], Aggarwal discloses decoding pictures to be output to a video display 208, wherein paragraph [35], Aggarwal discloses that video data can be accessed for decoding and jumping to frame segments (i.e., random access segments) as delineated by random access points in order to display the selected frame segment for viewing, and paragraph [36], Aggarwal discloses performing video playback and trick mode operations for displaying video images by utilizing random access capability), comprising: determining a random access segment in the video (paragraph [39], Aggarwal discloses an encoded bitstream 302 comprises a frame stream (i.e.,
sequence of frames) with random access points 308a-308d, wherein fig.3A illustrates that random access segments 306a, 306b and 306c are formed between sequential random access points, in that random access segment 306a (Frame Segment 1) comprises the frames between random access points 308a-308b, random access segment 306b (Frame Segment 2) comprises the frames between random access points 308b-308c, and random access segment 306c (Frame Segment 3) comprises the frames between random access points 308c-308d, etc.); obtaining a current picture from the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that, in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized), the current picture belonging to the random access segment of the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that, in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the combination of claims 1 and 2 of Patent ’058 and Aggarwal together as a whole for enabling a user to jump from one video segment to another video segment in an efficient manner while playing back video data (Aggarwal’s paragraph [20]). Claim 3 of present Application ’726 is similar to claim 3 of Patent ’058. Thus, claim 3 of present Application ’726 is anticipated by claim 3 of Patent ’058. Claim 4 of present Application ’726 is similar to claim 4 of Patent ’058. Thus, claim 4 of present Application ’726 is anticipated by claim 4 of Patent ’058.
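Purely as an illustration of the segment structure the examiner maps from Aggarwal's fig.3A (the frame indices below are hypothetical, not Aggarwal's), consecutive random access points delimit the frame segments:

```python
def frame_segments(rap_positions):
    """Pair consecutive random access points (RAPs) into frame
    segments, cf. Aggarwal's fig.3A where segments 306a-306c lie
    between RAPs 308a-308d. Positions are hypothetical frame indices."""
    return [(rap_positions[i], rap_positions[i + 1])
            for i in range(len(rap_positions) - 1)]
```

Four RAPs thus yield three segments, mirroring Frame Segments 1-3 between RAPs 308a-308d.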
Claim 5 of present Application ’726 is similar to claim 5 of Patent ’058. Thus, claim 5 of present Application ’726 is anticipated by claim 5 of Patent ’058. Claim 9 of present Application ’726 is similar to the combination of claims 9 and 10 of Patent ’058 in that the combination of claims 9 and 10 of Patent ’058 discloses most of the limitations of claim 9 of present Application ’726. The combination of claims 9 and 10 of Patent ’058 does not disclose “A picture decoding apparatus for implementing a random access capability in a video, comprising… determine a random access segment in the video; obtain a current picture from the video, the current picture belonging to the random access segment of the video”. However, Aggarwal teaches a picture decoding apparatus for implementing a random access capability in a video (paragraph [29], Aggarwal discloses decoding pictures to be output to a video display 208, wherein paragraph [35], Aggarwal discloses that video data can be accessed for decoding and jumping to frame segments (i.e., random access segments) as delineated by random access points in order to display the selected frame segment for viewing, and paragraph [36], Aggarwal discloses performing video playback and trick mode operations for displaying video images by utilizing random access capability), comprising… determine a random access segment in the video (paragraph [39], Aggarwal discloses an encoded bitstream 302 comprises a frame stream (i.e.,
sequence of frames) with random access points 308a-308d, wherein fig.3A illustrates that random access segments 306a, 306b and 306c are formed between sequential random access points, in that random access segment 306a (Frame Segment 1) comprises the frames between random access points 308a-308b, random access segment 306b (Frame Segment 2) comprises the frames between random access points 308b-308c, and random access segment 306c (Frame Segment 3) comprises the frames between random access points 308c-308d, etc.); obtain a current picture from the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that, in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized), the current picture belonging to the random access segment of the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that, in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the combination of claims 9 and 10 of Patent ’058 and Aggarwal together as a whole for enabling a user to jump from one video segment to another video segment in an efficient manner while playing back video data (Aggarwal’s paragraph [20]). Claim 11 of present Application ’726 is similar to claim 11 of Patent ’058. Thus, claim 11 of present Application ’726 is anticipated by claim 11 of Patent ’058. Claim 12 of present Application ’726 is similar to claim 12 of Patent ’058. Thus, claim 12 of present Application ’726 is anticipated by claim 12 of Patent ’058.
Claim 13 of present Application ’726 is similar to claim 13 of Patent ’058. Thus, claim 13 of present Application ’726 is anticipated by claim 13 of Patent ’058. Claims 1 and 3-5 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5 of U.S. Patent No. 11,303,888 in view of Aggarwal (US 2009/0257508). Claim 1 of present Application ’726 is similar to the combination of claims 1 and 2 of Patent ’888 in that the combination of claims 1 and 2 of Patent ’888 discloses most of the limitations of claim 1 of present Application ’726. The combination of claims 1 and 2 of Patent ’888 does not disclose “A picture decoding method for implementing a random access capability in a video, comprising: determining a random access segment in the video; obtaining a current picture from the video, the current picture belonging to the random access segment of the video…” However, Aggarwal teaches a picture decoding method for implementing a random access capability in a video (paragraph [29], Aggarwal discloses decoding pictures to be output to a video display 208, wherein paragraph [35], Aggarwal discloses that video data can be accessed for decoding and jumping to frame segments (i.e., random access segments) as delineated by random access points in order to display the selected frame segment for viewing, and paragraph [36], Aggarwal discloses performing video playback and trick mode operations for displaying video images by utilizing random access capability), comprising: determining a random access segment in the video (paragraph [39], Aggarwal discloses an encoded bitstream 302 comprises a frame stream (i.e.,
sequence of frames) with random access points 308a-308d, wherein fig.3A illustrates that random access segments 306a, 306b and 306c are formed between sequential random access points, in that random access segment 306a (Frame Segment 1) comprises the frames between random access points 308a-308b, random access segment 306b (Frame Segment 2) comprises the frames between random access points 308b-308c, and random access segment 306c (Frame Segment 3) comprises the frames between random access points 308c-308d, etc.); obtaining a current picture from the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that, in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized), the current picture belonging to the random access segment of the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that, in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the combination of claims 1 and 2 of Patent ’888 and Aggarwal together as a whole for enabling a user to jump from one video segment to another video segment in an efficient manner while playing back video data (Aggarwal’s paragraph [20]). Claim 3 of present Application ’726 is similar to claim 3 of Patent ’888. Thus, claim 3 of present Application ’726 is anticipated by claim 3 of Patent ’888. Claim 4 of present Application ’726 is similar to claim 4 of Patent ’888. Thus, claim 4 of present Application ’726 is anticipated by claim 4 of Patent ’888.
Claim 5 of present Application ’726 is similar to claim 5 of Patent ’888. Thus, claim 5 of present Application ’726 is anticipated by claim 5 of Patent ’888. Claims 6-8 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6-8 of U.S. Patent No. 11,303,888 in view of Aggarwal (US 2009/0257508) and further in view of Tao (US 2015/0156487). Claim 6 of present Application ’726 is similar to claim 6 of Patent ’888 in that claim 6 of Patent ’888 discloses most of the limitations of claim 6 of present Application ’726. Claim 6 of Patent ’888 does not disclose “A picture encoding method for implementing a random access capability in a video, comprising: determining a random access segment in the video; obtaining a current picture from the video, the current picture belonging to the random access segment of the video”. However, Aggarwal teaches a picture encoding method for implementing a random access capability in a video (paragraph [20], Aggarwal discloses encoding video streams with MPEG1/2, AVC (H.263) and/or VC1; paragraph [23], Aggarwal discloses that terrestrial TV head end 104 can broadcast or send out digitally encoded terrestrial TV signals; thus, Aggarwal discloses that video images are encoded at the head end 104 and transmitted to users via reception over TV antenna 108; wherein paragraph [35], Aggarwal discloses that video data can be accessed for decoding and jumping to frame segments (i.e., random access segments) as delineated by random access points in order to display the selected frame segment for viewing, and paragraph [36], Aggarwal discloses performing video playback and trick mode operations for displaying video images by utilizing random access capability), comprising: determining a random access segment in the video (paragraph [39], Aggarwal discloses an encoded bitstream 302 comprises a frame stream (i.e.,
sequence of frames) with random access points 308a-308d, wherein fig.3A illustrates that random access segments 306a, 306b and 306c are formed between sequential random access points, in that random access segment 306a (Frame Segment 1) comprises the frames between random access points 308a-308b, random access segment 306b (Frame Segment 2) comprises the frames between random access points 308b-308c, and random access segment 306c (Frame Segment 3) comprises the frames between random access points 308c-308d, etc.); obtaining a current picture from the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that, in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized), the current picture belonging to the random access segment of the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that, in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 6 of Patent ’888 and Aggarwal together as a whole for enabling a user to jump from one video segment to another video segment in an efficient manner while playing back video data (Aggarwal’s paragraph [20]). Claim 6 of Patent ’888 and Aggarwal do not disclose “…and based on a reference picture index of the current picture, the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs”. However, Tao discloses based on a reference picture index of the current picture (paragraph [145], Tao discloses that a unique identifier (i.e.,
index) for identifying the reference picture(s) is utilized for decoding the current picture by informing video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value identifies the order in which the current picture is displayed, in that a picture with a smaller POC value is displayed earlier than a picture with a higher POC value; thus, the POC functions as a reference picture index of the current picture), the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs (paragraph [145], Tao discloses that a unique identifier (i.e., index) for identifying the reference picture(s) is utilized for decoding the current picture by informing video encoder 20A of the preferred set of reference pictures in element 92 of video decoder 30B that can be used in encoder 20A, and that a picture order count (POC) value identifies the order in which the current picture is displayed during the decoding of the first bitstream, in that a picture with a smaller POC value is displayed earlier than a picture with a higher POC value; thus, the POC functions as a reference picture index of the current picture). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 6 of Patent ’888, Aggarwal and Tao together as a whole for efficiently transmitting video data in real time for streaming live events and video broadcasts (Tao’s paragraph [20]). Claim 7 of present Application ’726 is similar to claim 7 of Patent ’888. Thus, claim 7 of present Application ’726 is anticipated by claim 7 of Patent ’888. Claim 8 of present Application ’726 is similar to claim 8 of Patent ’888. Thus, claim 8 of present Application ’726 is anticipated by claim 8 of Patent ’888.
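As a minimal sketch of the POC ordering relied on above (a smaller POC value is displayed earlier than a larger one), display order simply sorts pictures by POC. The (decode_index, poc) pairs below are hypothetical illustration values, not Tao's data.

```python
def display_order(pictures):
    """Sort pictures by picture order count (POC): a picture with a
    smaller POC is displayed before one with a larger POC (cf. Tao's
    paragraph [145]). Each picture is a (decode_index, poc) pair."""
    return [idx for idx, _poc in sorted(pictures, key=lambda p: p[1])]
```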
Claims 1 and 3-5 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 and 5-6 of U.S. Patent No. 10,917,638 in view of Aggarwal (US 2009/0257508). Claim 1 of present Application ’726 is similar to the combination of claims 1 and 2 of Patent ’638 in that the combination of claims 1 and 2 of Patent ’638 discloses most of the limitations of claim 1 of present Application ’726. The combination of claims 1 and 2 of Patent ’638 does not disclose “A picture decoding method for implementing a random access capability in a video, comprising: determining a random access segment in the video; obtaining a current picture from the video, the current picture belonging to the random access segment of the video…” However, Aggarwal teaches a picture decoding method for implementing a random access capability in a video (paragraph [29], Aggarwal discloses decoding pictures to be output to a video display 208, wherein paragraph [35], Aggarwal discloses that video data can be accessed for decoding and jumping to frame segments (i.e., random access segments) as delineated by random access points in order to display the selected frame segment for viewing, and paragraph [36], Aggarwal discloses performing video playback and trick mode operations for displaying video images by utilizing random access capability), comprising: determining a random access segment in the video (paragraph [39], Aggarwal discloses an encoded bitstream 302 comprises a frame stream (i.e.,
sequence of frames) with random access points 308a-308d, wherein fig.3A illustrates that random access segments 306a, 306b and 306c are formed between sequential random access points, in that random access segment 306a (Frame Segment 1) comprises the frames between random access points 308a-308b, random access segment 306b (Frame Segment 2) comprises the frames between random access points 308b-308c, and random access segment 306c (Frame Segment 3) comprises the frames between random access points 308c-308d, etc.); obtaining a current picture from the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that, in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized), the current picture belonging to the random access segment of the video (paragraph [61], Aggarwal discloses that a current picture of video can be obtained from the present RAP (random access point) at step 414, and paragraph [62], Aggarwal discloses that, in cases where the present RAP is not skipped, the frames within the present frame segment can be utilized). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the combination of claims 1 and 2 of Patent ’638 and Aggarwal together as a whole for enabling a user to jump from one video segment to another video segment in an efficient manner while playing back video data (Aggarwal’s paragraph [20]). Claim 3 of present Application ’726 is similar to claim 3 of Patent ’638. Thus, claim 3 of present Application ’726 is anticipated by claim 3 of Patent ’638. Claim 4 of present Application ’726 is similar to claim 5 of Patent ’638. Thus, claim 4 of present Application ’726 is anticipated by claim 5 of Patent ’638.
Claim 5 of present Application ‘726 is similar to claim 6 of Patent ‘638. Thus, claim 5 of present Application ‘726 is anticipated by claim 6 of Patent ‘638. Peruse the table below. Peruse table below. Present Application 18/395,726 US Patent No. 11,889,058 US Patent No. 11,303,888 US Patent No. 10,917,638 Claim 1. A picture decoding method, comprising: obtaining a current picture from a video; selecting, from a knowledge base and based on a reference picture index of the current picture, the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs, K reference pictures of the current picture, K being an integer greater than or equal to 1, wherein at least one reference picture in the knowledge base does not belong to a random access segment in which the current picture is located; and decoding the current picture according to the K reference pictures. Claim 1. A picture decoding method, comprising: obtaining a current picture from a video, wherein the current picture is a to-be-decoded picture; selecting, from a knowledge base, K reference pictures of the current picture, wherein at least one reference picture in the knowledge base does not belong to a random access segment in which the current picture is located, K being an integer greater than or equal to 1, the random access segment comprising a picture sequence arranged in a decoding order from a closest random access point before the current picture to a closest random access point after the current picture; and decoding the current picture according to the K reference pictures, wherein the decoding the current picture according to the K reference pictures includes: adding the K reference pictures to a reference picture list of the current picture; and decoding the current picture according to a reference picture in the reference picture list. Claim 2. 
The method according to claim 1, wherein the selecting, from the knowledge base, the K reference pictures of the current picture comprises: selecting the K reference pictures based on a reference picture index of the current picture, the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs.

Claim 1. A picture decoding method, comprising: selecting, from a knowledge base, K reference pictures of a current picture, wherein at least one reference picture in the knowledge base does not belong to a random access segment in which the current picture is located and wherein K is an integer greater than or equal to 1, wherein the current picture belongs to a first video bitstream and the at least one reference picture in the knowledge base is obtained by decoding a second video bitstream, wherein the random access segment is a picture sequence arranged in a decoding order from a closest random access point before the current picture to a closest random access point after the current picture; and decoding the current picture according to the K reference pictures.

Claim 2. The method according to claim 1, wherein the selecting, from a knowledge base, the K reference pictures of the current picture comprises: selecting, from the knowledge base, the K reference pictures of the current picture based on a reference picture index of the current picture, wherein the reference picture index is obtained by decoding a first video bitstream to which the current picture belongs.

Claim 1.
A picture decoding method, comprising: selecting, from a knowledge base, K reference pictures of a current picture, wherein at least one reference picture in the knowledge base does not belong to a random access segment in which the current picture is located and wherein K is an integer greater than or equal to 1, wherein the current picture belongs to a first video bitstream and the at least one reference picture in the knowledge base is obtained by decoding a second video bitstream from which any decoded picture is only in intra decoding mode, wherein the random access segment is a picture sequence arranged in a decoding order from a closest random access point before the current picture to a closest random access point after the current picture; and decoding the current picture according to the K reference pictures.

Claim 2. The method according to claim 1, wherein the selecting, from a knowledge base, K reference pictures of the current picture comprises: selecting, from the knowledge base, the K reference pictures of the current picture based on a reference picture index of the current picture, wherein the reference picture index is obtained by decoding a first video bitstream to which the current picture belongs.

Claim 3. The method according to claim 2, wherein a reference picture that matches the reference picture index comprises a reconstructed picture located before the random access segment in which the current picture is located.

Claim 3. The method according to claim 2, wherein a reference picture that matches the reference picture index comprises a reconstructed picture located before the random access segment in which the current picture is located.

Claim 3. The method according to claim 2, wherein a reference picture that matches the reference picture index is a reconstructed picture located before the random access segment in which the current picture is located.

Claim 3.
The method according to claim 2, wherein a reference picture that matches the reference picture index is a reconstructed picture located before the random access segment in which the current picture is located.

Claim 4. The method according to claim 3, wherein the reference picture index of the current picture indicates at least one of: a number of a reference picture, a picture feature of the reference picture, or a picture feature of the current picture.

Claim 4. The method according to claim 3, wherein the reference picture index of the current picture indicates at least one of: a number of a reference picture, a picture feature of a reference picture, or a picture feature of the current picture.

Claim 4. The method according to claim 3, wherein the reference picture index of the current picture indicates at least one of a number of a reference picture, a picture feature of a reference picture, or a picture feature of the current picture.

Claim 5. The method according to claim 2, wherein the reference picture index of the current picture indicates at least one of a number of a reference picture, a picture feature of a reference picture, or a picture feature of the current picture.

Claim 5. The method according to claim 4, wherein if the reference picture index indicates the picture feature of the current picture, then picture features of the K reference pictures match the picture feature of the current picture.

Claim 5. The method according to claim 4, wherein if the reference picture index indicates the picture feature of the current picture, then picture features of the K reference pictures match the picture feature of the current picture.

Claim 5. The method according to claim 4, wherein if the reference picture index indicates the picture feature of the current picture, then picture features of the K reference pictures match the picture feature of the current picture.

Claim 6.
The method according to claim 5, wherein if the reference picture index indicates the picture feature of the current picture, then picture features of the K reference pictures match the picture feature of the current picture.

Claim 6. A picture encoding method, comprising: obtaining a current picture; selecting, from a knowledge base and based on a reference picture index of the current picture, the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs, K reference pictures of the current picture, K being an integer greater than or equal to 1, wherein at least one reference picture in the knowledge base does not belong to a random access segment in which the current picture is located; encoding the current picture into a first video bitstream according to the K reference pictures; and encoding the at least one reference picture into a second video bitstream.

Claim 6. A picture encoding method, comprising: selecting, from a knowledge base, K reference pictures of a current picture, wherein at least one reference picture in the knowledge base does not belong to a random access segment in which the current picture is located, and wherein K is an integer greater than or equal to 1, wherein the random access segment is a picture sequence arranged in a decoding order from a closest random access point before the current picture to a closest random access point after the current picture; encoding the current picture into a first video bitstream according to the K reference pictures; and encoding the at least one reference picture into a second video bitstream.

Claim 7. The method according to claim 6, wherein the knowledge base comprises a key picture in a video sequence to which the current picture belongs, and the key picture comprises at least one of a scene cut picture or a background picture.

Claim 7.
The method according to claim 6, wherein the knowledge base comprises a key picture in a video sequence to which the current picture belongs, and the key picture in the video sequence to which the current picture belongs comprises at least one of a scene cut picture or a background picture in the video sequence to which the current picture belongs.

Claim 8. The method according to claim 7, wherein the scene cut picture is obtained by performing scene cut detection on the video sequence to which the current picture belongs, or by performing background modeling on the video sequence to which the current picture belongs.

Claim 8. The method according to claim 7, wherein the scene cut picture is obtained by performing scene cut detection on the video sequence to which the current picture belongs, or the background picture is obtained by performing background modeling on the video sequence to which the current picture belongs.

Claim 9. A picture decoding apparatus, comprising: a memory storing instructions; and at least one processor in communication with the memory, the at least one processor configured, upon execution of the instructions, to perform the following steps: obtain a current picture of a video; select, from a knowledge base and based on a reference picture index of the current picture, the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs, K reference pictures of the current picture, K being an integer greater than or equal to 1, wherein at least one reference picture in the knowledge base does not belong to a random access segment in which the current picture is located; and decode the current picture according to the K reference pictures.

Claim 9.
A picture decoding apparatus, comprising: a memory storing instructions; and at least one processor in communication with the memory, the at least one processor configured, upon execution of the instructions, to perform the following steps: obtain a current picture of a video, wherein the current picture is a to-be-decoded picture; select, from a knowledge base, K reference pictures of the current picture, wherein at least one reference picture in the knowledge base does not belong to a random access segment in which the current picture is located, K being an integer greater than or equal to 1, the random access segment comprising a picture sequence arranged in a decoding order from a closest random access point before the current picture to a closest random access point after the current picture; and decode the current picture according to the K reference pictures, wherein the decoding the current picture according to the K reference pictures includes: adding the K reference pictures to a reference picture list of the current picture; and decoding the current picture according to a reference picture in the reference picture list.

Claim 10. The apparatus according to claim 9, wherein the processor further executes the instructions to select, from the knowledge base, the K reference pictures based on a reference picture index of the current picture, the reference picture index being obtained by decoding a first video bitstream to which the current picture belongs.

Claim 11. The apparatus according to claim 10, wherein a reference picture that matches the reference picture index comprises a reconstructed picture located before the random access segment in which the current picture is located.

Claim 11. The apparatus according to claim 10, wherein a reference picture that matches the reference picture index comprises a reconstructed picture located before the random access segment in which the current picture is located.

Claim 12.
The apparatus according to claim 10, wherein the reference picture index of the current picture indicates at least one of: a number of a reference picture, a picture feature of a reference picture, or a picture feature of the current picture.

Claim 12. The apparatus according to claim 10, wherein the reference picture index of the current picture indicates at least one of: a number of a reference picture, a picture feature of a reference picture, or a picture feature of the current picture.

Claim 13. The apparatus according to claim 12, wherein if the reference picture index indicates the picture feature of the current picture, then picture features of the K reference pictures match the picture feature of the current picture.

Claim 13. The apparatus according to claim 12, wherein if the reference picture index indicates the picture feature of the current picture, then picture features of the K reference pictures match the picture feature of the current picture.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN C WONG whose telephone number is (571)272-7341.
The examiner can normally be reached on Flex Monday-Thursday 9:30am-7:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V Perungavoor, can be reached on 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALLEN C WONG/
Primary Examiner, Art Unit 2488

Prosecution Timeline

Dec 25, 2023
Application Filed
Oct 18, 2024
Non-Final Rejection — §103, §112, §DP
Jan 22, 2025
Response Filed
Apr 01, 2025
Final Rejection — §103, §112, §DP
Jul 03, 2025
Response after Non-Final Action
Aug 04, 2025
Request for Continued Examination
Aug 06, 2025
Response after Non-Final Action
Aug 13, 2025
Non-Final Rejection — §103, §112, §DP
Nov 14, 2025
Response Filed
Jan 15, 2026
Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604009
IMAGE ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant Granted Apr 14, 2026
Patent 12598321
ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12587671
VIDEO ENCODING APPARATUS AND A VIDEO DECODING APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12581134
FEATURE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM STORING BITSTREAM
2y 5m to grant Granted Mar 17, 2026
Patent 12581091
METHODS AND APPARATUS OF ENCODING/DECODING VIDEO PICTURE DATA
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
83%
Grant Probability
95%
With Interview (+11.8%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 805 resolved cases by this examiner. Grant probability derived from career allow rate.
