Prosecution Insights
Last updated: April 19, 2026
Application No. 17/847,673

Scalable Video Coding Using Derivation Of Subblock Subdivision For Prediction From Base Layer

Final Rejection (§103, §DP)
Filed: Jun 23, 2022
Examiner: DANG, PHILIP
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: Dolby Video Compression LLC
OA Round: 8 (Final)

Grant Probability: 77% (Favorable)
OA Rounds: 9-10
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (363 granted / 470 resolved; +19.2% vs TC avg) — above average
Interview Lift: +33.2% (resolved cases with vs. without interview) — strong
Avg Prosecution: 2y 10m typical timeline; 49 currently pending
Total Applications: 519 (career history, across all art units)
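
The headline figures above follow from simple arithmetic on the reported counts. A minimal sketch of that arithmetic, assuming the "+19.2% vs TC avg" delta is expressed in percentage points (the variable names are illustrative, not taken from the analytics tool):

```python
# Back-of-the-envelope arithmetic behind the Examiner Intelligence card.
# Assumption: the "+19.2% vs TC avg" delta is in percentage points.
granted, resolved = 363, 470

allow_rate = 100 * granted / resolved              # ~77.2%, displayed as 77%
reported_delta = 19.2                              # "+19.2% vs TC avg"
implied_tc_average = allow_rate - reported_delta   # ~58.0%

print(f"career allow rate:  {allow_rate:.1f}%")
print(f"implied TC average: {implied_tc_average:.1f}%")
```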

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 25.5% (-14.5% vs TC avg)
Tech Center average is an estimate • Based on career data from 470 resolved cases
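
The four rows are internally consistent: under the assumption that each "vs TC avg" delta is the examiner's rate minus a single estimated Tech Center baseline (in percentage points), every statute implies the same baseline of roughly 40%. A short hedged check of that reading (labels and structure are illustrative, not from the tool):

```python
# Consistency check on the Statute-Specific Performance figures.
# Assumption: each "vs TC avg" delta = examiner rate minus one estimated
# Tech Center baseline, all values in percentage points.
figures = {
    "101": (4.5, -35.5),
    "103": (48.6, +8.6),
    "102": (11.1, -28.9),
    "112": (25.5, -14.5),
}

for statute, (rate, delta) in figures.items():
    baseline = rate - delta  # every row implies the same ~40.0% baseline
    print(f"§{statute}: {rate:.1f}% (implied TC baseline: {baseline:.1f}%)")
```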

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.

Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/03/2025 has been entered.

Examiner's Note
The instant application has a lengthy prosecution history and the examiner encourages the applicant to have a telephonic interview with the examiner prior to filing a response to the instant office action. Also, prior to the interview the examiner encourages the applicant to present multiple possible claim amendments, so as to enable the examiner to identify claim amendments that will advance prosecution in a meaningful manner.

Acknowledgment
Claims 1-20, 22-40, and 61 were cancelled. They are acknowledged by the examiner. Claims 21, 47, 54, and 62, amended on 12/03/2025, are acknowledged by the examiner. Claims 63-64, added on 12/03/2025, are acknowledged by the examiner.

Response to Arguments
Presented arguments with respect to claims 21, 47, 54, and their dependent claims have been fully considered, but some are rendered moot in view of the new ground of rejection necessitated by amendments initiated by the applicants. Examiner addresses the main arguments of the Applicant as below. Regarding the Double Patenting rejections, the Applicant indicated that it will be addressed after all other rejections are withdrawn. As a result, the Double Patenting rejections are maintained.

Double Patenting
The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a non-statutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA.
A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 21 and 41-62 of the instant application are rejected on the ground of non-statutory double patenting as being unpatentable over related claims of the U.S. Patent 11,477,467 B2. Although the claims at issue are not identical, they are not patentably distinct from each other. Claims 21 and 41-62 of the instant application are rejected on the ground of non-statutory double patenting as being unpatentable over related claims of the U.S. Patent 10,694,183 B2. Although the claims at issue are not identical, they are not patentably distinct from each other.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 21, 41, 47-48, 54-55, and 61-62 are rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. (US Patent Application Publication 2008/0165848 A1), (“Ye”), in view of Zhai et al. (US Patent 7,847,861 B2), (“Zhai”), in view of Yi et al. (US Patent Application Publication 2012/0195364 A1), (“Yi”), in view of Cha et al. (US Patent Application Publication 2006/0233240 A1), (“Cha”), in view of Sole et al. (US Patent Application Publication 2010/0027897 A1), (“Sole”).

Regarding claim 21, Ye meets the claim limitations as follow. A video decoder (i.e. video decoder) [Ye: para. 0034] including a processor (i.e. a processor, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP).) [Ye: para. 0012] for decoding a video represented by a base layer signal and an enhancement layer signal ((i.e. video decoder 28 may be configured to support scalable video coding (SVC) for spatial scalability) [Ye: para. 0034] – Note: SVC includes a base layer and one or more enhancement layers), the video decoder (i.e. video decoder) [Ye: para. 0034] comprising: a first decoding unit (i.e. video decoder 28 may be included in one or more ) [Ye: para. 0042] including a base layer decoder (i.e. Video decoder 28 may comprise a combined base/enhancement decoder that decodes the video blocks associated with both base and enhancement layers) [Ye: para. 0036] configured to reconstruct (i.e. a video decoder to reconstruct) [Ye: para. 0033], using a processor (i.e. be executed in a processor) [Ye: para. 0012], the base layer signal based on a base layer residual signal (i.e. summer 49B, which is positioned between inverse transform unit 44 and summer 51, also receives the upsampled information from upsampler 45. Summer 49B adds the up sampled block of data back to the output of inverse transform unit 44) [Ye: para. 0056] from a coded data stream (i.e. the encoded video bitstream) [Ye: para. 0065]; and a second decoding unit (i.e. video decoder 28 may be included in one or more ) [Ye: para. 0042] including an enhancement layer decoder (i.e. Video decoder 28 may comprise a combined base/ enhancement decoder that decodes the video blocks associated with both base and enhancement layers) [Ye: para. 0036] configured to reconstruct (i.e. a video decoder to reconstruct) [Ye: para. 0033], using the processor (i.e. be executed in a processor) [Ye: para. 0012], the enhancement layer signal (i.e. decoding enhancement layer bitstream from an SVC bitstream) [Ye: para. 0033] in units of blocks ((i.e. video blocks) [Ye: para. 0004]; (i.e. enhancement layer video blocks) [Ye: para. 0023]) from the coded data stream ((i.e. the encoded video bitstream) [Ye: para. 0065]; (i.e. Video decoder 28 may comprise a combined base/ enhancement decoder that decodes the video blocks associated with both base and enhancement layers and combines the decoded video to reconstruct the frames of a video sequence. On the decoding side, the techniques of this disclosure, which involve up sampling of base layer data to the spatial resolution of enhancement layer video data so that the up sampled data may be used to code enhancement layer data, may be performed by video decoder 28.) [Ye: para. 0036]) based on a syntax element ((i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120]; (i.e. 
In SVC, whether residual prediction is used or not may be indicated using a one-bit flag ResPred associated with the macroblock, which may be coded as a macroblock level syntax element. If ResPred = 1, then the enhancement layer residual is coded after subtracting from it the base layer residual block. When the enhancement layer bitstream represents a video signal with higher spatial resolution, the base layer residual signal is upsampled to the resolution of the enhancement layer before being used in inter-layer prediction. This is the function of upsamplers 4S and S9 in FIGS. 2 and 3, i.e., generation of the upsampled video blocks. In SVC Joint Draft 8 (JD8), a bilinear filter is proposed for the upsampler in order to upsample the base layer residual signal, with some exceptions on base layer block boundaries.) [Ye: para. 0067]) in the coded data stream (i.e. the encoded video bitstream) [Ye: para. 0065] at least by decoding a predetermined block of the blocks (i.e. Video decoder 60 may perform an inter-decoding of blocks within video frames) [Ye: para. 0063], wherein the decoding comprises (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]: generating (i.e. divided) [Ye: para. 0044] a set of possible subblock subdivisions ((i.e. block partitions) [Ye: para. 0065]; (i.e. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard. Each video frame may be divided into a series of slices. Each slice may include a series of macro blocks, which may be arranged into sub-blocks. As an example, the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, 4 by 4 for luma components, and 8x8 for chroma components, as well as inter prediction in various block sizes, such as 16 by 16, 16 by 8, 8 by 16, 8 by 8, 8 by 4, 4 by 8 and 4 by 4 for luma components and corresponding scaled sizes for chroma components.) [Ye: para. 0043] – Note: Block partitioning is well defined in video coding standards, such as in H.264, H.265. Please see the NPL for further details); (i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit) [Ye: para. 0044]) including all possible subblock subdivisions for the predetermined block (i.e. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard. Each video frame may be divided into a series of slices. Each slice may include a series of macro blocks, which may be arranged into sub-blocks. As an example, the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, 4 by 4 for luma components, and 8x8 for chroma components, as well as inter prediction in various block sizes, such as 16 by 16, 16 by 8, 8 by 16, 8 by 8, 8 by 4, 4 by 8 and 4 by 4 for luma components and corresponding scaled sizes for chroma components.) [Ye: para. 0043] wherein each possible subblock subdivision corresponds to a possible manner for subdividing the predetermined block ((i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks) [Ye: para. 0044]; (i.e. 
Each slice may include a series of macro blocks, which may be arranged into sub-blocks) [Ye: para. 0043] – Note: Block partitioning is well defined in video coding standards, such as in H.264, H.265. Please see the NPL for further details) of the enhancement layer signal into subblocks (i.e. video blocks for one or more pixel locations of the enhancement layer video blocks that correspond to a location between two different edges of two different base layer video blocks) [Ye: para. 0062], selecting (i.e. select) [Ye: para. 0062] a set of eligible subblock subdivisions from the set of possible subblock subdivisions for the predetermined block, wherein at least one eligible subblock subdivision enables coding parameters of a co-located portion (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para. 0070] of the base layer signal ((i.e. The base layer pixels involved in the interpolation process belong to different base layer coding blocks) [Ye: para. 0085]; (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para. 0070]; (i.e. pixel locations of the upsampled video data that correspond to internal pixel locations of the enhancement layer video blocks and are located between the different base layer video blocks when the two different base layer video blocks define different coding modes) [Ye: claim 3]) to satisfy a similarity criterion ((i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]; (i.e. Interpolation may involve the generation of a weighted average for an upsampled value, wherein the weighted average is defined between two or more pixel values of the base layer. For nearest neighbor techniques, the upsampled value is defined as that of the pixel location in the base layer that is in closest spatial proximity to the upsampled pixel location. According to this disclosure, by using interpolation for some specific conditions of the upsampling, and nearest neighbor copying for other conditions, the coding of enhancement layer video blocks may be improved) [Ye: para. 0023]; (i.e. copying techniques may be used in upsampling base layer data on an adaptive basis) [Ye: para. 0007] – Note: Ye discloses techniques to generate the inter-layer signal by interpolating the base layer signal or copying the base layer signal to reconstruct the enhancement layer), selecting ((i.e. select between) [Ye: para. 0062]; (i.e. the decision as to whether to invoke interpolation or to copy from nearest neighboring pixel may be determined depending on the alignment between the base layer and the enhancement layer blocks) [Ye: para. 0082]), for the predetermined block (i.e. coding block) [Ye: para. 0078], a subblock subdivision from a set of eligible subblock subdivisions ((i.e. block partitions) [Ye: para. 0065]; (i.e. 
In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit.) [Ye: para. 0044]), wherein the predetermined block is subdivided into subblocks in accordance with the selected subblock subdivision ((i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit.) [Ye: para. 0044]; (i.e. Each slice may include a series of macro blocks, which may be arranged into sub-blocks) [Ye: para. 0043]), selecting ((i.e. select between) [Ye: para. 0062]; (i.e. the decision as to whether to invoke interpolation or to copy from nearest neighboring pixel may be determined depending on the alignment between the base layer and the enhancement layer blocks) [Ye: para. 0082]) the context model is selected from a plurality of context models based on a gradient in the co-located portion of the base layer signal (i.e. The upsampling can change the block boundaries. For example, if the base layer and the enhancement layer each define 4 by 4 pixel video blocks, up sampling of the base layer to define more pixels according to the spatial resolution of the enhancement layer results in the block boundaries of the base layer being different than those of the up sampled data. This observation can be exploited such that decisions regarding interpolation or nearest neighbor techniques may be based on whether the up sampled values correspond to edge pixel locations of the enhancement layer (i.e., block boundaries in the enhancement layer) and whether such locations also correspond to locations between block boundaries of the base layer) [Ye: para. 0047] – Note: Ye discloses a consideration of a change between a block in the enhancement layer and a co-located block in a base-later), decoding (i.e. Video decoder 60 may perform an inter-decoding of blocks within video frames) [Ye: para. 0063] the syntax element (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] related to the predetermined block (i.e. inter-decoding of blocks within video frames) [Ye: para. 0063] of the enhancement layer signal ((i.e. decoding enhancement layer bitstream) [Ye: para. 0033]; (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]) using a context model (i.e. CABAC) [Ye: para. 0058], and predictively reconstructing ((i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques) [Ye: para. 0006]; (i.e. the reconstructed enhancement layer video) [Ye: para. 0122]) the predetermined block using the selected subblock subdivision ((i.e. video blocks reconstructed from previously encoded blocks) [Ye: para. 0052]; (i.e. The residual video block can be sent to a video decoder along with the motion vector, and the decoder can use this information to reconstruct the original video block or an approximation of the original video block.) [Ye: para. 0004]) based on the syntax element (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120]. Ye does not explicitly disclose the following claim limitations (Emphasis Added). 
selecting a set of eligible subblock subdivisions from the set of possible subblock subdivisions for the predetermined block, wherein at least one eligible subblock subdivision enables coding parameters of a co-located portion of the base layer signal to satisfy a similarity criterion,selecting, for the predetermined block, a subblock subdivision from the set of eligible subblock subdivisions, selecting the context model is selected from a plurality of context models based on a gradient in the co-located portion of the base layer signal. In addition, in the same field of endeavor Zhai discloses a deficient limitation as follows: selecting, for the predetermined block, a subblock subdivision from the set of eligible subblock subdivisions (i.e. A plurality of subblocks of a macroblock are defined. A first subblock is selected from the defined subblocks or from subblocks of a neighboring macroblock) [Zhai: col. 5, line 64-66; Fig. 5], wherein at least one eligible subblock subdivision enables coding parameters of a co-located portion of the base layer signal to satisfy a similarity criterion (i.e. FIG. 4 is a flow chart showing the method for encoding video pictures using an encoder. A first and a second picture are generated from a video picture in step 40. The second picture has a higher resolution than the first picture and each macroblock in the first picture has a plurality of corresponding macroblocks in the second picture. The first picture is intra-coded on macroblock level in step 42. Macroblocks are intra predicted and for a first. The second picture is intra coded on macroblock level in step 44. Macroblocks corresponding to said first macroblock, instead of determining the intra prediction direction. The intra prediction direction of the first macro block of the first picture is reused) [Zhai: col. 5, line 59-60; Fig. 4] – Note: In this illustration, the first picture is a base-layer and the second picture is an enhancement layer. Zhai discloses that there is a collocated macroblock in the base layer for several macroblocks in the enhancement layer. The intra prediction mode of the first macroblock in the first picture can be reused for several macroblocks in the second picture). selecting, for the predetermined block, a subblock subdivision from the set of eligible subblock subdivisions (i.e. A plurality of subblocks of a macroblock are defined. A first subblock is selected from the defined subblocks or from subblocks of a neighboring macroblock) [Zhai: col. 5, line 64-66; Fig. 5]. It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye with Zhai to program the system to implement the method of Zhai. Therefore, the combination of Ye with Zhai will improve the coding efficiency [Zhai: col. 7, line 7-9]. Moreover, in the same field of endeavor Yi further discloses the eligible subblock subdivisions as follows: a set of eligible subblock subdivisions from the set of possible subblock subdivisions for the predetermined block ((i.e. A coding mode selected as having an acceptable coding quality for an adjacent pixel block may also have an acceptable coding quality for the current pixel block. For example, if a previously coded pixel block was coded using an 8x8 P-type coding mode, then the 8x8 P-type coding mode may have a greater weight then a 16x16 I-type coding mode for the pixel blocks adjacent to the previously coded block) [Yi: para 0032; Fig. 3]; (i.e. 
The coding mode(s) selected as having an acceptable coding quality for other pixel blocks in the frame may also have an acceptable coding quality for the current pixel block. The coding mode(s) used in the frame may be evaluated, such that the coding mode used the most often in the frame has the greatest influence on the weights of the available coding modes for the current pixel block. Or the coding mode( s) used the most often for the pixel blocks in a region of the frame nearest to the current pixel block may have a greater influence on the weights of the available coding modes for the current pixel block as compared to the coding mode( s) used in spatially distant pixel blocks) [Yi: para 0033; Figs. 3, 5], wherein at least one eligible subblock subdivision enables coding parameters ((i.e. Choose a coding mode from eligible modes) [Yi: Fig. 5]; (i.e. The controller 204 may select a coding mode to be utilized by the coding engine 203 and may control operation of the coding engine 203 to implement each coding mode by setting operational parameters. For example, for each coding mode, the controller 204 may set parameters determining the predictive coding of the pixel blocks) [Yi: para. 0025; Figs. 4-5]; (i.e. A coding mode selected as having an acceptable coding quality for an adjacent pixel block may also have an acceptable coding quality for the current pixel block. For example, if a previously coded pixel block was coded using an 8x8 P-type coding mode, then the 8x8 P-type coding mode may have a greater weight then a 16x16 I-type coding mode for the pixel blocks adjacent to the previously coded block) [Yi: para 0032; Fig. 3]) of a co-located portion of the base layer signal (i.e. wherein the indicator is pattern of coding assignments made to co-located pixel blocks) [Yi: claim 2] to satisfy a similarity criterion ((i.e. A selected coding mode may be used to code a single pixel block, multiple pixel blocks spatially or temporally adjacent to the pixel block, multiple pixel blocks with similar image content, a single frame, or a sequence of frames) [Yi: para. 0029] ; (i.e. A coding mode selected as having an acceptable coding quality for an adjacent pixel block may also have an acceptable coding quality for the current pixel block. For example, if a previously coded pixel block was coded using an 8x8 P-type coding mode, then the 8x8 P-type coding mode may have a greater weight then a 16x16 I-type coding mode for the pixel blocks adjacent to the previously coded block) [Yi: para 0032; Fig. 3]). It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye and Zhai with Yi to program the system to implement the method of Yi. Therefore, the combination of Ye and Zhai with Yi will improve the efficiency of coding mode decision process [Yi: para. 0013]. Ye, Zhai and Yi do not explicitly disclose the following claim limitations (Emphasis Added). selecting a context model from a plurality of context models. In addition, in the same field of endeavor Cha discloses a deficient limitation as follows: wherein the context model is selected from a plurality of context models (The method according to the fifth exemplary embodiment includes selecting a context model that offers the highest coding efficiency among context models used in the first through fourth exemplary embodiments and performing arithmetic coding according to the selected model) [Cha: para 0067; Fig. 
5] It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai and Yi with Cha to program the system to implement the Cha’s method. Therefore, the combination of Ye, Zhai and Yi with Cha will improve the efficiency of coding mode decision process [Cha: para. 0003, 0067, Abstract, Title]. In addition, in the same field of endeavor Sole further discloses the gradient of the collocated block as follows: a gradient in the co-located portion ((wherein the local variation across the block border, the local variation across the collocated block border, the local variation within the block and the local variation within the co-located block relate to one or more gradients) [Sole: claim 3; Fig. 7]; (Still another advantage/feature is the apparatus having the full-reference blocking artifact detector as described above, wherein a quantity of blockiness is determined responsive to a difference between a gradient at a block border in the original version of the picture and a gradient at a co-located block border in the processed version of the picture, and a gradient within a block, contiguous to the block border, in the original version of the picture and a gradient within a collocated block, contiguous to the co-located block border, in the processed version of the picture) [Sole: para 0104; Fig. 7]) It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai, Yi and Cha with Sole to program the system to implement the Sole’s method. Therefore, the combination of Ye, Zhai, Yi and Cha with Sole will improve the video visual quality [Sole: para. 0005]. Regarding claim 41, Ye meets the claim limitations as set forth in claim 21. Ye further meets the claim limitations as follow. wherein the selecting (i.e. select between) [Ye: para. 0062] the set of eligible subblock subdivisions (i.e. the decision as to whether to invoke interpolation or to copy from nearest neighboring pixel may be determined depending on the alignment between the base layer and the enhancement layer blocks) [Ye: para. 0082]) includes detecting one or more edges within the co-located portion of the base layer residual signal (i.e. An additional filter 47 may also be included to filter block edges of the base layer information prior to upsampling by upsampler) [Ye: para. 0055] or the base layer signal (i.e. The upsampling can change the block boundaries. For example, if the base layer and the enhancement layer each define 4 by 4 pixel video blocks, up sampling of the base layer to define more pixels according to the spatial resolution of the enhancement layer results in the block boundaries of the base layer being different than those of the up sampled data. This observation can be exploited such that decisions regarding interpolation or nearest neighbor techniques may be based on whether the up sampled values correspond to edge pixel locations of the enhancement layer (i.e., block boundaries in the enhancement layer) and whether such locations also correspond to locations between block boundaries of the base layer) [Ye: para. 0047]. Regarding claim 47, Ye meets the claim limitations as follow. A non-transitory computer-readable medium (i.e. the techniques may be realized at least in part by a computer-readable medium comprising instructions that, when executed, performs one or more of the methods described above. 
The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer) [Ye: para. 0125] for storing data associated with a video (i.e. The adjacent frame or frames may be retrieved from reference frame store 35, which may comprise any type of memory or data storage device to store video blocks reconstructed from previously encoded blocks) [Ye: para. 0051], comprising a data stream stored in the non-transitory computer-readable medium, the data stream comprising information related to an encoded represented by a base layer signal and an enhancement layer signal (i.e. video decoder 28 may be configured to support scalable video coding (SVC) for spatial scalability) [Ye: para. 0034] – Note: SVC includes a based layer and one or more enhancement layers), wherein the data stream (i.e. the encoded video bitstream) [Ye: para. 0064] is decoded using a plurality of operations (i.e. the techniques may be realized at least in part by a computer-readable medium comprising instructions that, when executed, performs one or more of the methods described above) [Ye: para. 0125] comprising:reconstructing (i.e. a video decoder to reconstruct) [Ye: para. 0033], using a processor (i.e. be executed in a processor) [Ye: para. 0012], the base layer signal based on a base layer residual signal (i.e. summer 49B, which is positioned between inverse transform unit 44 and summer 51, also receives the upsampled information from upsampler 45. Summer 49B adds the up sampled block of data back to the output of inverse transform unit 44) [Ye: para. 0056] from a coded data stream (i.e. the encoded video bitstream) [Ye: para. 0065]; and reconstructing (i.e. a video decoder to reconstruct) [Ye: para. 0033], using the processor (i.e. be executed in a processor) [Ye: para. 0012], the enhancement layer signal (i.e. decoding enhancement layer bitstream from an SVC bitstream) [Ye: para. 0033] in units of blocks ((i.e. video blocks) [Ye: para. 0004]; (i.e. enhancement layer video blocks) [Ye: para. 0023]) from the coded data stream ((i.e. the encoded video bitstream) [Ye: para. 0065]; (i.e. Video decoder 28 may comprise a combined base/ enhancement decoder that decodes the video blocks associated with both base and enhancement layers and combines the decoded video to reconstruct the frames of a video sequence. On the decoding side, the techniques of this disclosure, which involve up sampling of base layer data to the spatial resolution of enhancement layer video data so that the up sampled data may be used to code enhancement layer data, may be performed by video decoder 28.) [Ye: para. 0036]) based on a syntax element (((i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120]; (i.e. 
In SVC, whether residual prediction is used or not may be indicated using a one-bit flag ResPred associated with the macroblock, which may be coded as a macroblock level syntax element. If ResPred = 1, then the enhancement layer residual is coded after subtracting from it the base layer residual block. When the enhancement layer bitstream represents a video signal with higher spatial resolution, the base layer residual signal is upsampled to the resolution of the enhancement layer before being used in inter-layer prediction. This is the function of upsamplers 4S and S9 in FIGS. 2 and 3, i.e., generation of the upsampled video blocks. In SVC Joint Draft S (JDS), a bilinear filter is proposed for the upsampler in order to upsample the base layer residual signal, with some exceptions on base layer block boundaries.) [Ye: para. 0067]) in the coded data stream (i.e. the encoded video bitstream) [Ye: para. 0065] at least by decoding a predetermined block of the blocks (i.e. Video decoder 60 may perform an inter-decoding of blocks within video frames) [Ye: para. 0063], wherein the decoding comprises (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]: generating (i.e. divided) [Ye: para. 0044] a set of possible subblock subdivisions ((i.e. block partitions) [Ye: para. 0065]; (i.e. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard. Each video frame may be divided into a series of slices. Each slice may include a series of macro blocks, which may be arranged into sub-blocks. As an example, the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, 4 by 4 for luma components, and 8x8 for chroma components, as well as inter prediction in various block sizes, such as 16 by 16, 16 by 8, 8 by 16, 8 by 8, 8 by 4, 4 by 8 and 4 by 4 for luma components and corresponding scaled sizes for chroma components.) [Ye: para. 0043] – Note: Block partitioning is well defined in video coding standards, such as in H.264, H.265. Please see the NPL for further details); (i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit) [Ye: para. 0044]) including all possible subblock subdivisions for the predetermined block (i.e. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard. Each video frame may be divided into a series of slices. Each slice may include a series of macro blocks, which may be arranged into sub-blocks. As an example, the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, 4 by 4 for luma components, and 8x8 for chroma components, as well as inter prediction in various block sizes, such as 16 by 16, 16 by 8, 8 by 16, 8 by 8, 8 by 4, 4 by 8 and 4 by 4 for luma components and corresponding scaled sizes for chroma components.) [Ye: para. 0043] wherein each possible subblock subdivision corresponds to a possible manner for subdividing the predetermined block ((i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks) [Ye: para. 0044]; (i.e. 
Each slice may include a series of macro blocks, which may be arranged into sub-blocks) [Ye: para. 0043] – Note: Block partitioning is well defined in video coding standards, such as in H.264, H.265. Please see the NPL for further details) of the enhancement layer signal into subblocks (i.e. video blocks for one or more pixel locations of the enhancement layer video blocks that correspond to a location between two different edges of two different base layer video blocks) [Ye: para. 0062], selecting (i.e. select) [Ye: para. 0062] a set of eligible subblock subdivisions from the set of possible subblock subdivisions for the predetermined block, wherein at least one eligible subblock subdivision enables coding parameters of a co-located portion (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para. 0070] of the base layer signal ((i.e. The base layer pixels involved in the interpolation process belong to different base layer coding blocks) [Ye: para. 0085]; (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para. 0070]; (i.e. pixel locations of the upsampled video data that correspond to internal pixel locations of the enhancement layer video blocks and are located between the different base layer video blocks when the two different base layer video blocks define different coding modes) [Ye: claim 3]) to satisfy a similarity criterion ((i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]; (i.e. Interpolation may involve the generation of a weighted average for an upsampled value, wherein the weighted average is defined between two or more pixel values of the base layer. For nearest neighbor techniques, the upsampled value is defined as that of the pixel location in the base layer that is in closest spatial proximity to the upsampled pixel location. According to this disclosure, by using interpolation for some specific conditions of the upsampling, and nearest neighbor copying for other conditions, the coding of enhancement layer video blocks may be improved) [Ye: para. 0023]; (i.e. copying techniques may be used in upsampling base layer data on an adaptive basis) [Ye: para. 0007] – Note: Ye discloses techniques to generate the inter-layer signal by interpolating the base layer signal or copying the base layer signal to reconstruct the enhancement layer), selecting ((i.e. select between) [Ye: para. 0062]; (i.e. the decision as to whether to invoke interpolation or to copy from nearest neighboring pixel may be determined depending on the alignment between the base layer and the enhancement layer blocks) [Ye: para. 0082]), for the predetermined block (i.e. coding block) [Ye: para. 0078], a subblock subdivision from a set of eligible subblock subdivisions ((i.e. block partitions) [Ye: para. 0065]; (i.e. 
In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit.) [Ye: para. 0044]), wherein the predetermined block is subdivided into subblocks in accordance with the selected subblock subdivision ((i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit.) [Ye: para. 0044]; (i.e. Each slice may include a series of macro blocks, which may be arranged into sub-blocks) [Ye: para. 0043]), selecting ((i.e. select between) [Ye: para. 0062]; (i.e. the decision as to whether to invoke interpolation or to copy from nearest neighboring pixel may be determined depending on the alignment between the base layer and the enhancement layer blocks) [Ye: para. 0082]) the context model is selected from a plurality of context models based on a gradient in the co-located portion of the base layer signal (i.e. The upsampling can change the block boundaries. For example, if the base layer and the enhancement layer each define 4 by 4 pixel video blocks, up sampling of the base layer to define more pixels according to the spatial resolution of the enhancement layer results in the block boundaries of the base layer being different than those of the up sampled data. This observation can be exploited such that decisions regarding interpolation or nearest neighbor techniques may be based on whether the up sampled values correspond to edge pixel locations of the enhancement layer (i.e., block boundaries in the enhancement layer) and whether such locations also correspond to locations between block boundaries of the base layer) [Ye: para. 0047] – Note: Ye discloses a consideration of a change between a block in the enhancement layer and a co-located block in a base-later), decoding (i.e. Video decoder 60 may perform an inter-decoding of blocks within video frames) [Ye: para. 0063] the syntax element (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] related to the predetermined block (i.e. inter-decoding of blocks within video frames) [Ye: para. 0063] of the enhancement layer signal ((i.e. decoding enhancement layer bitstream) [Ye: para. 0033]; (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]) using a context model (i.e. CABAC) [Ye: para. 0058], and predictively reconstructing ((i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques) [Ye: para. 0006]; (i.e. the reconstructed enhancement layer video) [Ye: para. 0122]) the predetermined block using the selected subblock subdivision ((i.e. video blocks reconstructed from previously encoded blocks) [Ye: para. 0052]; (i.e. The residual video block can be sent to a video decoder along with the motion vector, and the decoder can use this information to reconstruct the original video block or an approximation of the original video block.) [Ye: para. 0004]) based on the syntax element (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120]. Ye does not explicitly disclose the following claim limitations (Emphasis Added). 
selecting a set of eligible subblock subdivisions from the set of possible subblock subdivisions for the predetermined block, wherein at least one eligible subblock subdivision enables coding parameters of a co-located portion of the base layer signal to satisfy a similarity criterion,selecting, for the predetermined block, a subblock subdivision from the set of eligible subblock subdivisions,selecting a context model from a plurality of context models. In addition, in the same field of endeavor Zhai discloses a deficient limitation as follows: selecting, for the predetermined block, a subblock subdivision from the set of eligible subblock subdivisions (i.e. A plurality of subblocks of a macroblock are defined. A first subblock is selected from the defined subblocks or from subblocks of a neighboring macroblock) [Zhai: col. 5, line 64-66; Fig. 5], wherein at least one eligible subblock subdivision enables coding parameters of a co-located portion of the base layer signal to satisfy a similarity criterion (i.e. FIG. 4 is a flow chart showing the method for encoding video pictures using an encoder. A first and a second picture are generated from a video picture in step 40. The second picture has a higher resolution than the first picture and each macroblock in the first picture has a plurality of corresponding macroblocks in the second picture. The first picture is intra-coded on macroblock level in step 42. Macroblocks are intra predicted and for a first. The second picture is intra coded on macroblock level in step 44. Macroblocks corresponding to said first macroblock, instead of determining the intra prediction direction. The intra prediction direction of the first macro block of the first picture is reused) [Zhai: col. 5, line 59-60; Fig. 4] – Note: In this illustration, the first picture is a base-layer and the second picture is an enhancement layer. Zhai discloses that there is a collocated macroblock in the base layer for several macroblocks in the enhancement layer. The intra prediction mode of the first macroblock in the first picture can be reused for several macroblocks in the second picture). selecting, for the predetermined block, a subblock subdivision from the set of eligible subblock subdivisions (i.e. A plurality of subblocks of a macroblock are defined. A first subblock is selected from the defined subblocks or from subblocks of a neighboring macroblock) [Zhai: col. 5, line 64-66; Fig. 5]. It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye with Zhai to program the system to implement the method of Zhai. Therefore, the combination of Ye with Zhai will improve the coding efficiency [Zhai: col. 7, line 7-9]. Moreover, in the same field of endeavor Yi further discloses the eligible subblock subdivisions as follows: a set of eligible subblock subdivisions from the set of possible subblock subdivisions for the predetermined block ((i.e. A coding mode selected as having an acceptable coding quality for an adjacent pixel block may also have an acceptable coding quality for the current pixel block. For example, if a previously coded pixel block was coded using an 8x8 P-type coding mode, then the 8x8 P-type coding mode may have a greater weight then a 16x16 I-type coding mode for the pixel blocks adjacent to the previously coded block) [Yi: para 0032; Fig. 3]; (i.e. 
The coding mode(s) selected as having an acceptable coding quality for other pixel blocks in the frame may also have an acceptable coding quality for the current pixel block. The coding mode(s) used in the frame may be evaluated, such that the coding mode used the most often in the frame has the greatest influence on the weights of the available coding modes for the current pixel block. Or the coding mode( s) used the most often for the pixel blocks in a region of the frame nearest to the current pixel block may have a greater influence on the weights of the available coding modes for the current pixel block as compared to the coding mode( s) used in spatially distant pixel blocks) [Yi: para 0033; Figs. 3, 5], wherein at least one eligible subblock subdivision enables coding parameters ((i.e. Choose a coding mode from eligible modes) [Yi: Fig. 5]; (i.e. The controller 204 may select a coding mode to be utilized by the coding engine 203 and may control operation of the coding engine 203 to implement each coding mode by setting operational parameters. For example, for each coding mode, the controller 204 may set parameters determining the predictive coding of the pixel blocks) [Yi: para. 0025; Figs. 4-5]; (i.e. A coding mode selected as having an acceptable coding quality for an adjacent pixel block may also have an acceptable coding quality for the current pixel block. For example, if a previously coded pixel block was coded using an 8x8 P-type coding mode, then the 8x8 P-type coding mode may have a greater weight then a 16x16 I-type coding mode for the pixel blocks adjacent to the previously coded block) [Yi: para 0032; Fig. 3]) of a co-located portion of the base layer signal (i.e. wherein the indicator is pattern of coding assignments made to co-located pixel blocks) [Yi: claim 2] to satisfy a similarity criterion ((i.e. A selected coding mode may be used to code a single pixel block, multiple pixel blocks spatially or temporally adjacent to the pixel block, multiple pixel blocks with similar image content, a single frame, or a sequence of frames) [Yi: para. 0029] ; (i.e. A coding mode selected as having an acceptable coding quality for an adjacent pixel block may also have an acceptable coding quality for the current pixel block. For example, if a previously coded pixel block was coded using an 8x8 P-type coding mode, then the 8x8 P-type coding mode may have a greater weight then a 16x16 I-type coding mode for the pixel blocks adjacent to the previously coded block) [Yi: para 0032; Fig. 3]). It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye and Zhai with Yi to program the system to implement the method of Yi. Therefore, the combination of Ye and Zhai with Yi will improve the efficiency of coding mode decision process [Yi: para. 0013]. Ye, Zhai and Yi do not explicitly disclose the following claim limitations (Emphasis Added). selecting the context model is selected from a plurality of context models. In addition, in the same field of endeavor Cha discloses a deficient limitation as follows: selecting the context model is selected from a plurality of context models (The method according to the fifth exemplary embodiment includes selecting a context model that offers the highest coding efficiency among context models used in the first through fourth exemplary embodiments and performing arithmetic coding according to the selected model.) [Cha: para 0067; Fig. 
5] It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai and Yi with Cha to program the system to implement the Cha’s method. Therefore, the combination of Ye, Zhai and Yi with Cha will improve the efficiency of coding mode decision process [Cha: para. 0003, 0067, Abstract, Title]. In addition, in the same field of endeavor Sole further discloses the gradient of the collocated block as follows: a gradient in the co-located portion ((wherein the local variation across the block border, the local variation across the collocated block border, the local variation within the block and the local variation within the co-located block relate to one or more gradients) [Sole: claim 3; Fig. 7]; (Still another advantage/feature is the apparatus having the full-reference blocking artifact detector as described above, wherein a quantity of blockiness is determined responsive to a difference between a gradient at a block border in the original version of the picture and a gradient at a co-located block border in the processed version of the picture, and a gradient within a block, contiguous to the block border, in the original version of the picture and a gradient within a collocated block, contiguous to the co-located block border, in the processed version of the picture) [Sole: para 0104; Fig. 7]) It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai, Yi and Cha with Sole to program the system to implement the Sole’s method. Therefore, the combination of Ye, Zhai, Yi and Cha with Sole will improve the video visual quality [Sole: para. 0005]. Regarding claim 48, Ye meets the claim limitations as set forth in claim 47. Ye further meets the claim limitations as follow. wherein the selecting (i.e. select between) [Ye: para. 0062] the set of eligible subblock subdivisions (i.e. the decision as to whether to invoke interpolation or to copy from nearest neighboring pixel may be determined depending on the alignment between the base layer and the enhancement layer blocks) [Ye: para. 0082]) includes detecting one or more edges within the co-located portion of the base layer residual signal (i.e. An additional filter 47 may also be included to filter block edges of the base layer information prior to upsampling by upsampler) [Ye: para. 0055] or the base layer signal (i.e. The upsampling can change the block boundaries. For example, ifthe base layer and the enhancement layer each define 4 by 4 pixel video blocks, up sampling of the base layer to define more pixels according to the spatial resolution of the enhancement layer results in the block boundaries of the base layer being different than those of the up sampled data. This observation can be exploited such that decisions regarding interpolation or nearest neighbor techniques may be based on whether the up sampled values correspond to edge pixel locations of the enhancement layer (i.e., block boundaries in the enhancement layer) and whether such locations also correspond to locations between block boundaries of the base layer) [Ye: para. 0047]. Regarding claim 54, Ye meets the claim limitations as follow. A video encoder for encoding a video (i.e. video encoder 22 ) [Ye: para. 0034], comprising: a first encoding unit (i.e. video encoder 22 ) [Ye: para. 0042] including a base layer encoder (i.e. base layer encoder 32) [Ye: para. 0035; Fig. 1] configured to, using a processor (i.e. 
be executed in a processor) [Ye: para. 0012], determine a base layer residual signal for a base layer of the video ((i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]; (i.e. Interpolation may involve the generation of a weighted average for an upsampled value, wherein the weighted average is defined between two or more pixel values of the base layer. For nearest neighbor techniques, the upsampled value is defined as that of the pixel location in the base layer that is in closest spatial proximity to the upsampled pixel location. According to this disclosure, by using interpolation for some specific conditions of the upsampling, and nearest neighbor copying for other conditions, the coding of enhancement layer video blocks may be improved) [Ye: para. 0023]; (i.e. copying techniques may be used in upsampling base layer data on an adaptive basis) [Ye: para. 0007] – Note: Ye discloses techniques to generate the inter-layer signal by interpolating the base layer signal or copying the base layer signal to reconstruct the enhancement layer), and encode (i.e. to encode) [Ye: para. 0043] into a data stream (i.e. the encoded video bitstream) [Ye: para. 0065] a base layer signal ((i.e. perform encoding of a base layer) [Ye: para. 0035]; (i.e. reconstructed residual signal in the base layer) [Ye: para. 0076]) based on the base layer residual signal ((i.e. summer 49B, which is positioned between inverse transform unit 44 and summer 51, also receives the upsampled information from upsampler 45. Summer 49B adds the up sampled block of data back to the output of inverse transform unit 44) [Ye: para. 0056]; (i.e. decoding of a base layer) [Ye: para. 0034]; (i.e. reconstructed residual signal in the base layer) [Ye: para. 0076]); and a second encoding unit (i.e. video encoder 22 ) [Ye: para. 0042] including an enhancement layer encoder (i.e. enhancement layer encoder 34) [Ye: para. 0035; Fig. 1] configured to encode (i.e. to encode) [Ye: para. 0043], using the processor (i.e. be executed in a processor) [Ye: para. 0012], a syntax element (((i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120]; (i.e. In SVC, whether residual prediction is used or not may be indicated using a one-bit flag ResPred associated with the macroblock, which may be coded as a macroblock level syntax element. If ResPred = 1, then the enhancement layer residual is coded after subtracting from it the base layer residual block. When the enhancement layer bitstream represents a video signal with higher spatial resolution, the base layer residual signal is upsampled to the resolution of the enhancement layer before being used in inter-layer prediction. This is the function of upsamplers 4S and S9 in FIGS. 2 and 3, i.e., generation of the upsampled video blocks. In SVC Joint Draft 8 (JD8), a bilinear filter is proposed for the upsampler in order to upsample the base layer residual signal, with some exceptions on base layer block boundaries.) [Ye: para. 
0067]) and the enhancement layer signal (i.e. decoding enhancement layer bitstream from an SVC bitstream) [Ye: para. 0033] in units of blocks ((i.e. video blocks) [Ye: para. 0004]; (i.e. enhancement layer video blocks) [Ye: para. 0023]) into the coded data stream ((i.e. the encoded video bitstream) [Ye: para. 0065]; (i.e. Video decoder 28 may comprise a combined base/ enhancement decoder that decodes the video blocks associated with both base and enhancement layers and combines the decoded video to reconstruct the frames of a video sequence. On the decoding side, the techniques of this disclosure, which involve up sampling of base layer data to the spatial resolution of enhancement layer video data so that the up sampled data may be used to code enhancement layer data, may be performed by video decoder 28.) [Ye: para. 0036]) at least by encoding a predetermined block of the blocks (i.e. inter-based predictive coding) [Ye: para. 0045], wherein the encoding comprises ((i.e., encoding : generating (i.e. divided) [Ye: para. 0044] a set of possible subblock subdivisions ((i.e. block partitions) [Ye: para. 0065]; (i.e. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard. Each video frame may be divided into a series of slices. Each slice may include a series of macro blocks, which may be arranged into sub-blocks. As an example, the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, 4 by 4 for luma components, and 8x8 for chroma components, as well as inter prediction in various block sizes, such as 16 by 16, 16 by 8, 8 by 16, 8 by 8, 8 by 4, 4 by 8 and 4 by 4 for luma components and corresponding scaled sizes for chroma components.) [Ye: para. 0043] – Note: Block partitioning is well defined in video coding standards, such as in H.264, H.265. Please see the NPL for further details); (i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit) [Ye: para. 0044]) including all possible subblock subdivisions for the predetermined block (i.e. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard. Each video frame may be divided into a series of slices. Each slice may include a series of macro blocks, which may be arranged into sub-blocks. As an example, the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, 4 by 4 for luma components, and 8x8 for chroma components, as well as inter prediction in various block sizes, such as 16 by 16, 16 by 8, 8 by 16, 8 by 8, 8 by 4, 4 by 8 and 4 by 4 for luma components and corresponding scaled sizes for chroma components.) [Ye: para. 0043] wherein each possible subblock subdivision corresponds to a possible manner for subdividing the predetermined block ((i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks) [Ye: para. 0044]; (i.e. Each slice may include a series of macro blocks, which may be arranged into sub-blocks) [Ye: para. 0043] – Note: Block partitioning is well defined in video coding standards, such as in H.264, H.265. 
Please see the NPL for further details) of the enhancement layer signal into subblocks (i.e. video blocks for one or more pixel locations of the enhancement layer video blocks that correspond to a location between two different edges of two different base layer video blocks) [Ye: para. 0062], selecting (i.e. select) [Ye: para. 0062] a set of eligible subblock subdivisions from the set of possible subblock subdivisions for the predetermined block, wherein at least one eligible subblock subdivision enables coding parameters of a co-located portion (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para. 0070] of the base layer signal ((i.e. The base layer pixels involved in the interpolation process belong to different base layer coding blocks) [Ye: para. 0085]; (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para. 0070]; (i.e. pixel locations of the upsampled video data that correspond to internal pixel locations of the enhancement layer video blocks and are located between the different base layer video blocks when the two different base layer video blocks define different coding modes) [Ye: claim 3]) to satisfy a similarity criterion ((i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]; (i.e. Interpolation may involve the generation of a weighted average for an upsampled value, wherein the weighted average is defined between two or more pixel values of the base layer. For nearest neighbor techniques, the upsampled value is defined as that of the pixel location in the base layer that is in closest spatial proximity to the upsampled pixel location. According to this disclosure, by using interpolation for some specific conditions of the upsampling, and nearest neighbor copying for other conditions, the coding of enhancement layer video blocks may be improved) [Ye: para. 0023]; (i.e. copying techniques may be used in upsampling base layer data on an adaptive basis) [Ye: para. 0007] – Note: Ye discloses techniques to generate the inter-layer signal by interpolating the base layer signal or copying the base layer signal to reconstruct the enhancement layer), selecting ((i.e. select between) [Ye: para. 0062]; (i.e. the decision as to whether to invoke interpolation or to copy from nearest neighboring pixel may be determined depending on the alignment between the base layer and the enhancement layer blocks) [Ye: para. 0082]), for the predetermined block (i.e. coding block) [Ye: para. 0078], a subblock subdivision from a set of eligible subblock subdivisions ((i.e. block partitions) [Ye: para. 0065]; (i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. 
Each slice may be an independently decodable unit.) [Ye: para. 0044]), wherein the predetermined block is subdivided into subblocks in accordance with the selected subblock subdivision ((i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit.) [Ye: para. 0044]; (i.e. Each slice may include a series of macro blocks, which may be arranged into sub-blocks) [Ye: para. 0043]), selecting ((i.e. select between) [Ye: para. 0062]; (i.e. the decision as to whether to invoke interpolation or to copy from nearest neighboring pixel may be determined depending on the alignment between the base layer and the enhancement layer blocks) [Ye: para. 0082]) the context model is selected from a plurality of context models based on a gradient in the co-located portion of the base layer signal (i.e. The upsampling can change the block boundaries. For example, if the base layer and the enhancement layer each define 4 by 4 pixel video blocks, up sampling of the base layer to define more pixels according to the spatial resolution of the enhancement layer results in the block boundaries of the base layer being different than those of the up sampled data. This observation can be exploited such that decisions regarding interpolation or nearest neighbor techniques may be based on whether the up sampled values correspond to edge pixel locations of the enhancement layer (i.e., block boundaries in the enhancement layer) and whether such locations also correspond to locations between block boundaries of the base layer) [Ye: para. 0047] – Note: Ye discloses a consideration of a change between a block in the enhancement layer and a co-located block in the base layer), decoding (i.e. Video decoder 60 may perform an inter-decoding of blocks within video frames) [Ye: para. 0063] the syntax element (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] related to the predetermined block (i.e. inter-decoding of blocks within video frames) [Ye: para. 0063] of the enhancement layer signal ((i.e. decoding enhancement layer bitstream) [Ye: para. 0033]; (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]) using a context model (i.e. CABAC) [Ye: para. 0058], and predictively reconstructing ((i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques) [Ye: para. 0006]; (i.e. the reconstructed enhancement layer video) [Ye: para. 0122]) the predetermined block using the selected subblock subdivision ((i.e. video blocks reconstructed from previously encoded blocks) [Ye: para. 0052]; (i.e. The residual video block can be sent to a video decoder along with the motion vector, and the decoder can use this information to reconstruct the original video block or an approximation of the original video block.) [Ye: para. 0004]) based on the syntax element (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120]. Ye does not explicitly disclose the following claim limitations (Emphasis Added).
selecting a set of eligible subblock subdivisions from the set of possible subblock subdivisions for the predetermined block, wherein at least one eligible subblock subdivision enables coding parameters of a co-located portion of the base layer signal to satisfy a similarity criterion,selecting, for the predetermined block, a subblock subdivision from the set of eligible subblock subdivisions, selecting the context model is selected from a plurality of context models based on a gradient in the co-located portion of the base layer signal. In addition, in the same field of endeavor Zhai discloses a deficient limitation as follows: selecting, for the predetermined block, a subblock subdivision from the set of eligible subblock subdivisions (i.e. A plurality of subblocks of a macroblock are defined. A first subblock is selected from the defined subblocks or from subblocks of a neighboring macroblock) [Zhai: col. 5, line 64-66; Fig. 5], wherein at least one eligible subblock subdivision enables coding parameters of a co-located portion of the base layer signal to satisfy a similarity criterion (i.e. FIG. 4 is a flow chart showing the method for encoding video pictures using an encoder. A first and a second picture are generated from a video picture in step 40. The second picture has a higher resolution than the first picture and each macroblock in the first picture has a plurality of corresponding macroblocks in the second picture. The first picture is intra-coded on macroblock level in step 42. Macroblocks are intra predicted and for a first. The second picture is intra coded on macroblock level in step 44. Macroblocks corresponding to said first macroblock, instead of determining the intra prediction direction. The intra prediction direction of the first macro block of the first picture is reused) [Zhai: col. 5, line 59-60; Fig. 4] – Note: In this illustration, the first picture is a base-layer and the second picture is an enhancement layer. Zhai discloses that there is a collocated macroblock in the base layer for several macroblocks in the enhancement layer. The intra prediction mode of the first macroblock in the first picture can be reused for several macroblocks in the second picture). selecting, for the predetermined block, a subblock subdivision from the set of eligible subblock subdivisions (i.e. A plurality of subblocks of a macroblock are defined. A first subblock is selected from the defined subblocks or from subblocks of a neighboring macroblock) [Zhai: col. 5, line 64-66; Fig. 5]. It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye with Zhai to program the system to implement the method of Zhai. Therefore, the combination of Ye with Zhai will improve the coding efficiency [Zhai: col. 7, line 7-9]. Moreover, in the same field of endeavor Yi further discloses the eligible subblock subdivisions as follows: a set of eligible subblock subdivisions from the set of possible subblock subdivisions for the predetermined block ((i.e. A coding mode selected as having an acceptable coding quality for an adjacent pixel block may also have an acceptable coding quality for the current pixel block. For example, if a previously coded pixel block was coded using an 8x8 P-type coding mode, then the 8x8 P-type coding mode may have a greater weight then a 16x16 I-type coding mode for the pixel blocks adjacent to the previously coded block) [Yi: para 0032; Fig. 3]; (i.e. 
The coding mode(s) selected as having an acceptable coding quality for other pixel blocks in the frame may also have an acceptable coding quality for the current pixel block. The coding mode(s) used in the frame may be evaluated, such that the coding mode used the most often in the frame has the greatest influence on the weights of the available coding modes for the current pixel block. Or the coding mode( s) used the most often for the pixel blocks in a region of the frame nearest to the current pixel block may have a greater influence on the weights of the available coding modes for the current pixel block as compared to the coding mode( s) used in spatially distant pixel blocks) [Yi: para 0033; Figs. 3, 5], wherein at least one eligible subblock subdivision enables coding parameters ((i.e. Choose a coding mode from eligible modes) [Yi: Fig. 5]; (i.e. The controller 204 may select a coding mode to be utilized by the coding engine 203 and may control operation of the coding engine 203 to implement each coding mode by setting operational parameters. For example, for each coding mode, the controller 204 may set parameters determining the predictive coding of the pixel blocks) [Yi: para. 0025; Figs. 4-5]; (i.e. A coding mode selected as having an acceptable coding quality for an adjacent pixel block may also have an acceptable coding quality for the current pixel block. For example, if a previously coded pixel block was coded using an 8x8 P-type coding mode, then the 8x8 P-type coding mode may have a greater weight then a 16x16 I-type coding mode for the pixel blocks adjacent to the previously coded block) [Yi: para 0032; Fig. 3]) of a co-located portion of the base layer signal (i.e. wherein the indicator is pattern of coding assignments made to co-located pixel blocks) [Yi: claim 2] to satisfy a similarity criterion ((i.e. A selected coding mode may be used to code a single pixel block, multiple pixel blocks spatially or temporally adjacent to the pixel block, multiple pixel blocks with similar image content, a single frame, or a sequence of frames) [Yi: para. 0029] ; (i.e. A coding mode selected as having an acceptable coding quality for an adjacent pixel block may also have an acceptable coding quality for the current pixel block. For example, if a previously coded pixel block was coded using an 8x8 P-type coding mode, then the 8x8 P-type coding mode may have a greater weight then a 16x16 I-type coding mode for the pixel blocks adjacent to the previously coded block) [Yi: para 0032; Fig. 3]). It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye and Zhai with Yi to program the system to implement the method of Yi. Therefore, the combination of Ye and Zhai with Yi will improve the efficiency of coding mode decision process [Yi: para. 0013]. Ye, Zhai and Yi do not explicitly disclose the following claim limitations (Emphasis Added). selecting the context model is selected from a plurality of context models. In addition, in the same field of endeavor Cha discloses a deficient limitation as follows: selecting the context model is selected from a plurality of context models (The method according to the fifth exemplary embodiment includes selecting a context model that offers the highest coding efficiency among context models used in the first through fourth exemplary embodiments and performing arithmetic coding according to the selected model.) [Cha: para 0067; Fig. 
5] It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai and Yi with Cha to program the system to implement the Cha’s method. Therefore, the combination of Ye, Zhai and Yi with Cha will improve the efficiency of coding mode decision process [Cha: para. 0003, 0067, Abstract, Title]. In addition, in the same field of endeavor Sole further discloses the gradient of the collocated block as follows: a gradient in the co-located portion ((wherein the local variation across the block border, the local variation across the collocated block border, the local variation within the block and the local variation within the co-located block relate to one or more gradients) [Sole: claim 3; Fig. 7]; (Still another advantage/feature is the apparatus having the full-reference blocking artifact detector as described above, wherein a quantity of blockiness is determined responsive to a difference between a gradient at a block border in the original version of the picture and a gradient at a co-located block border in the processed version of the picture, and a gradient within a block, contiguous to the block border, in the original version of the picture and a gradient within a collocated block, contiguous to the co-located block border, in the processed version of the picture) [Sole: para 0104; Fig. 7]) It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai, Yi and Cha with Sole to program the system to implement the Sole’s method. Therefore, the combination of Ye, Zhai, Yi and Cha with Sole will improve the video visual quality [Sole: para. 0005]. Regarding claim 55, Ye meets the claim limitations as set forth in claim 54. Ye further meets the claim limitations as follow. wherein the selecting (i.e. select between) [Ye: para. 0062] the set of eligible subblock subdivisions (i.e. the decision as to whether to invoke interpolation or to copy from nearest neighboring pixel may be determined depending on the alignment between the base layer and the enhancement layer blocks) [Ye: para. 0082]) includes detecting one or more edges within the co-located portion of the base layer residual signal (i.e. An additional filter 47 may also be included to filter block edges of the base layer information prior to upsampling by upsampler) [Ye: para. 0055] or the base layer signal (i.e. The upsampling can change the block boundaries. For example, if the base layer and the enhancement layer each define 4 by 4 pixel video blocks, up sampling of the base layer to define more pixels according to the spatial resolution of the enhancement layer results in the block boundaries of the base layer being different than those of the up sampled data. This observation can be exploited such that decisions regarding interpolation or nearest neighbor techniques may be based on whether the up sampled values correspond to edge pixel locations of the enhancement layer (i.e., block boundaries in the enhancement layer) and whether such locations also correspond to locations between block boundaries of the base layer) [Ye: para. 0047]. Regarding claim 62, Ye, Zhai, and Yi meet the claim limitations as set forth in claim 21. Ye further meets the claim limitations as follow. the context model is selected from the plurality of context models further based on information on a spectral decomposition of the base layer signal (i.e. base layer data) [Yi: 0036] or the base layer residual signal (i.e. 
the 8x8 residual block or 4x4 residual block) [Yi: 0044]. Ye, Zhai and Yi do not explicitly disclose the following claim limitations (Emphasis Added). the context model is selected from the plurality of context models further based on information on a spectral decomposition. In addition, in the same field of endeavor Cha discloses deficient limitations as follows: wherein the context model is selected from a plurality of context models (The method according to the fifth exemplary embodiment includes selecting a context model that offers the highest coding efficiency among context models used in the first through fourth exemplary embodiments and performing arithmetic coding according to the selected model.) [Cha: para 0067; Fig. 5] further based on information on a spectral decomposition (As shown in FIG. 1, in the temporally filtered hierarchical structure, slices in a high-pass frame are encoded in the order from the lowest temporal level to the highest temporal level while consecutively referring to a context model for a slice coded immediately before a given slice as an initial value of a context model for the given slice. Arrows shown in FIGS. 1 through 6 indicate directions in which context models are referred to. In other words, the context model for a slice coded immediately before a given slice is used as an initial value of a context model for the given slice.) [Cha: para 0060; Figs. 1-6] – Note: Cha discussed a video frame can be decomposed into low-pass temporal level and high-pass temporal level. Then the context models are referred to these data. In other words, Cha discloses the context models that are based on the spectral decomposition. Please see more details in Figs. 1-6). It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai and Yi with Cha to program the system to implement the Cha’s method. Therefore, the combination of Ye, Zhai and Yi with Cha will improve the efficiency of coding mode decision process [Cha: para. 0003, 0067, Abstract, Title]. Regarding claim 63, Ye, Zhai, and Yi meet the claim limitations as set forth in claim 21. Ye , Zhai, and Yi further meet the claim limitations as follow. the gradient in the co-located portion of the base layer signal (i.e. In JSVM, another coding tool named "INTRA_BL" is employed to exploit the correlation between two layers. In INTRA_BL mode, the base layer (having low resolution) is first up sampled using a half pixel interpolation 6-tap filter, which is defined in the H.264/ AVC standard. Then the upsampled signal is used to predict the current layer signal, so that only the residual needs to be encoded. In INTRA_BL mode, the side information is very small. Only one flag per macro block needs to be sent. The residual coding could be the same as in H.264/AVC) [Zhai: col. 1, line 51-60] indicates a directional change ((i.e. According to the invention, the intra-prediction direction in the BL is directly given to four co-located 4x4 blocks in the process of intra prediction direction upsampling. Thus, when one macroblock uses INTRA_DIRECT mode, the intra prediction direction needs not explicitly be encoded. Instead, it can be derived at the decoder side by just upsampling the BL prediction directions) [Zhai: col. 4, line 7-3]; (i.e. determines a displacement between the blocks. 
On this basis, motion estimation unit 33 produces a motion vector (MY) (or multiple MV's in the case of bidirectional prediction) that indicates the magnitude and trajectory of the displacement between current video block 31 and a predictive block used to code current video block 31) [Yi: 0052] – Note: MV indicates direction changes) in intensity or color at a pixel in the co-located portion ((i.e. the 8x8 residual block or 4x4 residual block) [Yi: 0044] – Note: Residual block includes intensity or color changes); (i.e. According to the invention, the intra-prediction direction in the BL is directly given to four co-located 4x4 blocks in the process of intra prediction direction upsampling. Thus, when one macroblock uses INTRA_DIRECT mode, the intra prediction direction needs not explicitly be encoded. Instead, it can be derived at the decoder side by just upsampling the BL prediction directions) [Zhai: col. 4, line 7-3]). Ye, Zhai, Yi, and Cha do not explicitly disclose the following claim limitations (Emphasis Added). the gradient in the co-located portion. In addition, in the same field of endeavor Sole further discloses the gradient of the collocated block as follows: a gradient in the co-located portion ((wherein the local variation across the block border, the local variation across the collocated block border, the local variation within the block and the local variation within the co-located block relate to one or more gradients) [Sole: claim 3; Fig. 7]; (Still another advantage/feature is the apparatus having the full-reference blocking artifact detector as described above, wherein a quantity of blockiness is determined responsive to a difference between a gradient at a block border in the original version of the picture and a gradient at a co-located block border in the processed version of the picture, and a gradient within a block, contiguous to the block border, in the original version of the picture and a gradient within a collocated block, contiguous to the co-located block border, in the processed version of the picture) [Sole: para 0104; Fig. 7]) It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai, Yi and Cha with Sole to program the system to implement the Sole’s method. Therefore, the combination of Ye, Zhai, Yi and Cha with Sole will improve the video visual quality [Sole: para. 0005]. Regarding claim 64, Ye, Zhai, and Yi meet the claim limitations as set forth in claim 21. Ye further meets the claim limitations as follow. the gradient in the co-located portion of the base layer signal (i.e. In JSVM, another coding tool named "INTRA_BL" is employed to exploit the correlation between two layers. In INTRA_BL mode, the base layer (having low resolution) is first up sampled using a half pixel interpolation 6-tap filter, which is defined in the H.264/ AVC standard. Then the upsampled signal is used to predict the current layer signal, so that only the residual needs to be encoded. In INTRA_BL mode, the side information is very small. Only one flag per macro block needs to be sent. The residual coding could be the same as in H.264/AVC) [Zhai: col. 1, line 51-60] indicates a gradient direction ((i.e. According to the invention, the intra-prediction direction in the BL is directly given to four co-located 4x4 blocks in the process of intra prediction direction upsampling. Thus, when one macroblock uses INTRA_DIRECT mode, the intra prediction direction needs not explicitly be encoded. 
Instead, it can be derived at the decoder side by just upsampling the BL prediction directions) [Zhai: col. 4, line 7-3]; (i.e. the 8x8 residual block or 4x4 residual block) [Yi: 0044] – Note: Residual block includes intensity or color changes); (i.e. determines a displacement between the blocks. On this basis, motion estimation unit 33 produces a motion vector (MV) (or multiple MV's in the case of bidirectional prediction) that indicates the magnitude and trajectory of the displacement between current video block 31 and a predictive block used to code current video block 31) [Yi: 0052] – Note: MV indicates direction changes) that occurs most in a block in the co-located portion ((i.e. According to the invention, the intra-prediction direction in the BL is directly given to four co-located 4x4 blocks in the process of intra prediction direction upsampling. Thus, when one macroblock uses INTRA_DIRECT mode, the intra prediction direction needs not explicitly be encoded. Instead, it can be derived at the decoder side by just upsampling the BL prediction directions) [Zhai: col. 4, line 7-3]). Ye, Zhai, Yi, and Cha do not explicitly disclose the following claim limitations (Emphasis Added). the gradient in the co-located portion. In addition, in the same field of endeavor Sole further discloses the gradient of the collocated block as follows: a gradient in the co-located portion ((wherein the local variation across the block border, the local variation across the collocated block border, the local variation within the block and the local variation within the co-located block relate to one or more gradients) [Sole: claim 3; Fig. 7]; (Still another advantage/feature is the apparatus having the full-reference blocking artifact detector as described above, wherein a quantity of blockiness is determined responsive to a difference between a gradient at a block border in the original version of the picture and a gradient at a co-located block border in the processed version of the picture, and a gradient within a block, contiguous to the block border, in the original version of the picture and a gradient within a collocated block, contiguous to the co-located block border, in the processed version of the picture) [Sole: para. 0104; Fig. 7]). It would have been obvious to one of ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai, Yi and Cha with Sole to program the system to implement Sole’s method. Therefore, the combination of Ye, Zhai, Yi and Cha with Sole will improve the video visual quality [Sole: para. 0005]. Claims 42-44, 49-51, and 56-58 are rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. (US Patent Application Publication 2008/0165848 A1), (“Ye”), in view of Zhai et al. (US Patent 7,847,861 B2), (“Zhai”), in view of Yi et al. (US Patent Application Publication 2012/0195364 A1), (“Yi”), in view of Cha et al. (US Patent Application Publication 2006/0233240 A1), (“Cha”), in view of Sole et al. (US Patent Application Publication 2010/0027897 A1), (“Sole”), in view of Wiegand et al. (US Patent Application Publication 2010/0020867 A1), (“Wiegand”). Regarding claim 42, Ye meets the claim limitations as set forth in claim 21. Ye further meets the claim limitations as follows. the predetermined block (i.e. inter-decoding of blocks within video frames) [Ye: para. 0063] is a transform coefficient block having transform coefficients (i.e.
the residual transform block coefficients) [Ye: para. 0056] that represent the enhancement layer signal ((i.e. enhancement layers carry additional video data) [Ye: para. 0005]; (i.e. enhancement layer video data) [Ye: para. 0009]; (i.e. decoding enhancement layer bitstream from an SVC bitstream) [Ye: para. 0033]); and the decoding further comprises (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]: for a current subblock being traversed, decoding from the coded data stream ((i.e. decoding enhancement layer bitstream) [Ye: para. 0033]; (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]) (a) a first syntax element indicating whether (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] the current subblock (i.e. a current video block) [Ye: para. 0051] comprises any significant transform coefficient, and (b) second syntax elements indicating (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] levels of transform coefficients within the current subblock (i.e. a current video block) [Ye: para. 0051], when the first syntax element indicates that (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] the current subblock (i.e. a current video block) [Ye: para. 0051] comprises a significant transform coefficient. Ye, Zhai, Cha and Yi do not explicitly disclose the following claim limitations (Emphasis Added). for a current subblock being traversed, decoding from the coded data stream (a) a first syntax element indicating whether the current subblock comprises any significant transform coefficient. However, in the same field of endeavor Wiegand further discloses the claim limitations and the deficient claim limitations, as follows: a current subblock being traversed (i.e. the transform coefficients in a progressive refinement slice are coded using several scans over the transform blocks) [Wiegand: para. 0010], (i.e. Thus, by providing the parameter coeff_token 240, the positions of the significant transform coefficients have been determined to the extent that no more than total coeff (coeff_token) non-zero transform coefficients exist.) [Wiegand: para. 0071], and (b) second syntax elements indicating levels of transform coefficients ((i.e. the coefficient levels coeff_level for the remaining non-zero transform coefficients are provided) [Wiegand: para. 0074]; (i.e. Then, the values of the levels of these non-zero transform coefficients are provided. This is done in reverse scan order. To be more specific, firstly it is checked as to whether the total number of non-zero transform coefficients is greater than zero 242. This is the case in the above example, since total_coeff ( coeff_token) is 5) [Wiegand: para. 0072] within the current subblock, when the first syntax element indicates that the current subblock comprises a significant transform coefficient (i.e. Thus, by providing the parameter coeff_token 240, the positions of the significant transform coefficients have been determined to the extent that no more than total coeff (coeff_token) non-zero transform coefficients exist.) [Wiegand: para. 0071]. It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai, Cha, Sole and Yi with Wiegand to program the system to implement the method of Wiegand. 
Therefore, the combination of Ye, Zhai, Cha, Sole and Yi and Wiegand will improve the coding efficiency [Wiegand: para. 0004; 0092]. Regarding claim 43, Ye meets the claim limitations as set forth in claim 42. Ye further meets the claim limitations as follow. wherein the second decoding unit comprises (i.e. video decoder 28 may be included in one or more ) [Ye: para. 0042]:an inverse transformer (i.e. inverse transform unit) [Ye: para. 0051] configured to perform an inverse transform on the transform coefficients of the transform coefficient block (i.e. In addition, inverse quantization unit 42 and inverse transform unit 44 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block. Summer 49B adds back the upsampled data from upsampler 45 (which represents an upsampled version of the base layer residual block), and summer 51 adds the final reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 37 to produce a reconstructed video block for storage in reference frame store 35) [Ye: para. 0058] to obtain an enhancement layer residual signal representing a prediction residual of a prediction signal for the enhancement layer signal (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]; and a predictive decoder (i.e. decoder) [Ye: para. 0042] configured to reconstruct the enhancement layer signal by spatially, temporally (i.e. Spatial prediction coding codes intra-coded blocks, while temporal prediction coding codes inter-coded blocks.) [Ye: para. 0057] and/or inter-layer predicting the enhancement layer signal to obtain the prediction signal for the enhancement layer signal (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006], and applying the enhancement layer residual signal to the prediction signal for the enhancement layer signal (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]. 
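For orientation only, the following Python sketch illustrates the kind of inter-layer reconstruction the rejection maps onto claim 43: an enhancement-layer residual block is recovered by an inverse transform and added to a prediction derived from upsampled base-layer data. It is not the applicant's claimed decoder or Ye's implementation; the orthonormal 4x4 DCT and the fixed 2x nearest-neighbor upsampling are simplifying assumptions (Ye's scheme adaptively switches between interpolation and nearest-neighbor copying).

```python
# Illustrative-only sketch of inter-layer reconstruction (assumed details, not the
# claimed method or the cited references' implementations).
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def inverse_transform(coeffs: np.ndarray) -> np.ndarray:
    """2-D inverse DCT of a square transform-coefficient block."""
    c = dct_matrix(coeffs.shape[0])
    return c.T @ coeffs @ c

def upsample_nearest(base_block: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbor upsampling of a base-layer block (stand-in for adaptive upsampling)."""
    return np.repeat(np.repeat(base_block, factor, axis=0), factor, axis=1)

def reconstruct_enhancement_block(base_block, residual_coeffs):
    """Prediction from upsampled base layer plus inverse-transformed enhancement residual."""
    prediction = upsample_nearest(base_block)       # inter-layer prediction signal
    residual = inverse_transform(residual_coeffs)   # enhancement-layer residual signal
    return np.clip(np.rint(prediction + residual), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 256, size=(2, 2)).astype(float)   # toy 2x2 base-layer block
    coeffs = rng.normal(0, 4, size=(4, 4))                   # toy 4x4 residual coefficients
    print(reconstruct_enhancement_block(base, coeffs))
```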
Regarding claim 44, Ye meets the claim limitations as set forth in claim 42. Ye further meets the claim limitations as follow. wherein the second decoding unit is configured to (i.e. video decoder 28 may be included in one or more ) [Ye: para. 0042] form a spectral decomposition (i.e. discrete cosine transformation DCT) [Ye: para. 0045] of the co-located portion of the base layer residual signal or the base layer signal by ((i.e. The base layer pixels involved in the interpolation process belong to different base layer coding blocks) [Ye: para. 0085]; (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para. 0070]):applying a transform (i.e. the residual transform block coefficients) [Ye: para. 0056] onto the base layer residual signal or the base layer signal from spatial domain to frequency domain ((i.e. Block transform unit 39 applies a transform, such as a discrete cosine transform (DCT), to the residual block, producing residual transform block coefficients. At this point, further compression is applied by subtracting base layer residual information from the enhancement layer residual information) [Ye: para. 0054]; (i.e. frequency conversion) [Ye: para. 0041]); and combining and scaling transform coefficient blocks of the base layer residual signal ((i.e. Interpolation may involve the generation of a weighted average for an up sampled value, wherein the weighted average is defined between two or more pixel values of the base layer. For nearest neighbor techniques, the upsampled value is defined as that of the pixel location in the base layer that is in closest spatial proximity to the upsampled pixel location. According to this disclosure, by using interpolation for some specific conditions of the upsampling, and nearest neighbor copying for other conditions, the coding of enhancement layer video blocks may be improved) [Ye: para. 0023]; (i.e. For dyadic spatial scalability, the pixel distances used to derive weights in bilinear upsampling in the horizontal direction are shown in FIG. 6. Bilinear up sampling in the vertical dimension is done in the same manner as the horizontal direction.) [Ye: para. 0072; Fig. 6]; (i.e. For ESS with scaling ratio 5:3, the weights used in bilinear upsampling in the horizontal direction are shown in FIG. 7. Again, bilinear upsampling in the vertical dimension is done in the same manner as the horizontal direction.) [Ye: para. 0074; Fig. 7]; (i.e. It is noteworthy that the scope of this disclosure is not limited by the use of bilinear interpolation. The upsampling decision based on block alignment between the base layer and the enhancement layer may be applied to any interpolation scheme. The 2: 1 and 5:3 spatial ratios, as well as the corresponding block alignments for these ratios, and the corresponding weights given in the interpolation equations, are provided above as examples, but are not meant to limit the scope of this disclosure. Furthermore, the disclosed scheme may be applied to residual up sampling in other video coding systems and/or standards where coding block size other than 4x4 and 8x8 may be used. Interpolation may also use weighted averages of several pixels located on either side of the pixel to be interpolated.) [Ye: para. 0095]). Regarding claim 49, Ye meets the claim limitations as set forth in claim 47. Ye further meets the claim limitations as follow. the predetermined block (i.e. 
inter-decoding of blocks within video frames) [Ye: para. 0063] is a transform coefficient block having transform coefficients (i.e. the residual transform block coefficients) [Ye: para. 0056] that represent the enhancement layer signal ((i.e. enhancement layers carry additional video data) [Ye: para. 0005]; (i.e. enhancement layer video data) [Ye: para. 0009]; (i.e. decoding enhancement layer bitstream from an SVC bitstream) [Ye: para. 0033]); and the decoding further comprises (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]: for a current subblock being traversed, decoding from the data stream ((i.e. decoding enhancement layer bitstream) [Ye: para. 0033]; (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]) (a) a first syntax element indicating whether (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] the current subblock (i.e. a current video block) [Ye: para. 0051] comprises any significant transform coefficient, and (b) second syntax elements indicating (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] levels of transform coefficients within the current subblock (i.e. a current video block) [Ye: para. 0051], when the first syntax element indicates that (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] the current subblock (i.e. a current video block) [Ye: para. 0051] comprises a significant transform coefficient. Ye, Zhai, Cha and Yi do not explicitly disclose the following claim limitations (Emphasis Added). for a current subblock being traversed, decoding from the data stream (a) a first syntax element indicating whether the current subblock comprises any significant transform coefficient. However, in the same field of endeavor Wiegand further discloses the claim limitations and the deficient claim limitations, as follows: a current subblock being traversed (i.e. the transform coefficients in a progressive refinement slice are coded using several scans over the transform blocks) [Wiegand: para. 0010], (i.e. Thus, by providing the parameter coeff_token 240, the positions of the significant transform coefficients have been determined to the extent that no more than total coeff (coeff_token) non-zero transform coefficients exist.) [Wiegand: para. 0071], and (b) second syntax elements indicating levels of transform coefficients ((i.e. the coefficient levels coeff_level for the remaining non-zero transform coefficients are provided) [Wiegand: para. 0074]; (i.e. Then, the values of the levels of these non-zero transform coefficients are provided. This is done in reverse scan order. To be more specific, firstly it is checked as to whether the total number of non-zero transform coefficients is greater than zero 242. This is the case in the above example, since total_coeff ( coeff_token) is 5) [Wiegand: para. 0072] within the current subblock, when the first syntax element indicates that the current subblock comprises a significant transform coefficient (i.e. Thus, by providing the parameter coeff_token 240, the positions of the significant transform coefficients have been determined to the extent that no more than total coeff (coeff_token) non-zero transform coefficients exist.) [Wiegand: para. 0071]. It would have been obvious to one of ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai, Cha, Sole and Yi with Wiegand to program the system to implement the method of Wiegand. Therefore, the combination of Ye, Zhai, Cha, Sole and Yi and Wiegand will improve the coding efficiency [Wiegand: para. 0004; 0092].
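As a rough illustration of the two-level syntax structure recited in claims 42, 49, and 56 and mapped above to Wiegand's coeff_token/coeff_level parameters, the sketch below emits, for each traversed subblock, a significance flag (the first syntax element) and, only when that flag is set, the coefficient levels (the second syntax elements). It is a toy, uncoded symbol stream under assumed 8x8 block and 4x4 subblock sizes, not Wiegand's CAVLC coding or the claimed decoding; entropy coding of the symbols is omitted.

```python
# Minimal sketch of per-subblock significance signaling (toy syntax; assumed sizes).
import numpy as np

def encode_subblocks(coeff_block: np.ndarray, sub: int = 4):
    """Traverse sub x sub subblocks; emit (flag, levels) per subblock."""
    symbols = []
    h, w = coeff_block.shape
    for y in range(0, h, sub):
        for x in range(0, w, sub):
            blk = coeff_block[y:y + sub, x:x + sub]
            significant = bool(np.any(blk != 0))                     # first syntax element
            levels = blk.flatten().tolist() if significant else []   # second syntax elements
            symbols.append((significant, levels))
    return symbols

def decode_subblocks(symbols, size: int = 8, sub: int = 4) -> np.ndarray:
    """Rebuild the transform-coefficient block from the (flag, levels) stream."""
    out = np.zeros((size, size), dtype=int)
    it = iter(symbols)
    for y in range(0, size, sub):
        for x in range(0, size, sub):
            significant, levels = next(it)
            if significant:                                          # levels only when flagged
                out[y:y + sub, x:x + sub] = np.asarray(levels).reshape(sub, sub)
    return out

if __name__ == "__main__":
    block = np.zeros((8, 8), dtype=int)
    block[0, 0], block[1, 2] = 7, -3        # significant coefficients in one subblock only
    stream = encode_subblocks(block)
    assert np.array_equal(decode_subblocks(stream), block)
    print([flag for flag, _ in stream])     # e.g. [True, False, False, False]
```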
Regarding claim 50, Ye meets the claim limitations as set forth in claim 49. Ye further meets the claim limitations as follow. an inverse transformer (i.e. inverse transform unit) [Ye: para. 0051] configured to perform an inverse transform on the transform coefficients of the transform coefficient block (i.e. In addition, inverse quantization unit 42 and inverse transform unit 44 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block. Summer 49B adds back the upsampled data from upsampler 45 (which represents an upsampled version of the base layer residual block), and summer 51 adds the final reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 37 to produce a reconstructed video block for storage in reference frame store 35) [Ye: para. 0058] to obtain an enhancement layer residual signal representing a prediction residual of a prediction signal for the enhancement layer signal (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]; and a predictive decoder (i.e. decoder) [Ye: para. 0042] configured to reconstruct the enhancement layer signal by spatially, temporally (i.e. Spatial prediction coding codes intra-coded blocks, while temporal prediction coding codes inter-coded blocks.) [Ye: para. 0057] and/or inter-layer predicting the enhancement layer signal to obtain the prediction signal for the enhancement layer signal (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006], and applying the enhancement layer residual signal to the prediction signal for the enhancement layer signal (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]. Regarding claim 51, Ye meets the claim limitations as set forth in claim 49. Ye further meets the claim limitations as follow. wherein the second decoding unit is configured to (i.e. video decoder 28 may be included in one or more ) [Ye: para. 0042] form a spectral decomposition (i.e. 
discrete cosine transformation DCT) [Ye: para. 0045] of the co-located portion of the base layer residual signal or the base layer signal by ((i.e. The base layer pixels involved in the interpolation process belong to different base layer coding blocks) [Ye: para. 0085]; (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para. 0070]):applying a transform (i.e. the residual transform block coefficients) [Ye: para. 0056] onto the base layer residual signal or the base layer signal from spatial domain to frequency domain ((i.e. Block transform unit 39 applies a transform, such as a discrete cosine transform (DCT), to the residual block, producing residual transform block coefficients. At this point, further compression is applied by subtracting base layer residual information from the enhancement layer residual information) [Ye: para. 0054]; (i.e. frequency conversion) [Ye: para. 0041]); and combining and scaling transform coefficient blocks of the base layer residual signal ((i.e. Interpolation may involve the generation of a weighted average for an up sampled value, wherein the weighted average is defined between two or more pixel values of the base layer. For nearest neighbor techniques, the upsampled value is defined as that of the pixel location in the base layer that is in closest spatial proximity to the upsampled pixel location. According to this disclosure, by using interpolation for some specific conditions of the upsampling, and nearest neighbor copying for other conditions, the coding of enhancement layer video blocks may be improved) [Ye: para. 0023]; (i.e. For dyadic spatial scalability, the pixel distances used to derive weights in bilinear upsampling in the horizontal direction are shown in FIG. 6. Bilinear up sampling in the vertical dimension is done in the same manner as the horizontal direction.) [Ye: para. 0072; Fig. 6]; (i.e. For ESS with scaling ratio 5:3, the weights used in bilinear upsampling in the horizontal direction are shown in FIG. 7. Again, bilinear upsampling in the vertical dimension is done in the same manner as the horizontal direction.) [Ye: para. 0074; Fig. 7]; (i.e. It is noteworthy that the scope of this disclosure is not limited by the use of bilinear interpolation. The upsampling decision based on block alignment between the base layer and the enhancement layer may be applied to any interpolation scheme. The 2: 1 and 5:3 spatial ratios, as well as the corresponding block alignments for these ratios, and the corresponding weights given in the interpolation equations, are provided above as examples, but are not meant to limit the scope of this disclosure. Furthermore, the disclosed scheme may be applied to residual up sampling in other video coding systems and/or standards where coding block size other than 4x4 and 8x8 may be used. Interpolation may also use weighted averages of several pixels located on either side of the pixel to be interpolated.) [Ye: para. 0095]). Regarding claim 56, Ye meets the claim limitations as set forth in claim 54. Ye further meets the claim limitations as follow. the predetermined block (i.e. inter-decoding of blocks within video frames) [Ye: para. 0063] is a transform coefficient block having transform coefficients (i.e. the residual transform block coefficients) [Ye: para. 0056] that represent the enhancement layer signal ((i.e. enhancement layers carry additional video data) [Ye: para. 
0005]; (i.e. enhancement layer video data) [Ye: para. 0009]; (i.e. decoding enhancement layer bitstream from an SVC bitstream) [Ye: para. 0033]); and the decoding further comprises (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]: for a current subblock being traversed, decoding from the data stream ((i.e. decoding enhancement layer bitstream) [Ye: para. 0033]; (i.e. decoding of a base layer and one or more scalable enhancement layers) [Ye: para. 0034]) (a) a first syntax element indicating whether (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] the current subblock (i.e. a current video block) [Ye: para. 0051] comprises any significant transform coefficient, and (b) second syntax elements indicating (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] levels of transform coefficients within the current subblock (i.e. a current video block) [Ye: para. 0051], when the first syntax element indicates that (i.e. A syntax element FRext may be defined as part of the block header) [Ye: para. 0120] the current subblock (i.e. a current video block) [Ye: para. 0051] comprises a significant transform coefficient. Ye, Zhai, Cha and Yi do not explicitly disclose the following claim limitations (Emphasis Added). for a current subblock being traversed, decoding from the coded data stream (a) a first syntax element indicating whether the current subblock comprises any significant transform coefficient. However, in the same field of endeavor Wiegand further discloses the claim limitations and the deficient claim limitations, as follows: a current subblock being traversed (i.e. the transform coefficients in a progressive refinement slice are coded using several scans over the transform blocks) [Wiegand: para. 0010], (i.e. Thus, by providing the parameter coeff_token 240, the positions of the significant transform coefficients have been determined to the extent that no more than total coeff (coeff_token) non-zero transform coefficients exist.) [Wiegand: para. 0071], and (b) second syntax elements indicating levels of transform coefficients ((i.e. the coefficient levels coeff_level for the remaining non-zero transform coefficients are provided) [Wiegand: para. 0074]; (i.e. Then, the values of the levels of these non-zero transform coefficients are provided. This is done in reverse scan order. To be more specific, firstly it is checked as to whether the total number of non-zero transform coefficients is greater than zero 242. This is the case in the above example, since total_coeff ( coeff_token) is 5) [Wiegand: para. 0072] within the current subblock, when the first syntax element indicates that the current subblock comprises a significant transform coefficient (i.e. Thus, by providing the parameter coeff_token 240, the positions of the significant transform coefficients have been determined to the extent that no more than total coeff (coeff_token) non-zero transform coefficients exist.) [Wiegand: para. 0071]. It would have been obvious to one of ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai, Cha, Sole and Yi with Wiegand to program the system to implement the method of Wiegand. Therefore, the combination of Ye, Zhai, Cha, Sole and Yi with Wiegand will improve the coding efficiency [Wiegand: para. 0004; 0092]. Regarding claim 57, Ye meets the claim limitations as set forth in claim 56. Ye further meets the claim limitations as follows. an inverse transformer (i.e. inverse transform unit) [Ye: para. 0051] configured to perform an inverse transform on the transform coefficients of the transform coefficient block (i.e.
In addition, inverse quantization unit 42 and inverse transform unit 44 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block. Summer 49B adds back the upsampled data from upsampler 45 (which represents an upsampled version of the base layer residual block), and summer 51 adds the final reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 37 to produce a reconstructed video block for storage in reference frame store 35) [Ye: para. 0058] to obtain an enhancement layer residual signal representing a prediction residual of a prediction signal for the enhancement layer signal (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]; and a predictive decoder (i.e. decoder) [Ye: para. 0042] configured to reconstruct the enhancement layer signal by spatially, temporally (i.e. Spatial prediction coding codes intra-coded blocks, while temporal prediction coding codes inter-coded blocks.) [Ye: para. 0057] and/or inter-layer predicting the enhancement layer signal to obtain the prediction signal for the enhancement layer signal (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006], and applying the enhancement layer residual signal to the prediction signal for the enhancement layer signal (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]. Regarding claim 58, Ye meets the claim limitations as set forth in claim 56. Ye further meets the claim limitations as follow. form a spectral decomposition (i.e. discrete cosine transformation DCT) [Ye: para. 0045] of the co-located portion of the base layer residual signal or the base layer signal by ((i.e. The base layer pixels involved in the interpolation process belong to different base layer coding blocks) [Ye: para. 0085]; (i.e. In FIG. 
Claims 45-46, 52-53, and 59-60 are rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. (US Patent Application Publication 2008/0165848 A1), (“Ye”), in view of Zhai et al. (US Patent 7,847,861 B2), (“Zhai”), in view of Yi et al. (US Patent Application Publication 2012/0195364 A1), (“Yi”), in view of Cha et al. (US Patent Application Publication 2006/0233240 A1), (“Cha”), in view of Sole et al. (US Patent Application Publication 2010/0027897 A1), (“Sole”), in view of Chen et al. (US Patent Application Publication 2011/0194613 A1), (“Chen”). Regarding claim 45, Ye meets the claim limitations as set forth in claim 21. Ye further meets the claim limitations as follows.
the base layer signal is reconstructed using base layer coding parameters ((i.e. a video decoder to reconstruct) [Ye: para. 0033]; (i.e. residual data blocks may be coded using reference blocks in the base layer) [Ye: para. 0006]; (i.e. the decoder can use this information to reconstruct the original video block or an approximation of the original video block.) [Ye: para. 0004]) spatially varying over the base layer signal ((i.e. video blocks reconstructed from previously encoded blocks) [Ye: para. 0052]; (i.e. Spatial prediction coding codes intra-coded blocks) [Ye: para. 0057] (i.e. summer 49B, which is positioned between inverse transform unit 44 and summer 51, also receives the upsampled information from upsampler 45. Summer 49B adds the up sampled block of data back to the output of inverse transform unit 44) [Ye: para. 0056]); and the selected subblock subdivision is a coarsest subblock subdivision among the set of eligible subblock subdivisions which (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006], when transferred onto the co-located portion of the base layer signal ((i.e. The base layer pixels involved in the interpolation process belong to different base layer coding blocks) [Ye: para. 0085]; (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para. 0070]), subdivides the base layer signal into areas such that the base layer coding parameters are sufficiently similar to each other within each area ((i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit.) [Ye: para. 0044]; (i.e. Each slice may include a series of macro blocks, which may be arranged into sub-blocks) [Ye: para. 0043]). Ye, Cha, Zhai and Yi do not explicitly disclose the following claim limitations (Emphasis Added). the selected subblock subdivision is a coarsest subblock subdivision among the set of possible subblock subdivisions. However, in the same field of endeavor Chen further discloses the claim limitations and the deficient claim limitations, as follows: the selected subblock subdivision is a coarsest subblock subdivision among the set of possible subblock subdivisions (i.e. Video decoder 30 may select a block-based syntax decoder based on the indication in the coded unit syntax information of the largest block in the coded unit (314). For example, assuming that the coded unit syntax information indicated that the largest block in the coded unit) [Chen: para. 0230]. It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Cha, Zhai, Sole and Yi with Chen to program the system to implement the method of Chen. 
Therefore, the combination of Ye, Cha, Zhai, Sole and Yi with Chen will improve the coding efficiency [Chen: para. 0044]. Regarding claim 46, Ye meets the claim limitations as set forth in claim 45. Ye further meets the claim limitations as follows. predicting, for the predetermined block, enhancement layer coding parameters based on the base layer coding parameters being co-located to the predetermined block (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]; and predictively reconstructing the predetermined block (i.e. In addition, inverse quantization unit 42 and inverse transform unit 44 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block. Summer 49B adds back the upsampled data from upsampler 45 (which represents an upsampled version of the base layer residual block), and summer 51 adds the final reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 37 to produce a reconstructed video block for storage in reference frame store 35) [Ye: para. 0058] using the enhancement layer coding parameters (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]. Regarding claim 52, Ye meets the claim limitations as set forth in claim 47. Ye further meets the claim limitations as follows. the base layer signal is reconstructed using base layer coding parameters ((i.e. a video decoder to reconstruct) [Ye: para. 0033]; (i.e. residual data blocks may be coded using reference blocks in the base layer) [Ye: para. 0006]; (i.e. the decoder can use this information to reconstruct the original video block or an approximation of the original video block.) [Ye: para. 0004]) spatially varying over the base layer signal ((i.e. video blocks reconstructed from previously encoded blocks) [Ye: para. 0052]; (i.e. Spatial prediction coding codes intra-coded blocks) [Ye: para. 0057] (i.e. summer 49B, which is positioned between inverse transform unit 44 and summer 51, also receives the upsampled information from upsampler 45. Summer 49B adds the up sampled block of data back to the output of inverse transform unit 44) [Ye: para. 0056]); and the selected subblock subdivision is a coarsest subblock subdivision among the set of eligible subblock subdivisions which (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation.
In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006], when transferred onto the co-located portion of the base layer signal ((i.e. The base layer pixels involved in the interpolation process belong to different base layer coding blocks) [Ye: para. 0085]; (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para. 0070]), subdivides the base layer signal into areas such that the base layer coding parameters are sufficiently similar to each other within each area ((i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit.) [Ye: para. 0044]; (i.e. Each slice may include a series of macro blocks, which may be arranged into sub-blocks) [Ye: para. 0043]). Ye, Cha, Zhai and Yi do not explicitly disclose the following claim limitations (Emphasis Added). the selected subblock subdivision is a coarsest subblock subdivision among the set of possible subblock subdivisions. However, in the same field of endeavor Chen further discloses the claim limitations and the deficient claim limitations, as follows: the selected subblock subdivision is a coarsest subblock subdivision among the set of possible subblock subdivisions ((i.e. Video encoder 20 may select the block-based syntax to use based on a largest block, i.e., maximum block size, in the set of blocks for the coded unit. The maximum block size may correspond to the size of a largest macro block included in the coded unit.) [Chen: para. 0230]; (i.e. Video decoder 30 may select a block-based syntax decoder based on the indication in the coded unit syntax information of the largest block in the coded unit (314). For example, assuming that the coded unit syntax information indicated that the largest block in the coded unit) [Chen: para. 0230])). It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Cha, Zhai, Sole and Yi with Chen to program the system to implement the method of Chen. Therefore, the combination of Ye, Cha, Zhai, Sole and Yi with Chen will improve the coding efficiency [Chen: para. 0044]. Regarding claim 53, Ye meets the claim limitations as set forth in claim 52. Ye further meets the claim limitations as follows. predicting, for the predetermined block, enhancement layer coding parameters based on the base layer coding parameters being co-located to the predetermined block (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions.
Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]; and predictively reconstructing the predetermined block (i.e. In addition, inverse quantization unit 42 and inverse transform unit 44 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block. Summer 49B adds back the upsampled data from upsampler 45 (which represents an upsampled version of the base layer residual block), and summer 51 adds the final reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 37 to produce a reconstructed video block for storage in reference frame store 35) [Ye: para. 0058] using the enhancement layer coding parameters (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]. Regarding claim 59, Ye meets the claim limitations as set forth in claim 54. Ye further meets the claim limitations as follows. the base layer signal is reconstructed using base layer coding parameters ((i.e. a video decoder to reconstruct) [Ye: para. 0033]; (i.e. residual data blocks may be coded using reference blocks in the base layer) [Ye: para. 0006]; (i.e. the decoder can use this information to reconstruct the original video block or an approximation of the original video block.) [Ye: para. 0004]) spatially varying over the base layer signal ((i.e. video blocks reconstructed from previously encoded blocks) [Ye: para. 0052]; (i.e. Spatial prediction coding codes intra-coded blocks) [Ye: para. 0057] (i.e. summer 49B, which is positioned between inverse transform unit 44 and summer 51, also receives the upsampled information from upsampler 45. Summer 49B adds the up sampled block of data back to the output of inverse transform unit 44) [Ye: para. 0056]); and the selected subblock subdivision is a coarsest subblock subdivision among the set of eligible subblock subdivisions which (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006], when transferred onto the co-located portion of the base layer signal ((i.e. The base layer pixels involved in the interpolation process belong to different base layer coding blocks) [Ye: para. 0085]; (i.e. In FIG. 5, the upsampled pixel location and the location of the pixel before upsampling may be co-located; for example, the center pixel is labeled three times as B, E and X) [Ye: para.
0070]), subdivides the base layer signal into areas such that the base layer coding parameters are sufficiently similar to each other within each area ((i.e. In general, macro blocks (MBs) and the various sub-blocks may be generally referred to as video blocks. In addition, a slice may be considered to be a series of video blocks, such as MBs and/or sub-blocks. Each slice may be an independently decodable unit.) [Ye: para. 0044]; (i.e. Each slice may include a series of macro blocks, which may be arranged into sub-blocks) [Ye: para. 0043]). Ye, Cha, Zhai and Yi do not explicitly disclose the following claim limitations (Emphasis Added). the selected subblock subdivision is a coarsest subblock subdivision among the set of possible subblock subdivisions. However, in the same field of endeavor Chen further discloses the claim limitations and the deficient claim limitations, as follows: the selected subblock subdivision is a coarsest subblock subdivision among the set of possible subblock subdivisions (i.e. Video encoder 20 may select the block-based syntax to use based on a largest block, i.e., maximum block size, in the set of blocks for the coded unit. The maximum block size may correspond to the size of a largest macro block included in the coded unit.) [Chen: para. 0230]. It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Cha, Zhai, Sole and Yi with Chen to program the system to implement the method of Chen. Therefore, the combination of Ye, Cha, Zhai, Sole and Yi with Chen will improve the coding efficiency [Chen: para. 0044]. Regarding claim 60, Ye meets the claim limitations as set forth in claim 59. Ye further meets the claim limitations as follows. predicting, for the predetermined block, enhancement layer coding parameters based on the base layer coding parameters being co-located to the predetermined block (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006]; and predictively reconstructing the predetermined block (i.e. In addition, inverse quantization unit 42 and inverse transform unit 44 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block. Summer 49B adds back the upsampled data from upsampler 45 (which represents an upsampled version of the base layer residual block), and summer 51 adds the final reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 37 to produce a reconstructed video block for storage in reference frame store 35) [Ye: para. 0058] using the enhancement layer coding parameters (i.e. In inter-layer prediction, enhancement layer video blocks may be coded using predictive techniques that are similar to motion estimation and motion compensation. In particular, enhancement layer video residual data blocks may be coded using reference blocks in the base layer. However, the base and enhancement layers have different spatial resolutions. Therefore, the base layer video data may be upsampled to the spatial resolution of the enhancement layer video data, e.g., to form reference blocks for generation of the enhancement layer residual data.) [Ye: para. 0006].
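The subdivision-derivation limitation recited in claims 45, 52 and 59 (a coarsest subblock subdivision whose areas each cover base-layer coding parameters that are sufficiently similar to one another) can be pictured with the sketch below, which tests candidate subdivisions from coarsest to finest against a co-located grid of base-layer motion vectors. The 4x4 parameter grid, the (row, column, height, width) subblock tuples, the candidate set and the similarity threshold are all assumptions made for illustration and are not the claimed derivation.

```python
# Illustrative sketch: pick the coarsest eligible subdivision of a block such that,
# within each resulting subblock, the co-located base-layer coding parameters
# (here: motion vectors on a 4x4 grid) are "sufficiently similar". The grid size,
# candidate subdivisions and threshold are assumptions for this example only.

def params_similar(params, threshold=1):
    """True if all motion vectors in `params` lie within `threshold` of each other per component."""
    xs = [p[0] for p in params]
    ys = [p[1] for p in params]
    return (max(xs) - min(xs) <= threshold) and (max(ys) - min(ys) <= threshold)

def select_coarsest_subdivision(base_params):
    """`base_params` is a 4x4 grid of (mvx, mvy) tuples covering the co-located
    base-layer area. Candidates are listed from coarsest to finest; the first one
    whose every subblock covers a uniform parameter region is selected.
    Each subblock is a (row, col, height, width) tuple in grid units."""
    candidates = [
        [(0, 0, 4, 4)],                                       # one 16x16 subblock
        [(0, 0, 2, 4), (2, 0, 2, 4)],                         # top / bottom halves
        [(0, 0, 4, 2), (0, 2, 4, 2)],                         # left / right halves
        [(r, c, 2, 2) for r in (0, 2) for c in (0, 2)],       # four quadrants
        [(r, c, 1, 1) for r in range(4) for c in range(4)],   # sixteen subblocks (finest)
    ]
    for subdivision in candidates:
        if all(params_similar([base_params[r][c]
                               for r in range(r0, r0 + h)
                               for c in range(c0, c0 + w)])
               for (r0, c0, h, w) in subdivision):
            return subdivision
    return candidates[-1]  # the finest subdivision always satisfies the test

# Example: one motion field in the top half of the co-located area, another in the
# bottom half, so the top/bottom split is the coarsest eligible subdivision.
mv_top, mv_bot = (4, 0), (12, -8)
grid = [[mv_top] * 4, [mv_top] * 4, [mv_bot] * 4, [mv_bot] * 4]
print(select_coarsest_subdivision(grid))   # -> [(0, 0, 2, 4), (2, 0, 2, 4)]
```

The finest candidate always passes the test, so the selection always terminates; which subdivisions are eligible and what counts as "sufficiently similar" are free choices in this sketch.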
Claim 62 is rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. (US Patent Application Publication 2008/0165848 A1), (“Ye”), in view of Zhai et al. (US Patent 7,847,861 B2), (“Zhai”), in view of Yi et al. (US Patent Application Publication 2012/0195364 A1), (“Yi”), in view of Cha et al. (US Patent Application Publication 2006/0233240 A1), (“Cha”), in view of Schwarz et al. (US Patent Application Publication 2008/0002767 A1), (“Schwarz”). Regarding claim 62, Ye, Zhai, and Yi meet the claim limitations as set forth in claim 21. Ye further meets the claim limitations as follows. the context model is selected from the plurality of context models based on information on a spectral decomposition of the base layer signal (i.e. base layer data) [Yi: 0036] or the base layer residual signal (i.e. the 8x8 residual block or 4x4 residual block) [Yi: 0044]. Ye, Zhai and Yi do not explicitly disclose the following claim limitations (Emphasis Added). the context model is selected from the plurality of context models based on information on a spectral decomposition. In addition, in the same field of endeavor Cha discloses the deficient limitations as follows: wherein the context model is selected from a plurality of context models (The method according to the fifth exemplary embodiment includes selecting a context model that offers the highest coding efficiency among context models used in the first through fourth exemplary embodiments and performing arithmetic coding according to the selected model.) [Cha: para 0067; Fig. 5] based on information on a spectral decomposition (As shown in FIG. 1, in the temporally filtered hierarchical structure, slices in a high-pass frame are encoded in the order from the lowest temporal level to the highest temporal level while consecutively referring to a context model for a slice coded immediately before a given slice as an initial value of a context model for the given slice. Arrows shown in FIGS. 1 through 6 indicate directions in which context models are referred to. In other words, the context model for a slice coded immediately before a given slice is used as an initial value of a context model for the given slice.) [Cha: para 0060; Figs. 1-6] – Note: Cha discussed that a video frame can be decomposed into a low-pass temporal level and a high-pass temporal level, and the context models then refer to these data. In other words, Cha discloses context models that are based on the spectral decomposition. Please see more details in Figs. 1-6). It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai, Sole and Yi with Cha to program the system to implement Cha’s method. Therefore, the combination of Ye, Zhai, Sole and Yi with Cha will improve the efficiency of the coding mode decision process [Cha: para. 0003, 0067, Abstract, Title]. In the same field of endeavor Schwarz discloses spectral decomposition as follows: based on information on a spectral decomposition (wherein the base encoder and the determiner are adapted such that an inverse spectral decomposition has to be performed to extract the residual information) [Schwarz: claim 17]. It would have been obvious to one with an ordinary skill in the art at the time of invention to modify the teachings of Ye, Zhai, Yi, Sole and Cha with Schwarz to program the system to implement Schwarz’s method. Therefore, the combination of Ye, Zhai, Yi, Sole and Cha with Schwarz will improve the efficiency of the coding mode decision process [Cha: para. 0020].
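As a rough illustration of the claim 62 limitation (selecting one of several context models based on information on a spectral decomposition of the base layer signal or residual), the sketch below derives a context index from the AC energy of the co-located base-layer transform coefficients. The raster placement of the DC coefficient, the energy thresholds and the number of contexts are invented for this example and are neither the claimed nor the cited context derivation.

```python
# Illustrative sketch: choose among several context models for coding an
# enhancement-layer syntax element, driven by the spectral decomposition
# (transform coefficients) of the co-located base-layer block. The thresholds
# and the number of contexts are assumptions for illustration only.

def select_context(base_coeffs, num_contexts=3, low=10.0, high=100.0):
    """`base_coeffs` is the co-located base-layer transform coefficient block in
    raster order (position [0][0] assumed to be DC). Its AC energy steers which
    of `num_contexts` context models is used."""
    flat = [c for row in base_coeffs for c in row]
    ac_energy = sum(c * c for c in flat[1:])   # skip the DC coefficient
    if ac_energy < low:
        return 0                               # "flat base layer" context
    if ac_energy < high:
        return 1                               # "moderate detail" context
    return min(2, num_contexts - 1)            # "high detail" context

# Example: a smooth base-layer block versus a detailed one.
smooth_block = [[9, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
detailed_block = [[9, 5, 3, 1], [4, 6, 2, 1], [3, 2, 1, 0], [1, 1, 0, 0]]
print(select_context(smooth_block), select_context(detailed_block))   # -> 0 2
```

A real CABAC-style coder would map such an index into its context table before arithmetic coding; the point of the sketch is only that the choice among the plurality of contexts is driven by base-layer spectral information rather than by enhancement-layer data alone.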
Reference Notice Additional prior art, included in the Notice of References Cited, made of record and not relied upon, is considered pertinent to applicant's disclosure. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip Dang whose telephone number is (408) 918-7529. The examiner can normally be reached on Monday-Thursday between 8:30 am - 5:00 pm (PST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath Perungavoor, can be reached on 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Philip P. Dang/Primary Examiner, Art Unit 2488

Prosecution Timeline

Jun 23, 2022
Application Filed
Jan 25, 2023
Non-Final Rejection — §103, §DP
May 31, 2023
Response Filed
Jul 20, 2023
Final Rejection — §103, §DP
Nov 27, 2023
Request for Continued Examination
Nov 28, 2023
Response after Non-Final Action
Dec 04, 2023
Examiner Interview (Telephonic)
Dec 12, 2023
Non-Final Rejection — §103, §DP
May 20, 2024
Response Filed
Jun 05, 2024
Final Rejection — §103, §DP
Aug 15, 2024
Interview Requested
Aug 22, 2024
Applicant Interview (Telephonic)
Aug 22, 2024
Examiner Interview Summary
Sep 13, 2024
Request for Continued Examination
Sep 25, 2024
Response after Non-Final Action
Oct 31, 2024
Examiner Interview Summary
Oct 31, 2024
Examiner Interview (Telephonic)
Dec 18, 2024
Non-Final Rejection — §103, §DP
Apr 24, 2025
Response Filed
Apr 30, 2025
Examiner Interview (Telephonic)
Apr 30, 2025
Examiner Interview Summary
May 19, 2025
Final Rejection — §103, §DP
May 19, 2025
Applicant Interview (Telephonic)
Jul 30, 2025
Interview Requested
Aug 05, 2025
Examiner Interview Summary
Aug 05, 2025
Applicant Interview (Telephonic)
Aug 11, 2025
Request for Continued Examination
Aug 15, 2025
Response after Non-Final Action
Aug 21, 2025
Non-Final Rejection — §103, §DP
Dec 03, 2025
Response Filed
Jan 09, 2026
Final Rejection — §103, §DP
Apr 13, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602837
ON SUB-DIVISION OF MESH SEQUENCES
2y 5m to grant Granted Apr 14, 2026
Patent 12593116
IMAGING MEASUREMENT DEVICE USING GAS ABSORPTION IN THE MID-INFRARED BAND AND OPERATING METHOD OF IMAGING MEASUREMENT DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12581069
METHOD FOR ENCODING/DECODING VIDEO SIGNAL, AND APPARATUS THEREFOR
2y 5m to grant Granted Mar 17, 2026
Patent 12581106
IMAGE DECODING METHOD AND DEVICE THEREFOR
2y 5m to grant Granted Mar 17, 2026
Patent 12574557
SCALABLE VIDEO CODING USING BASE-LAYER HINTS FOR ENHANCEMENT LAYER MOTION PARAMETERS
2y 5m to grant Granted Mar 10, 2026
Based on this examiner's 5 most recent grants.

Prosecution Projections

9-10
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+33.2%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 470 resolved cases by this examiner. Grant probability derived from career allow rate.
