Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 8, 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over TSAI et al., US Pub. No.: 20220086439 A1, “TSAI”, in view of KUO et al., “AHG12: Enhanced CCLM”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29 26th Meeting, by teleconference, JVET-Z0140-v2, 27 April 2022, “KUO”. (IDS)
Regarding claim 1, TSAI discloses a method for video decoding, comprising: obtaining, from a bitstream, a coding unit (as cited below, i.e., TSAI, claim 1) in a picture (TSAI, abstract), wherein the coding unit comprises a luma block (TSAI, as cited below, i.e., claim 1) and a chroma block (TSAI, claim 1: "receiving input data associated with a current block in a current picture, wherein the input data comprise pixel data to be encoded at an encoder side or compressed data at a decoder side, and wherein the current block comprises a luma block and a chroma block; if separate partition trees are allowed for the luma block and the chroma block, partitioning the luma block into one or more luma leaf blocks using a luma partition tree and partitioning the chroma block into one or more chroma leaf blocks using a chroma partition tree; if a cross-colour component prediction mode is allowed, determining whether to enable an LM (Linear Model) mode for a target chroma leaf block based on a first split type applied to an ancestor chroma node of the target chroma leaf block and a second split type applied to a corresponding ancestor luma node; and encoding or decoding the target chroma leaf block using the LM mode if the LM mode is enabled for the target chroma leaf block"); obtaining a reconstructed luma sample in the luma block; wherein the block size is a size of the luma block or a size of a reconstructed neighbouring block located on a top of (as cited below, see also TSAI, para. 16) or a left (TSAI, claim 2, i.e., wherein the corresponding ancestor luma node has a block size of 64×64 and the ancestor chroma node of the target chroma leaf block has a block size of 32×32 in a 4:2:0 colour format) of the luma block (see TSAI, para. [0017]: "According to the LM prediction mode, the chroma values are predicted from reconstructed luma values of a collocated block. The chroma components may have lower spatial resolution than the luma component. In order to use the luma signal for chroma Intra prediction, the resolution of the luma signal may have to be reduced to match with that of the chroma components. For example, for the 4:2:0 sampling format, the U and V components only have half of the number of samples in vertical and horizontal directions as the luma component. Therefore, 2:1 resolution reduction in vertical and horizontal directions has to be applied to the reconstructed luma samples. The resolution reduction can be achieved by down-sampling process or sub-sampling process."; and claims 1-2).
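For illustration only (not part of the record or the cited references' disclosure), the 2:1 resolution reduction described in TSAI, para. [0017], can be sketched as follows; the 2×2 averaging filter and the sample values are hypothetical, since the reference describes down-sampling or sub-sampling only generically.

```python
# Illustrative sketch: 2:1 down-sampling of reconstructed luma samples in both
# the vertical and horizontal directions, to match 4:2:0 chroma resolution.
# A simple 2x2 averaging filter is assumed here (hypothetical choice).

def downsample_luma_420(luma):
    """Reduce a 2D luma array by 2:1 in both directions via 2x2 averaging."""
    h, w = len(luma), len(luma[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            avg = (luma[y][x] + luma[y][x + 1] +
                   luma[y + 1][x] + luma[y + 1][x + 1]) // 4
            row.append(avg)
        out.append(row)
    return out

# A 4x4 luma block yields a 2x2 down-sampled block (hypothetical values).
luma = [[100, 102, 110, 112],
        [104, 106, 114, 116],
        [120, 122, 130, 132],
        [124, 126, 134, 136]]
print(downsample_luma_420(luma))  # → [[103, 113], [123, 133]]
```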
It is noted that TSAI is silent about determining one or more cross-component prediction models based upon a block size, and applying at least one of the one or more cross-component prediction models to at least the reconstructed luma sample to predict a chroma sample in the chroma block, as claimed.
However, KUO discloses determining one or more cross-component prediction models (as cited below, see also KUO, pg. 3, section 2.2) based upon a block size (as cited below, i.e., the correlation and gradient), and applying at least one of the one or more cross-component prediction models to at least the reconstructed luma sample to predict a chroma sample in the chroma block (KUO, see pg. 1: “CCLM achieves significant coding performance improvement by exploiting strong correlation between luma/chroma components” and “Filter-based Linear Model (FLM) and Gradient Linear Model (GLM), are proposed to tackle the situation when luma gradients are highly correlated to chroma values. The FLM extends the Simple Linear Regression (SLR) in the CCLM to Multiple Linear Regression (MLR), while the GLM keeps SLR and uses luma gradients to predict chroma samples.”).
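For illustration only (not part of the record), the Simple Linear Regression (SLR) underlying CCLM, as characterized in the passage of KUO quoted above, can be sketched as follows; fitting by least squares on neighbouring luma/chroma pairs and the sample values are hypothetical details of this sketch, not assertions about the reference's exact derivation.

```python
# Illustrative sketch: SLR-based cross-component prediction. Parameters (a, b)
# are fit on neighbouring reconstructed luma/chroma sample pairs, then chroma
# is predicted from reconstructed luma as a*luma + b.

def fit_slr(luma_nbrs, chroma_nbrs):
    """Least-squares fit of chroma = a*luma + b over neighbouring samples."""
    n = len(luma_nbrs)
    mean_l = sum(luma_nbrs) / n
    mean_c = sum(chroma_nbrs) / n
    cov = sum((l - mean_l) * (c - mean_c)
              for l, c in zip(luma_nbrs, chroma_nbrs))
    var = sum((l - mean_l) ** 2 for l in luma_nbrs)
    a = cov / var if var else 0.0
    b = mean_c - a * mean_l
    return a, b

def predict_chroma(rec_luma, a, b):
    """Apply the linear model to reconstructed luma samples."""
    return [a * l + b for l in rec_luma]

# Hypothetical neighbouring samples with chroma = 0.5 * luma.
a, b = fit_slr([100, 110, 120, 130], [50, 55, 60, 65])
print(predict_chroma([105, 125], a, b))  # → [52.5, 62.5]
```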
Both TSAI and KUO teach video compression systems employing cross-colour-component prediction, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include in the TSAI disclosure differentiating the coding according to the block size, as taught by KUO. Such inclusion would have increased the usefulness of the system by reducing artifacts when the luma signal has higher spatial frequency, and would have been consistent with the rationale of combining prior art elements according to known methods to yield predictable results to show a prima facie case of obviousness (MPEP 2143(I)(A)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).
Regarding claim 8, TSAI/KUO, for the same motivation of combination, further discloses an apparatus for video decoding, comprising: one or more processors; and a memory coupled to the one or more processors and configured to store instructions executable by the one or more processors, wherein the one or more processors, upon execution of the instructions, are configured to, individually or collectively, perform operations comprising: obtaining, from a bitstream, a coding unit in a picture (see rejection of claim 1), wherein the coding unit comprises a luma block and a chroma block; obtaining a reconstructed luma sample in the luma block (see rejection of claim 1); determining one or more cross-component prediction models based upon a block size (see rejection of claim 1), wherein the block size is a size of the luma block or a size of a reconstructed neighbouring block located on a top of or a left of the luma block (see rejection of claim 1); and applying at least one of the one or more cross-component prediction models to at least the reconstructed luma sample to predict a chroma sample in the chroma block (see rejection of claim 1).
Regarding claim 15, TSAI/KUO, for the same motivation of combination, discloses a non-transitory computer-readable storage medium for video decoding storing a bitstream to be decoded by operations comprising: obtaining, from a bitstream, a coding unit in a picture, wherein the coding unit comprises a luma block and a chroma block (see rejection of claim 1); obtaining a reconstructed luma sample in the luma block; determining one or more cross-component prediction models based upon a block size (see rejection of claim 1), wherein the block size is a size of the luma block or a size of a reconstructed neighbouring block located on a top of or a left of the luma block (see rejection of claim 1); and applying at least one of the one or more cross-component prediction models to at least the reconstructed luma sample to predict a chroma sample in the chroma block (see rejection of claim 1).
Claim(s) 7, 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over TSAI et al., US Pub. No.: 20220086439 A1, “TSAI”, in view of KUO et al., “AHG12: Enhanced CCLM”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29 26th Meeting, by teleconference, JVET-Z0140-v2, 27 April 2022, “KUO” (IDS), and further in view of ASTOLA et al., “AHG12: Convolutional cross-component model (CCCM) for intra prediction”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29 26th Meeting, by teleconference, JVET-Z0064-v1, 23 April 2022, “ASTOLA”. (IDS)
Regarding claim 7, TSAI/KUO, for the same motivation of combination, discloses the method of claim 1, wherein each of the plurality of sets of neighbouring samples is located on a top of or a left of the coding unit and comprises a neighbouring chroma sample and at least one neighbouring luma sample corresponding to the neighbouring chroma sample (see rejection of claim 1), and deriving the one or more cross-component prediction models further based on the plurality of sets of neighbouring samples (see rejection of claim 1).
It is noted that TSAI/KUO is silent about wherein determining the one or more cross-component prediction models comprises: determining at least one of filter parameters of a luma filter, wherein the filter parameters comprise a filter shape and a number of filter taps of the luma filter; obtaining a plurality of sets of neighbouring samples of the coding unit based on the at least one of the filter parameters, the at least one neighbouring luma sample being arranged in the picture in accordance with the filter shape of the luma filter, as claimed.
However, ASTOLA discloses wherein determining the one or more cross-component prediction models comprises: determining at least one of filter parameters of a luma filter (as cited below, a 7-tap filter), wherein the filter parameters comprise a filter shape (as cited below, i.e., shape) and a number of filter taps (as cited below, i.e., number of taps) of the luma filter; obtaining a plurality of sets of neighbouring samples (as cited below, i.e., different locations of the samples, see ASTOLA, fig. 1) of the coding unit based on the at least one of the filter parameters (ASTOLA, pg. 1, section 1.2, Fig. 1); the at least one neighbouring luma sample being arranged in the picture in accordance with the filter shape of the luma filter (ASTOLA, pg. 1, section 1.2, Fig. 1).
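For illustration only (not part of the record), applying a multi-tap luma filter whose taps are arranged according to a filter shape, in the spirit of the filter-shape and number-of-taps parameters cited from ASTOLA, Fig. 1, can be sketched as follows; the 5-tap cross shape, the coefficients, and the sample values are hypothetical placeholders, not ASTOLA's actual filter.

```python
# Illustrative sketch: a luma filter defined by a shape (sample offsets) and a
# number of taps (coefficients). A cross ("plus") shape is assumed here.

CROSS_OFFSETS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # centre, N, S, W, E

def apply_luma_filter(luma, y, x, coeffs, offsets=CROSS_OFFSETS):
    """Weighted sum of luma samples at 'offsets' around position (y, x)."""
    return sum(c * luma[y + dy][x + dx]
               for c, (dy, dx) in zip(coeffs, offsets))

# Hypothetical 3x3 reconstructed-luma neighbourhood; equal weights average
# the five cross-shaped samples around the centre.
luma = [[10, 20, 30],
        [40, 50, 60],
        [70, 80, 90]]
print(apply_luma_filter(luma, 1, 1, [0.2] * 5))  # → 50.0
```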
Both TSAI/KUO and ASTOLA teach video compression systems applying prediction based on colour components, and those systems are comparable to that of the instant application. Because the cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include in the TSAI/KUO disclosure using a multiple-tap filter, as taught by ASTOLA. Such inclusion would have increased the usefulness of the system by achieving better compression efficiency, and would have been consistent with the rationale of combining prior art elements according to known methods to yield predictable results to show a prima facie case of obviousness (MPEP 2143(I)(A)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).
Regarding claim 14, TSAI/KUO/ASTOLA, for the same motivation of combination, further discloses the apparatus of claim 8, wherein determining the one or more cross-component prediction models comprises: determining at least one of filter parameters of a luma filter, wherein the filter parameters comprise a filter shape and a number of filter taps of the luma filter (see rejection of claim 7); obtaining a plurality of sets of neighbouring samples of the coding unit based on the at least one of the filter parameters (see rejection of claim 7), wherein each of the plurality of sets of neighbouring samples is located on a top of or a left of the coding unit and comprises a neighbouring chroma sample and at least one neighbouring luma sample corresponding to the neighbouring chroma sample (see rejection of claim 7), the at least one neighbouring luma sample being arranged in the picture in accordance with the filter shape of the luma filter (see rejection of claim 7); and deriving the one or more cross-component prediction models further based on the plurality of sets of neighbouring samples (see rejection of claim 7).
Allowable Subject Matter
Claims 2-6, 9-13, and 16-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20210058637 A1 EFFICIENT AFFINE MERGE MOTION VECTOR DERIVATION
US 20210029356 A1 SUB-BLOCK MV INHERITANCE BETWEEN COLOR COMPONENTS
US 20210014479 A1 METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL
US 10708591 B2 Enhanced deblocking filtering design in video coding
US 20200186805 A1 VIDEO SIGNAL PROCESSING METHOD AND APPARATUS
US 20190387226 A1 VIDEO SIGNAL PROCESSING METHOD AND APPARATUS
US 20190364278 A1 VIDEO SIGNAL PROCESSING METHOD AND DEVICE
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK F HUANG whose telephone number is (571)272-0701. The examiner can normally be reached Monday-Friday, 8:30 am - 6:00 pm (Eastern Time), Federal Alternative First Friday Off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel, can be reached at (571)272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANK F HUANG/Primary Examiner, Art Unit 2485