DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The term “close” in claim 9 is a relative term which renders the claim indefinite. The term “close” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US 2024/0155134 A1 (“Jeon”) in view of US 2019/0230350 A1 (“Chen”).
Regarding claim 1, Jeon discloses a method of intra prediction for colour pictures, the method comprising: receiving input data associated with a current block comprising at least one colour block, wherein the input data comprise pixel data for the current block to be encoded at an encoder side or encoded data associated with said at least one colour block to be decoded at a decoder side, and wherein the current block is coded in an intra block mode (e.g. see coding units that include a chroma block are obtained for intra prediction, paragraphs [0007]-[0010], in an encoder or decoder, e.g. as shown in Fig. 1 and Fig. 5, respectively); determining a blending predictor according to a weighted sum of at least two candidate predictions generated based on one or more first hypotheses of prediction, one or more second hypotheses of prediction, or both (e.g. see weighted combination of the first predictor and the second predictor, paragraphs [0138]-[0144]), wherein said one or more first hypotheses of prediction are generated based on one or more intra prediction modes comprising a DC mode, a planar mode or at least one angular mode (e.g. see available prediction modes illustrated for example in Fig. 3A, such as planar mode, DC mode, etc., paragraphs [0147]-[0150]) and said one or more second hypotheses of prediction are generated based on one or more cross-component modes and a collocated block of said at least one colour block (e.g. see cross-component prediction modes, paragraphs [0147]-[0150], and Fig. 1, which also shows the corresponding luma region/block), wherein weights for the weighted sum of said at least two candidate predictions are determined (e.g. see weights, paragraphs [0138]-[0144]); and encoding the input data associated with said at least one colour block using the blending predictor at the encoder side or decoding the input data associated with said at least one colour block using the blending predictor at the decoder side (e.g. see coding units that include a chroma block are obtained for intra prediction, paragraphs [0007]-[0010], in an encoder or decoder, e.g. as shown in Fig. 1 and Fig. 5, respectively).
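For illustration only, the weighted-sum blending described in the cited passages may be sketched as follows. This is a minimal sketch under assumed fixed-point conventions typical of video codecs, not Jeon's actual derivation; the function and weight values are hypothetical.

```python
def blend_predictors(pred_first, pred_second, w_first, w_second, shift=3):
    """Blend two candidate predictions (e.g. an intra-mode hypothesis and a
    cross-component hypothesis) by a weighted sum. The weights sum to
    2**shift so the blend is computable with integer arithmetic and a
    rounding offset; this layout is illustrative only."""
    assert w_first + w_second == (1 << shift)
    off = 1 << (shift - 1)  # rounding offset
    return [(p1 * w_first + p2 * w_second + off) >> shift
            for p1, p2 in zip(pred_first, pred_second)]

# Equal weights (the claim-4 scenario): a 4/4 split out of 8.
blended = blend_predictors([100, 100, 100], [200, 200, 200], 4, 4)
```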
Although Jeon discloses that weights for the weighted sum of said at least two candidate predictions are determined (e.g. see weights, paragraphs [0138]-[0144]), it is noted Jeon differs from the present invention in that it fails to particularly disclose that the weights are determined according to a weight index being signalled or parsed. Chen, however, teaches that the weights are determined according to a weight index being signalled or parsed (e.g. see weight values arranged in a set, so each weight value can be indicated by an index value, that is weight_idx, paragraphs [0040]-[0041], [0046]; also see weight index coding 506 in Fig. 5 and/or weight index decoding 902 in Fig. 9).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Jeon and Chen before him/her, to modify the Method and apparatus for video coding using improved cross-component linear model prediction of Jeon with the teachings of Chen in order to generate a rich variety of prediction signals by adjusting the weight value while limiting signaling overhead.
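For illustration only, the index-based weight selection Chen describes may be sketched as follows. The table values and names below are hypothetical placeholders, not Chen's actual weight set.

```python
# Candidate weight pairs arranged in a set, so that a single signalled
# index (weight_idx) selects the pair; each pair sums to 8 (shift of 3)
# so it can drive an integer weighted blend. Values are illustrative only.
WEIGHT_SET = [(1, 7), (2, 6), (4, 4), (6, 2), (7, 1)]

def weights_from_index(weight_idx):
    """Map a signalled/parsed weight_idx to the (first, second) weight pair."""
    return WEIGHT_SET[weight_idx]
```

Signalling only the index, rather than the weight values themselves, is what limits the overhead while still allowing a variety of blends.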
Regarding claim 2, Jeon further discloses wherein said one or more second hypotheses of prediction correspond to CCLM (Cross-Colour Linear Model) prediction, wherein model parameters a and b for the CCLM prediction are determined based on neighbouring reconstructed pixels of the collocated block and neighbouring reconstructed pixels of said at least one colour block, and a pixel value for said at least one colour block is predicted according to a*recL’ + b, where recL’ corresponds to one down-sampled pixel value for the collocated block (e.g. see neighboring pixels referenced for CCLM prediction shown in Fig. 6 to generate a predictor according to equation 2, paragraphs [0121]-[0126]).
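For illustration only, the a*recL’ + b model may be sketched as follows, using a floating-point two-point (min/max) parameter derivation as an assumed simplification; actual codecs use an integer derivation, and the function names are hypothetical.

```python
def cclm_params(neigh_luma, neigh_chroma):
    """Derive linear-model parameters a and b from neighbouring
    reconstructed luma/chroma sample pairs, using the two points with
    minimum and maximum luma value (floating-point sketch only)."""
    i_min = min(range(len(neigh_luma)), key=neigh_luma.__getitem__)
    i_max = max(range(len(neigh_luma)), key=neigh_luma.__getitem__)
    a = (neigh_chroma[i_max] - neigh_chroma[i_min]) / (neigh_luma[i_max] - neigh_luma[i_min])
    b = neigh_chroma[i_min] - a * neigh_luma[i_min]
    return a, b

def cclm_predict(rec_luma_ds, a, b):
    """predC = a * recL' + b, where recL' is the down-sampled
    reconstruction of the collocated luma block."""
    return [a * x + b for x in rec_luma_ds]
```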
Regarding claim 4, Jeon further discloses wherein equal weights are used for the weighted sum of said at least two candidate predictions (e.g. see at least equal weights, paragraphs [0260]-[0261]).
Regarding claim 5, Jeon further discloses wherein weights for the weighted sum of said at least two candidate predictions are determined according to neighbouring coding information, sample position, block width, block height, block area, coding mode or a combination thereof (e.g. see at least width/height/area, prediction mode, etc., paragraph [0255]).
Regarding claim 6, Jeon further discloses wherein, when more neighbouring blocks of the current block are coded with a target mode, a larger weight is used for one hypothesis of prediction associated with the target mode (e.g. see setting a value of the weight WCCLM using neighboring pixel correlation rC based on representative mode, paragraphs [0263]-[0276], and/or using luma pixel correlation RL based on representative mode, paragraphs [0277]-[0285]).
Regarding claim 7, Jeon further discloses wherein the target mode corresponds to one cross-component mode (e.g. see setting a value of the weight WCCLM for the CCLM mode using neighboring pixel correlation rC based on representative mode, paragraphs [0263]-[0276], and/or using luma pixel correlation RL based on representative mode, paragraphs [0277]-[0285]; also see paragraph [0327]).
Regarding claim 8, Jeon further discloses wherein the current block is partitioned into multiple regions and sample positions in a same region share a same weight (e.g. see grouping pixels and setting a weight for each group, paragraphs [0286]-[0297]).
Regarding claim 9, Jeon further discloses wherein if a current region is close to L-shaped reference neighbours, one first hypothesis of prediction uses a higher weight than one second hypothesis of prediction (e.g. see grouping pixels and setting a weight for each group, paragraphs [0286]-[0297]; e.g. see at least Fig. 31).
Regarding claim 10, Jeon further discloses wherein said at least two candidate predictions comprise at least one first hypothesis of prediction and at least one second hypothesis of prediction (e.g. see weighted combination of the first predictor and the second predictor, paragraphs [0138]-[0144]).
Regarding claim 16, the claim recites limitations analogous to those addressed above and is therefore rejected on the same basis.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Jeon in view of Chen, and further in view of WO 2018/053293 A1 (“Wang”) (note: Wang is cited in the IDS).
Regarding claim 3, although Jeon discloses said one or more second hypotheses of prediction (e.g. see cross-component prediction modes, paragraphs [0147]-[0150], and Fig. 1, which also shows the corresponding luma region/block), it is noted Jeon differs from the present invention in that it fails to particularly disclose wherein said one or more second hypotheses of prediction correspond to MMLM (Multiple Model CCLM) and one of two models of CCLM (Cross-Colour Linear Model) prediction is selected according to a reconstructed pixel value for the collocated block. Wang, however, teaches wherein said one or more second hypotheses of prediction correspond to MMLM (Multiple Model CCLM) and one of two models of CCLM (Cross-Colour Linear Model) prediction is selected according to a reconstructed pixel value for the collocated block (e.g. see MMLM to use more than one linear model from luma components of the block, paragraphs [0044], [0128], [0152]-[0153]).
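For illustration only, the MMLM model selection Wang describes may be sketched as follows. The per-sample threshold rule is an assumed simplification (the threshold is commonly the mean of the neighbouring luma samples), and the function name is hypothetical.

```python
def mmlm_predict(rec_luma_ds, model_low, model_high, threshold):
    """Multiple-model CCLM sketch: classify each reconstructed
    (down-sampled) collocated-luma sample against a threshold and apply
    one of two linear models (a, b) accordingly."""
    out = []
    for x in rec_luma_ds:
        a, b = model_low if x <= threshold else model_high
        out.append(a * x + b)
    return out
```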
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Jeon, Chen and Wang before him/her, to incorporate the teachings of Wang into the Method and apparatus for video coding using improved cross-component linear model prediction of Jeon as modified by Chen in order to improve coding gain with increase in encoding efficiency while improving the visual quality of video data encoded and decoded.
Claims 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Jeon in view of Chen, and further in view of US 2023/0262223 A1 (“Ghaznavi”).
Regarding claim 11, although Jeon discloses wherein said blending predictor is applied to a region within the current block (e.g. see grouping pixels and setting a weight for each group, paragraphs [0286]-[0297]; e.g. see at least Fig. 30 (g) and Fig. 31), it is noted Jeon differs from the present invention in that it fails to particularly disclose wherein said blending predictor is only applied to a region within the current block. Ghaznavi, however, teaches wherein said blending predictor is only applied to a region within the current block (e.g. see the final prediction may be calculated with one or more CCLM modes and one or more derived modes, paragraph [0222]; thus, for the right-bottom region such as illustrated in Fig. 31 and Fig. 30 (g) of Jeon, which has a weight of 1 for CCLM and a weight of 0 for intra, adding an additional CCLM mode prediction as taught by Ghaznavi meets the limitation, since the final prediction would be calculated based on two or more CCLM modes only).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Jeon, Chen and Ghaznavi before him/her, to incorporate the teachings of Ghaznavi into the Method and apparatus for video coding using improved cross-component linear model prediction of Jeon as modified by Chen in order to improve the prediction performance of the CCLM prediction and enable the storage/transmission of video information at a lower bitrate.
Regarding claim 12, Jeon in view of Ghaznavi further teaches wherein the region corresponds to a right-bottom region of the current block (Ghaznavi: e.g. see the final prediction may be calculated with one or more CCLM modes and one or more derived modes, paragraph [0222]; thus, for the right-bottom region such as illustrated in Fig. 31 and Fig. 30 (g) of Jeon, which has a weight of 1 for CCLM and a weight of 0 for intra, adding an additional CCLM mode prediction as taught by Ghaznavi meets the limitation, since the final prediction would be calculated based on two or more CCLM modes only). The motivation above in the rejection of claim 11 applies here.
Regarding claim 13, although Jeon discloses at least two candidate predictions (e.g. see weighted combination of the first predictor and the second predictor, paragraphs [0138]-[0144]), it is noted Jeon differs from the present invention in that it fails to particularly disclose wherein said at least two candidate predictions correspond to at least two second hypotheses of prediction. Ghaznavi, however, teaches wherein said at least two candidate predictions correspond to at least two second hypotheses of prediction (e.g. see the final prediction may be calculated with one or more CCLM modes and one or more derived modes, paragraph [0222]). The motivation above in the rejection of claim 11 applies here.
Regarding claim 14, although Jeon discloses said at least one colour block, it is noted Jeon differs from the present invention in that it fails to particularly disclose wherein said at least one colour block corresponds to a Cr block and the collocated block corresponds to a Cb block, or said at least one colour block corresponds to the Cb block and the collocated block corresponds to the Cr block. Ghaznavi, however, teaches wherein said at least one colour block corresponds to a Cr block and the collocated block corresponds to a Cb block, or said at least one colour block corresponds to the Cb block and the collocated block corresponds to the Cr block (e.g. see intra prediction mode derived from a reference channel such as the co-located block in the luma channel or alternatively the chroma channels (e.g., Cb and Cr), paragraphs [0188], [0190]). The motivation above in the rejection of claim 11 applies here.
Regarding claim 15, although Jeon discloses one or more pre-defined cross-component modes (e.g. see three modes of CCLM, paragraphs [0120], [0126]), it is noted Jeon differs from the present invention in that it fails to particularly disclose wherein said at least two second hypotheses of prediction are generated from one or more pre-defined cross-component modes. Ghaznavi, however, teaches wherein said at least two second hypotheses of prediction are generated from one or more pre-defined cross-component modes (e.g. see the final prediction may be calculated with one or more CCLM modes and one or more derived modes, paragraph [0222], and see three cross-component linear model modes, paragraph [0156]). The motivation above in the rejection of claim 11 applies here.
Response to Arguments
Regarding the prior art rejection, applicant’s arguments with respect to claim(s) 1-16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Regarding the 112(b) rejection, applicant asserts on pages 7-8 of the Remarks that “the specification provides enough support to determine the scope of the term close”. The examiner respectfully disagrees. The claims do not define the term “close”. Further, paragraphs [0223]-[0228] of the published application were reviewed, but they are not sufficient to determine the scope of the term. These paragraphs merely disclose that if the current region is close to the reference L neighbor, the weight for prediction from other intra prediction modes is higher than the weight for prediction from CCLM; however, they do not specifically define what “close” means.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2024/0244195 A1, Wang et al., Method, device, and medium for video processing
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANCIS G GEROLEO whose telephone number is (571)270-7206. The examiner can normally be reached M-F 7:00 am - 3:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anna M Momper can be reached on (571) 270-5788. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Francis Geroleo/Primary Examiner, Art Unit 3619