Prosecution Insights
Last updated: April 19, 2026
Application No. 19/011,195

CROSS-COMPONENT PREDICTION MODE

Non-Final Office Action: §102, §103, Double Patenting
Filed: Jan 06, 2025
Examiner: AYNALEM, NATHNAEL B
Art Unit: 2488
Tech Center: 2400 (Computer Networks)
Assignee: Tencent America LLC
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Projected OA Rounds: 1-2
Projected Time to Grant: 2y 7m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 76% (above average; 505 granted / 662 resolved; +18.3% vs Tech Center average)
Interview Lift: +13.9% on resolved cases with an interview (moderate)
Average Prosecution Time: 2y 7m
Currently Pending: 32 applications
Total Applications: 694 (across all art units)
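A quick arithmetic check of the allow-rate figure above (the granted and resolved counts are taken from this page; the variable names are ours):

```python
# Verify the displayed career allow rate from the raw counts above.
granted = 505    # applications granted
resolved = 662   # applications resolved
allow_rate = granted / resolved
# 505 / 662 ≈ 0.763, shown on the dashboard rounded to 76%
assert round(allow_rate, 2) == 0.76
```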

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 39.5% (-0.5% vs TC avg)
§102: 22.3% (-17.7% vs TC avg)
§112: 21.6% (-18.4% vs TC avg)
Deltas are measured against a Tech Center average estimate; based on career data from 662 resolved cases.

Office Action

Rejections: §102, §103, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

This is in response to application no. 19/011,195 filed on January 06, 2025. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.
A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 2, 6-16, 20 and 21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-9, 11-13 and 20 of U.S. Patent No. US 12,219,128 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the current claims 2, 6-16, 20 and 21 are anticipated by claims 1-9, 11-13 and 20 of pat. ‘128. Table 1 below shows the comparison between the current claims and the claims of the cited pat. ‘128.

TABLE 1. Comparison of the current claims with the claims of U.S. Patent No. US 12,219,128 B2

Current claim 2:
A method for video decoding, the method comprising: receiving coded information of a current chroma block and a luma block that is collocated with the current chroma block; determining a feature value based on at least one of (i) neighboring reconstructed chroma samples of the current chroma block and (ii) neighboring reconstructed luma samples of the luma block that is collocated with the current chroma block; grouping chroma samples of the current chroma block and luma samples of the luma block that is collocated with the current chroma block into a plurality of groups based on a threshold of the feature value, each of the plurality of groups including a respective chroma sample and a respective luma sample; determining a respective cross-component prediction mode for each of the plurality of groups by comparing the respective chroma sample and the respective luma sample of each respective group to the determined feature value; and reconstructing the current chroma block based on the determined cross-component prediction modes of the plurality of groups.

‘128 patent claim 1:
A method of video decoding performed in a video decoder, the method comprising: receiving coded information of a current chroma block and a luma block that is collocated with the current chroma block; determining a feature value based on at least one of (i) neighboring reconstructed chroma samples of the current chroma block and (ii) neighboring reconstructed luma samples of the luma block that is collocated with the current chroma block; grouping chroma samples of the current chroma block and luma samples of the luma block that is collocated with the current chroma block into a plurality of groups based on a threshold of the feature value, each of the plurality of groups including a respective chroma sample and a respective luma sample; determining a respective cross-component prediction mode for each of the plurality of groups by comparing the respective chroma sample and the respective luma sample of each respective group to the determined feature value; and reconstructing the current chroma block based on the determined cross-component prediction modes of the plurality of groups, wherein a type of cross-component prediction mode of the determined respective cross-component prediction mode for each of the plurality of groups is different.

Current claim 6:
The method of claim 2, wherein the determining the feature value further comprises: determining the feature value as one of an average value of the neighboring reconstructed chroma samples of the current chroma block and an average value of the neighboring reconstructed luma samples of the collocated luma block.

‘128 patent claim 2:
The method of claim 1, wherein the determining the feature value further comprises: determining the feature value as one of an average value of the neighboring reconstructed chroma samples of the current chroma block and an average value of the neighboring reconstructed luma samples of the collocated luma block.

Current claim 7:
The method of claim 2, wherein the determining the feature value further comprises: determining the feature value as one of an average gradient value of the neighboring reconstructed chroma samples of the current chroma block and an average gradient value of the neighboring reconstructed luma samples of the collocated luma block.

‘128 patent claim 3:
The method of claim 1, wherein the determining the feature value further comprises: determining the feature value as one of an average gradient value of the neighboring reconstructed chroma samples of the current chroma block and an average gradient value of the neighboring reconstructed luma samples of the collocated luma block.

Current claim 8:
The method of claim 2, wherein the determining the feature value further comprises: determining the feature value as an average value of the neighboring reconstructed chroma samples of the current chroma block and the neighboring reconstructed luma samples of the collocated luma block.

‘128 patent claim 4:
The method of claim 1, wherein the determining the feature value further comprises: determining the feature value as an average value of the neighboring reconstructed chroma samples of the current chroma block and the neighboring reconstructed luma samples of the collocated luma block.

Current claim 9:
The method of claim 2, wherein the grouping further comprises: determining a characteristic value associated with each of the luma samples of the collocated luma block; determining whether the characteristic value associated with the respective luma sample of the luma samples of the collocated luma block is larger than the threshold of the feature value; grouping (i) the respective luma sample of the luma samples of the collocated luma block and (ii) a chroma sample of the chroma samples of the current chroma block corresponding to the respective luma sample into a first group based on the characteristic value associated with the respective luma sample being larger than the threshold of the feature value; and grouping (i) the respective luma sample of the luma samples of the collocated luma block and (ii) the chroma sample of the chroma samples of the current chroma block corresponding to the respective luma sample into a second group based on the characteristic value associated with the respective luma sample being smaller than the threshold of the feature value.

‘128 patent claim 5:
The method of claim 1, wherein the grouping further comprises: determining a characteristic value associated with each of the luma samples of the collocated luma block; determining whether the characteristic value associated with the respective luma sample of the luma samples of the collocated luma block is larger than the threshold of the feature value; grouping (i) the respective luma sample of the luma samples of the collocated luma block and (ii) a chroma sample of the chroma samples of the current chroma block corresponding to the respective luma sample into a first group based on the characteristic value associated with the respective luma sample being larger than the threshold of the feature value; and grouping (i) the respective luma sample of the luma samples of the collocated luma block and (ii) the chroma sample of the chroma samples of the current chroma block corresponding to the respective luma sample into a second group based on the characteristic value associated with the respective luma sample being smaller than the threshold of the feature value.

Current claim 10:
The method of claim 9, wherein the characteristic value of the respective luma sample includes one of (i) a luma sample value of the respective luma sample, and (ii) an average value of the neighboring reconstructed luma samples of the collocated luma block.

‘128 patent claim 6:
The method of claim 5, wherein the characteristic value of the respective luma sample includes one of (i) a luma sample value of the respective luma sample, and (ii) an average value of the neighboring reconstructed luma samples of the collocated luma block.

Current claim 11:
The method of claim 2, wherein the respective cross-component prediction mode includes one of cross-component linear model (CCLM), chroma from luma (CfL), convolutional cross-component model (CCCM), multiple filter linear model (MFLM), gradient linear model (GLM), a combination of CCLM, CfL, CCCM, and an angular intra prediction mode.

‘128 patent claim 7:
The method of claim 1, wherein the respective cross-component prediction mode includes one of cross-component linear model (CCLM), chroma from luma (CfL), convolutional cross-component model (CCCM), multiple filter linear model (MFLM), gradient linear model (GLM), a combination of CCLM, CfL, CCCM, and an angular intra prediction mode.

Current claim 12:
The method of claim 2, wherein the determining the respective cross-component prediction mode for each of the plurality of groups further comprises: determining the respective cross-component prediction mode based on a corresponding flag that is included in the coded information.

‘128 patent claim 8:
The method of claim 1, wherein the determining the respective cross-component prediction mode for each of the plurality of groups further comprises: determining the respective cross-component prediction mode based on a corresponding flag that is included in the coded information.

Current claim 13:
The method of claim 2, wherein the determining the respective cross-component prediction mode for each of the plurality of groups further comprises: determining whether each of the plurality of groups shares a cross-component prediction mode based on a flag that is included in the coded information; and determining the cross-component prediction mode for each of the plurality of groups when the flag indicates that each of the plurality of groups shares the same cross-component prediction mode.

‘128 patent claim 9:
The method of claim 1, wherein the determining the respective cross-component prediction mode for each of the plurality of groups further comprises: determining whether each of the plurality of groups shares a cross-component prediction mode based on a flag that is included in the coded information; determining the cross-component prediction mode for each of the plurality of groups in response to the flag indicating that each of the plurality of groups shares the cross-component prediction mode;

Current claim 14:
The method of claim 2, wherein the reconstructing the current chroma block further comprises: generating prediction samples for the chroma samples in each of the plurality of groups based on the respective cross-component prediction mode; and applying a filter on the prediction samples for the chroma samples in each of the plurality of groups.

‘128 patent claim 11:
The method of claim 1, wherein the reconstructing the current chroma block further comprises: generating prediction samples for the chroma samples in each of the plurality of groups based on the respective cross-component prediction mode; and applying a filter on the prediction samples for the chroma samples in each of the plurality of groups.

Current claim 15:
The method of claim 2, wherein the reconstructing the current chroma block further comprises: generating prediction samples for the chroma samples in each of the plurality of groups based on the respective cross-component prediction mode; and determining the prediction samples of the current chroma block as a weighted combination of the prediction samples for the chroma samples in each of the plurality of groups.

‘128 patent claim 12:
The method of claim 1, wherein the reconstructing the current chroma block further comprises: generating prediction samples for the chroma samples in each of the plurality of groups based on the respective cross-component prediction mode; and determining the prediction samples of the current chroma block as a weighted combination of the prediction samples for the chroma samples in each of the plurality of groups.

Current claim 16:
A method of video encoding, the method comprising: determining a feature value based on at least one of (i) neighboring reconstructed chroma samples of a current chroma block and (ii) neighboring reconstructed luma samples of a luma block that is collocated with the current chroma block; grouping chroma samples of the current chroma block and luma samples of the luma block that is collocated with the current chroma block into a plurality of groups based on a threshold of the feature value, each of the plurality of groups including a respective chroma sample and a respective luma sample; determining a respective cross-component prediction mode for each of the plurality of groups by comparing the respective chroma sample and the respective luma sample of each respective group to the determined feature value; and encoding the current chroma block based on the determined cross-component prediction modes of the plurality of groups.

‘128 patent claim 13:
A method of video encoding performed in a video encoder, the method comprising: determining a feature value based on at least one of (i) neighboring reconstructed chroma samples of a current chroma block and (ii) neighboring reconstructed luma samples of a luma block that is collocated with the current chroma block; grouping chroma samples of the current chroma block and luma samples of the luma block that is collocated with the current chroma block into a plurality of groups based on a threshold of the feature value, each of the plurality of groups including a respective chroma sample and a respective luma sample; determining a respective cross-component prediction mode for each of the plurality of groups by comparing the respective chroma sample and the respective luma sample of each respective group to the determined feature value; and encoding the current chroma block based on the determined cross-component prediction modes of the plurality of groups, wherein a type of cross-component prediction mode of the determined respective cross-component prediction mode for each of the plurality of groups is different.

Current claim 20:
The method of claim 16, wherein the determining the respective cross-component prediction mode includes determining that the respective cross-component prediction mode for each of the plurality of groups is the same; and the method includes encoding a flag indicating that each of the plurality of groups shares the same cross-component prediction mode.

‘128 patent claim 9:
The method of claim 1, wherein the determining the respective cross-component prediction mode for each of the plurality of groups further comprises: determining whether each of the plurality of groups shares a cross-component prediction mode based on a flag that is included in the coded information; determining the cross-component prediction mode for each of the plurality of groups in response to the flag indicating that each of the plurality of groups shares the cross-component prediction mode

Current claim 21:
A method of processing visual media data, the method comprising: processing a bitstream that includes the visual media data according to a format rule, wherein the bitstream includes coded information of a current chroma block and a luma block that is collocated with the current chroma block; and the format rule specifies that a feature value is determined based on at least one of (i) neighboring reconstructed chroma samples of the current chroma block and (ii) neighboring reconstructed luma samples of the luma block that is collocated with the current chroma block; chroma samples of the current chroma block and luma samples of the luma block that is collocated with the current chroma block are grouped into a plurality of groups based on a threshold of the feature value, each of the plurality of groups including a respective chroma sample and a respective luma sample; a respective cross-component prediction mode for each of the plurality of groups is determined by comparing the respective chroma sample and the respective luma sample of each respective group to the determined feature value; and the current chroma block is reconstructed based on the determined cross-component prediction modes of the plurality of groups.

‘128 patent claim 20:
A method of processing visual media data, the method comprising: processing a bitstream that includes the visual media data according to a format rule, wherein the bitstream includes prediction information of a current block in a current picture, the prediction information being indicative of inter prediction; and the format rule specifies that a feature value is determined based on at least one of (i) neighboring reconstructed chroma samples of the current chroma block and (ii) neighboring reconstructed luma samples of the luma block that is collocated with the current chroma block; chroma samples of the current chroma block and luma samples of the luma block that is collocated with the current chroma block are grouped into a plurality of groups based on a threshold of the feature value, each of the plurality of groups including a respective chroma sample and a respective luma sample; a respective cross-component prediction mode for each of the plurality of groups is determined by comparing the respective chroma sample and the respective luma sample of each respective group to the determined feature value; the current chroma block is processed based on the determined cross-component prediction modes of the plurality of groups; and a type of cross-component prediction mode of the determined respective cross-component prediction mode for each of the plurality of groups is different.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 2-12, 16-19 and 21 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ramasubramonian et al. (US 20200154115 A1).

Regarding claim 2, Ramasubramonian teaches a method for video decoding, the method comprising: receiving coded information of a current chroma block and a luma block that is collocated with the current chroma block (Figs. 3-7, ¶0076, 0089: video decoder may derive the samples [luma samples and chroma samples] that are collocated to the current block in other pictures); determining a feature value based on at least one of (i) neighboring reconstructed chroma samples of the current chroma block and (ii) neighboring reconstructed luma samples of the luma block that is collocated with the current chroma block (¶0077, 0090: “Threshold” may be calculated as the average value of the neighboring reconstructed luma samples); grouping chroma samples of the current chroma block and luma samples of the luma block that is collocated with the current chroma block into a plurality of groups based on a threshold of the feature value, each of the plurality of groups including a respective chroma sample and a respective luma sample (Fig. 3, ¶0076-0078: a video coder may classify neighboring luma samples and neighboring chroma samples of the current block into several groups…the video coder may classify the neighboring samples into two groups. In this example, the first group may consist of those neighboring luma samples having values less than or equal to a threshold and the second group may consist of those neighboring luma samples having values greater than the threshold. ¶0089: video encoder and video decoder may derive the samples that are collocated to the current block in other pictures); determining a respective cross-component prediction mode for each of the plurality of groups by comparing the respective chroma sample and the respective luma sample of each respective group to the determined feature value (¶0072, 0076-0078: CCLM mode, MMLM mode; FIG. 3 is a conceptual diagram of two linear models (e.g., prediction models) for neighboring coded luma samples that are classified into 2 groups…A neighboring sample with Rec′L[x,y]<=Threshold is classified into group 1; while a neighboring sample with Rec′L[x,y]>Threshold is classified into group 2. See equations (1)-(2) showing the relationship between a predicted value of a chroma sample and reconstructed value of a luma sample); and reconstructing the current chroma block based on the determined cross-component prediction modes of the plurality of groups (¶0072: video encoder 200 and video decoder 300 may be configured to code blocks of video data using cross-component linear model (CCLM) mode. ¶0074, 0076-0078: In MMLM mode, a video coder may classify neighboring luma samples and neighboring chroma samples of the current block into several groups…The parameters are then used by video encoder 200 and video decoder 300 to derive the chroma sample prediction of the current CU from the reconstructed luma samples of the current CU (e.g., derive predC(i,j), which is the predicted block)).
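The MMLM-style classification that the rejection maps to the grouping limitation can be sketched as follows. This is a minimal illustration under our own assumptions, not the reference's actual implementation; all function and variable names are ours:

```python
def classify_samples(luma, chroma, neighbor_luma):
    """Split collocated luma/chroma sample pairs into two groups,
    using the average of the neighboring reconstructed luma samples
    as the threshold (cf. the two-group example in Ramasubramonian Fig. 3)."""
    threshold = sum(neighbor_luma) / len(neighbor_luma)
    group1, group2 = [], []  # group1: rec_l <= threshold; group2: rec_l > threshold
    for rec_l, rec_c in zip(luma, chroma):
        (group1 if rec_l <= threshold else group2).append((rec_l, rec_c))
    return threshold, group1, group2

# Toy 2x2 block: the neighbors average to 100, so samples split around 100.
t, g1, g2 = classify_samples([90, 110, 95, 130], [40, 60, 45, 70],
                             [80, 120, 100, 100])
# t == 100.0; g1 holds the pairs at or below the threshold
```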
Regarding claim 3, Ramasubramonian teaches the method of claim 2, wherein the determining the respective cross-component prediction mode comprises: determining that the respective cross-component prediction mode for each of the plurality of groups is the same (¶0072: video encoder 200 and video decoder 300 may be configured to code blocks of video data using cross-component linear model (CCLM) mode. ¶0077: A neighboring sample with Rec′L[x,y]<=Threshold is classified into group 1; while a neighboring sample with Rec′L[x,y]>Threshold is classified into group 2. See equation (2)).

Regarding claim 4, Ramasubramonian teaches the method of claim 3, wherein first parameters of the cross-component prediction mode for a first group of the plurality of groups are different from second parameters of the cross-component prediction mode for a second group of the plurality of groups (See Fig. 3, “Model 1” [group 1] parameters α1=1, β1=1, and “Model 2” [group 2] parameters α2=1/2, β2=-1).

Regarding claim 5, Ramasubramonian teaches the method of claim 4, wherein the cross-component prediction mode is a cross-component linear model (CCLM), and the first parameters of the CCLM for the first group of the plurality of groups are different from the second parameters of the CCLM for the second group of the plurality of groups (¶0072: video encoder 200 and video decoder 300 may be configured to code blocks of video data using cross-component linear model (CCLM) mode. See Fig. 3, “Model 1” [group 1] parameters α1=1, β1=1, and “Model 2” [group 2] parameters α2=1/2, β2=-1).
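The two per-group linear models cited from Fig. 3 (α1=1, β1=1 for group 1; α2=1/2, β2=-1 for group 2) amount to applying a different predC = α·rec′L + β depending on which side of the threshold a luma sample falls. A toy sketch (our own code, reusing only the Fig. 3 parameters):

```python
def predict_chroma(rec_l, threshold):
    """Predict a chroma sample from its reconstructed luma sample,
    selecting the linear model by the MMLM threshold (Fig. 3 parameters)."""
    if rec_l <= threshold:
        alpha, beta = 1.0, 1.0    # Model 1 (group 1)
    else:
        alpha, beta = 0.5, -1.0   # Model 2 (group 2)
    return alpha * rec_l + beta

assert predict_chroma(100, threshold=100) == 101.0  # group 1: 1*100 + 1
assert predict_chroma(120, threshold=100) == 59.0   # group 2: 0.5*120 - 1
```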
Regarding claim 6, Ramasubramonian teaches the method of claim 2, wherein the determining the feature value further comprises: determining the feature value as one of an average value of the neighboring reconstructed chroma samples of the current chroma block and an average value of the neighboring reconstructed luma samples of the collocated luma block (¶0077, 0087-0089: “Threshold” may be calculated as the average value of the neighboring reconstructed luma samples).

Regarding claim 7, Ramasubramonian teaches the method of claim 2, wherein the determining the feature value further comprises: determining the feature value as one of an average gradient value of the neighboring reconstructed chroma samples of the current chroma block and an average gradient value of the neighboring reconstructed luma samples of the collocated luma block (¶0081, 0115, 0120: avgY and avgC be the average (mean) luma and chroma values of the luma and chroma reference samples).

Regarding claim 8, Ramasubramonian teaches the method of claim 2, wherein the determining the feature value further comprises: determining the feature value as an average value of the neighboring reconstructed chroma samples of the current chroma block and the neighboring reconstructed luma samples of the collocated luma block (¶0077: “Threshold” may be calculated as the average value of the neighboring reconstructed luma samples. ¶0087-0089: the average luma value and the neutral Cb value (value of 512 when the Cb value is in the range of 0 to 1023) may be used to derive two such lines so that four classes may be defined).
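Claims 6-8 recite the feature value as either a plain average or an average gradient of the neighboring reconstructed samples; both reduce to one-line computations. A hedged sketch with our own names, and with "average gradient" read as the mean absolute difference between adjacent neighbors (the claims do not fix the gradient kernel):

```python
def average_feature(neighbors):
    # Feature value as the mean of the neighboring reconstructed samples.
    return sum(neighbors) / len(neighbors)

def average_gradient_feature(neighbors):
    # Feature value as the mean absolute difference between adjacent
    # neighboring reconstructed samples (one possible reading of
    # "average gradient"; other gradient filters would also qualify).
    diffs = [abs(b - a) for a, b in zip(neighbors, neighbors[1:])]
    return sum(diffs) / len(diffs)

assert average_feature([80, 120, 100, 100]) == 100.0
assert average_gradient_feature([80, 120, 100, 100]) == 20.0
```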
Regarding claim 9, Ramasubramonian teaches the method of claim 2, wherein the grouping further comprises: determining a characteristic value associated with each of the luma samples of the collocated luma block; determining whether the characteristic value associated with the respective luma sample of the luma samples of the collocated luma block is larger than the threshold of the feature value (¶0076-0078, 0089: a neighboring sample with Rec′L[x,y]>Threshold is classified into group 2); grouping (i) the respective luma sample of the luma samples of the collocated luma block and (ii) a chroma sample of the chroma samples of the current chroma block corresponding to the respective luma sample into a first group based on the characteristic value associated with the respective luma sample being larger than the threshold of the feature value (¶0076-0078: a video coder may classify neighboring luma samples and neighboring chroma samples of the current block into several groups…a neighboring sample with Rec′L[x,y]>Threshold is classified into group 2); and grouping (i) the respective luma sample of the luma samples of the collocated luma block and (ii) the chroma sample of the chroma samples of the current chroma block corresponding to the respective luma sample into a second group based on the characteristic value associated with the respective luma sample being smaller than the threshold of the feature value (¶0076-0078: Rec′L[x,y]<=Threshold is classified into group 1). 
Regarding claim 10, Ramasubramonian teaches the method of claim 9, wherein the characteristic value of the respective luma sample includes one of (i) a luma sample value of the respective luma sample, and (ii) an average value of the neighboring reconstructed luma samples of the collocated luma block (¶0076-0078: the first group may consist of those neighboring luma samples having values less than or equal to a threshold and the second group may consist of those neighboring luma samples having values greater than the threshold… “Threshold” may be calculated as the average value of the neighboring reconstructed luma samples).

Regarding claim 11, Ramasubramonian teaches the method of claim 2, wherein the respective cross-component prediction mode includes one of cross-component linear model (CCLM), chroma from luma (CfL), convolutional cross-component model (CCCM), multiple filter linear model (MFLM), gradient linear model (GLM), a combination of CCLM, CfL, CCCM, and an angular intra prediction mode (¶0072: CCLM).

Regarding claim 12, Ramasubramonian teaches the method of claim 2, wherein the determining the respective cross-component prediction mode for each of the plurality of groups further comprises: determining the respective cross-component prediction mode based on a corresponding flag that is included in the coded information (¶0168, 0192: intra-prediction unit may generate the prediction block according to an intra-prediction mode indicated by the prediction information syntax elements. ¶0072, 0100: the intra-prediction mode includes CCLM mode).
Regarding claim 16, Ramasubramonian teaches a method of video encoding, the method comprising: determining a feature value based on at least one of (i) neighboring reconstructed chroma samples of a current chroma block and (ii) neighboring reconstructed luma samples of a luma block that is collocated with the current chroma block (¶0077, 0090: “Threshold” may be calculated as the average value of the neighboring reconstructed luma samples); grouping chroma samples of the current chroma block and luma samples of the luma block that is collocated with the current chroma block into a plurality of groups based on a threshold of the feature value, each of the plurality of groups including a respective chroma sample and a respective luma sample (Fig. 3, ¶0076-0078: a video coder may classify neighboring luma samples and neighboring chroma samples of the current block into several groups…the video coder may classify the neighboring samples into two groups. In this example, the first group may consist of those neighboring luma samples having values less than or equal to a threshold and the second group may consist of those neighboring luma samples having values greater than the threshold. ¶0089: video encoder and video decoder may derive the samples that are collocated to the current block in other pictures); determining a respective cross-component prediction mode for each of the plurality of groups by comparing the respective chroma sample and the respective luma sample of each respective group to the determined feature value (¶0072, 0076-0078: CCLM mode, MMLM mode; FIG. 3 is a conceptual diagram of two linear models (e.g., prediction models) for neighboring coded luma samples that are classified into 2 groups…A neighboring sample with Rec′ L[x,y]<=Threshold is classified into group 1; while a neighboring sample with Rec′L[x,y]>Threshold is classified into group 2. 
See equations (1)-(2) showing the relationship between a predicted value of a chroma sample and reconstructed value of a luma sample); and encoding the current chroma block based on the determined cross-component prediction modes of the plurality of groups (¶0072: video encoder 200 and video decoder 300 may be configured to code blocks of video data using cross-component linear model (CCLM) mode. ¶0074, 0076-0078: In MMLM mode, a video coder may classify neighboring luma samples and neighboring chroma samples of the current block into several groups…The parameters are then used by video encoder 200 and video decoder 300 to derive the chroma sample prediction of the current CU from the reconstructed luma samples of the current CU (e.g., derive predC(i,j), which is the predicted block)).

Regarding claim 17, Ramasubramonian teaches the method of claim 16, wherein the determining the respective cross-component prediction mode comprises: determining that the respective cross-component prediction mode for each of the plurality of groups is the same (¶0076-0078: In MMLM mode, a video coder may classify neighboring luma samples and neighboring chroma samples of the current block into several groups…FIG. 3 is a conceptual diagram of two linear models (e.g., prediction models) for neighboring coded luma samples that are classified into 2 groups).

Regarding claim 18, Ramasubramonian teaches the method of claim 17, wherein first parameters of the cross-component prediction mode for a first group of the plurality of groups are different from second parameters of the cross-component prediction mode for a second group of the plurality of groups (See Fig. 3, “Model 1” (group 1) parameters α1 = 1, β1 = 1, and “Model 2” (group 2) parameters α2 = 1/2, β2 = -1).
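The MMLM behavior the examiner quotes (classify luma samples against a threshold computed as the average of the neighboring reconstructed luma samples, then apply a per-group linear model predC = α·Rec′L + β) can be sketched as follows. This is an illustrative sketch only: the function name, array shapes, and sample values are invented here, and the Fig. 3 parameters (α1 = 1, β1 = 1; α2 = 1/2, β2 = -1) are used purely as example coefficients.

```python
import numpy as np

def mmlm_predict(rec_luma, neighbor_luma, models):
    """Toy multi-model linear-model (MMLM) chroma prediction sketch.

    The threshold is the average of the neighboring reconstructed luma
    samples (per the cited ¶0077).  Each luma sample of the block is
    classified against it, and the matching linear model
    predC = alpha * recL + beta is applied.  `models` maps
    group index -> (alpha, beta).
    """
    threshold = np.mean(neighbor_luma)
    pred_chroma = np.empty_like(rec_luma, dtype=float)
    for idx, luma in np.ndenumerate(rec_luma):
        group = 0 if luma <= threshold else 1   # two-group classification
        alpha, beta = models[group]
        pred_chroma[idx] = alpha * luma + beta  # predC = alpha*recL + beta
    return pred_chroma

# Fig. 3 example parameters: model 1 (α=1, β=1), model 2 (α=1/2, β=-1)
models = {0: (1.0, 1.0), 1: (0.5, -1.0)}
luma = np.array([[10, 20], [30, 40]])
neighbors = np.array([15, 25, 35])  # threshold = 25
print(mmlm_predict(luma, neighbors, models))
```

Samples at or below the threshold (10, 20) take model 1, while samples above it (30, 40) take model 2, mirroring the group-1/group-2 split of Fig. 3.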
Regarding claim 19, Ramasubramonian teaches the method of claim 18, wherein the cross-component prediction mode is a cross-component linear model (CCLM), and the first parameters of the CCLM for the first group of the plurality of groups are different from the second parameters of the CCLM for the second group of the plurality of groups (¶0072: video encoder 200 and video decoder 300 may be configured to code blocks of video data using cross-component linear model (CCLM) mode. See Fig. 3, “Model 1” (group 1) parameters α1 = 1, β1 = 1, and “Model 2” (group 2) parameters α2 = 1/2, β2 = -1).

Regarding claim 21, Ramasubramonian teaches a method of processing visual media data, the method comprising: processing a bitstream that includes the visual media data according to a format rule (¶0043, 0068-0069: video encoder 200 may generate a bitstream including encoded video data, e.g., syntax elements describing partitioning of a picture into blocks (e.g., CUs) and prediction and/or residual information for the blocks…video decoder 300 may receive the bitstream and decode the encoded video data), wherein the bitstream includes coded information of a current chroma block and a luma block that is collocated with the current chroma block (Figs. 3-7, ¶0076, 0089: video decoder may derive the samples [luma samples and chroma samples] that are collocated to the current block in other pictures); and the format rule specifies that a feature value is determined based on at least one of (i) neighboring reconstructed chroma samples of the current chroma block and (ii) neighboring reconstructed luma samples of the luma block that is collocated with the current chroma block (¶0077, 0090: “Threshold” may be calculated as the average value of the neighboring reconstructed luma samples); chroma samples of the current chroma block and luma samples of the luma block that is collocated with the current chroma block are grouped into a plurality of groups based on a threshold of the feature value, each of the plurality of groups including a respective chroma sample and a respective luma sample (Fig. 3, ¶0076-0078: a video coder may classify neighboring luma samples and neighboring chroma samples of the current block into several groups…the video coder may classify the neighboring samples into two groups. In this example, the first group may consist of those neighboring luma samples having values less than or equal to a threshold and the second group may consist of those neighboring luma samples having values greater than the threshold. ¶0089: video encoder and video decoder may derive the samples that are collocated to the current block in other pictures); a respective cross-component prediction mode for each of the plurality of groups is determined by comparing the respective chroma sample and the respective luma sample of each respective group to the determined feature value (¶0072, 0076-0078: CCLM mode, MMLM mode; FIG. 3 is a conceptual diagram of two linear models (e.g., prediction models) for neighboring coded luma samples that are classified into 2 groups…A neighboring sample with Rec′L[x,y]<=Threshold is classified into group 1; while a neighboring sample with Rec′L[x,y]>Threshold is classified into group 2.
See equations (1)-(2) showing the relationship between a predicted value of a chroma sample and reconstructed value of a luma sample); and the current chroma block is reconstructed based on the determined cross-component prediction modes of the plurality of groups (¶0072: video encoder 200 and video decoder 300 may be configured to code blocks of video data using cross-component linear model (CCLM) mode. ¶0074, 0076-0078: In MMLM mode, a video coder may classify neighboring luma samples and neighboring chroma samples of the current block into several groups…The parameters are then used by video encoder 200 and video decoder 300 to derive the chroma sample prediction of the current CU from the reconstructed luma samples of the current CU (e.g., derive predC(i,j), which is the predicted block)).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 13-15 and 20 is/are rejected under 35 U.S.C.
103 as being unpatentable over Ramasubramonian et al. (US 20200154115 A1) in view of Kuo et al. (US 20250047886 A1).

Regarding claim 13, Ramasubramonian does not disclose wherein the determining the respective cross-component prediction mode for each of the plurality of groups further comprises: determining whether each of the plurality of groups shares a cross-component prediction mode based on a flag that is included in the coded information; and determining the cross-component prediction mode for each of the plurality of groups when the flag indicates that each of the plurality of groups shares the same cross-component prediction mode. However, Kuo teaches wherein the determining the respective cross-component prediction mode for each of the plurality of groups further comprises: determining whether each of the plurality of groups shares a cross-component prediction mode based on a flag that is included in the coded information (¶0204, 0229: The proposed GLM can be combined with above discussed MMLM or ELM. When combined with classification, each group can share or have its own filter shape, with syntaxes indicating shape for each group. ¶0094: MMLM prediction mode, see Fig. 6 illustrating an example of classifying the neighboring samples into two groups based on the value Threshold); and determining the cross-component prediction mode for each of the plurality of groups when the flag indicates that each of the plurality of groups shares the same cross-component prediction mode (¶0204: The proposed GLM can be combined with above discussed MMLM or ELM. When combined with classification, each group can share or have its own filter shape, with syntaxes indicating shape for each group). Note that Kuo at ¶0204 discloses syntaxes indicating shape for each group. It is well-known in the art that syntaxes are used to convey information to a decoder.
In this case, the syntaxes are used to convey information whether each group can share or have its own filter shape. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Ramasubramonian by incorporating the teaching of Kuo as noted above, in order to improve the coding efficiency (Kuo: ¶0181, 0210).

Regarding claim 14, Ramasubramonian teaches the method of claim 2, wherein the reconstructing the current chroma block further comprises: generating prediction samples for the chroma samples in each of the plurality of groups based on the respective cross-component prediction mode (¶0077-0078: In FIG. 3, after neighboring samples are classified into two classes (i.e., class 340 and class 342), video encoder 200 and video decoder 300 may be configured to derive two independent linear models (e.g., prediction models), separately, based on the two classes as depicted in FIG. 3). Ramasubramonian does not explicitly disclose applying a filter on the prediction samples for the chroma samples in each of the plurality of groups. However, Kuo discloses applying a filter on the prediction samples for the chroma samples in each of the plurality of groups (¶0202-0204: the gradient filter used for deriving the gradient direction can be the same or different with the GLM filter in shape… The proposed GLM can be combined with above discussed MMLM or ELM. When combined with classification, each group can share or have its own filter shape…). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Ramasubramonian by incorporating the teaching of Kuo, in order to improve the coding efficiency (Kuo: ¶0181, 0210).
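The shared-mode signaling discussed for claim 13 can be illustrated with a minimal decode-side sketch. The names here (`resolve_group_modes`, `shared_flag`, the mode strings) are hypothetical stand-ins for illustration and do not reproduce syntax from either reference:

```python
def resolve_group_modes(shared_flag, signaled_modes, num_groups):
    """Hypothetical resolution of per-group cross-component prediction
    modes from coded information.  When `shared_flag` is set, every
    group reuses the single signaled mode; otherwise each group takes
    its own signaled mode, in order."""
    if shared_flag:
        return [signaled_modes[0]] * num_groups   # one mode shared by all groups
    return list(signaled_modes[:num_groups])      # an individual mode per group

print(resolve_group_modes(True, ["CCLM"], 2))     # ['CCLM', 'CCLM']
print(resolve_group_modes(False, ["CCLM", "GLM"], 2))
```

A single flag plus one mode covers the shared case; only the non-shared case needs a mode signaled per group, which is the bit-saving trade-off the flag enables.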
Regarding claim 15, Ramasubramonian teaches the method of claim 2, wherein the reconstructing the current chroma block further comprises: generating prediction samples for the chroma samples in each of the plurality of groups based on the respective cross-component prediction mode (¶0072, 0077-0078: PredC[x,y] indicates a value of a chroma sample of a prediction block of the current block…video decoder may be configured to derive two independent linear models (e.g., prediction models), separately, based on the two classes as depicted in FIG. 3). Ramasubramonian does not explicitly disclose determining the prediction samples of the current chroma block as a weighted combination of the prediction samples for the chroma samples in each of the plurality of groups. However, Kuo discloses determining the prediction samples of the current chroma block as a weighted combination of the prediction samples for the chroma samples in each of the plurality of groups (¶0136: In the existing CCLM or MMLM design, the neighboring reconstructed luma-chroma sample pairs are classified into one or more sample groups based on the value Threshold. ¶0229-0234: since GLM can be taken as one special CCLM mode, the fusion design can be reused or have its own way. Multiple (two or more) weights can be applied to generation the final predictor… the designs for weights can be determined by the intra prediction mode of adjacent chroma blocks and shift is set equal to 2). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Ramasubramonian by incorporating the teaching of Kuo, in order to improve the coding efficiency (Kuo: ¶0181, 0210).
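The weighted combination cited from Kuo (multiple weights applied to generate the final predictor, with shift set equal to 2) matches the shape of a standard fixed-point blend. The sketch below assumes that reading; the function name and the specific weights (3 and 1) are chosen purely for illustration:

```python
def fuse_predictors(pred_a, pred_b, w_a=3, w_b=1, shift=2):
    """Hedged sketch of predictor fusion: a fixed-point weighted
    combination of two chroma predictors.  With shift = 2 (as in the
    cited Kuo ¶0234), the integer weights must sum to 1 << shift
    (here 3 + 1 = 4)."""
    assert w_a + w_b == 1 << shift
    offset = 1 << (shift - 1)                    # rounding offset
    return [(w_a * a + w_b * b + offset) >> shift
            for a, b in zip(pred_a, pred_b)]

print(fuse_predictors([100, 104], [96, 100]))    # [99, 103]
```

Each output sample is a 3:1 blend of the two predictors, computed entirely in integer arithmetic as is typical in video codecs.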
Regarding claim 20, Ramasubramonian discloses the method of claim 16, wherein the determining the respective cross-component prediction mode includes determining that the respective cross-component prediction mode for each of the plurality of groups is the same (¶0072: video encoder 200 and video decoder 300 may be configured to code blocks of video data using cross-component linear model (CCLM) mode. ¶0077: A neighboring sample with Rec′L[x,y]<=Threshold is classified into group 1; while a neighboring sample with Rec′L[x,y]>Threshold is classified into group 2. See equation (2)). Ramasubramonian does not explicitly disclose the method includes encoding a flag indicating that each of the plurality of groups shares the same cross-component prediction mode. However, Kuo teaches the method includes encoding a flag indicating that each of the plurality of groups shares the same cross-component prediction mode (¶0204: The proposed GLM can be combined with above discussed MMLM or ELM. When combined with classification, each group can share or have its own filter shape, with syntaxes indicating shape for each group). Note that Kuo at ¶0204 discloses syntaxes indicating shape for each group. It is well-known in the art that syntaxes are used to convey information to a decoder. In this case, the syntaxes are used to convey information whether each group can share or have its own filter shape. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Ramasubramonian by incorporating the teaching of Kuo as noted above, in order to improve the coding efficiency (Kuo: ¶0181, 0210).

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kim et al. (US 20250150605 A1) describes “a video signal processing method and a device therefor, so as to increase the coding efficiency of a video signal” (¶0003). Zhang et al.
(US 20210092396 A1) describes “SINGLE-LINE CROSS COMPONENT LINEAR MODEL PREDICTION MODE” (Title). Zhang et al. (Document: JVET-D0110) describes “Enhanced Cross-component Linear Model Intra-prediction” (Title).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NATHNAEL AYNALEM whose telephone number is (571) 270-1482. The examiner can normally be reached M-F 9 AM-5:30 PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH PERUNGAVOOR, can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NATHNAEL AYNALEM/
Primary Examiner, Art Unit 2488

Prosecution Timeline

Jan 06, 2025
Application Filed
Jun 06, 2025
Response after Non-Final Action
Jan 09, 2026
Non-Final Rejection — §102, §103, §DP
Mar 03, 2026
Applicant Interview (Telephonic)
Mar 05, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600319
VEHICLE DOOR INTERFACE SYSTEM
2y 5m to grant Granted Apr 14, 2026
Patent 12587634
Disallowing Unnecessary Layers in Multi-Layer Video Bitstreams
2y 5m to grant Granted Mar 24, 2026
Patent 12581103
VIDEO ENCODING/DECODING METHOD AND DEVICE, AND BITSTREAM STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12581126
LOW COMPLEXITY NN-BASED IN LOOP FILTER ARCHITECTURES WITH SEPARABLE CONVOLUTION
2y 5m to grant Granted Mar 17, 2026
Patent 12572023
OPTICAL NAVIGATION DEVICE WITH INCREASED DEPTH OF FIELD
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
90%
With Interview (+13.9%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 662 resolved cases by this examiner. Grant probability derived from career allow rate.
