Prosecution Insights
Last updated: April 19, 2026
Application No. 18/803,213

Multi-Hypothesis Cross Component Prediction Models

Status: Non-Final OA (§103), Round 1
Filed: Aug 13, 2024
Examiner: HASAN, MAINUL
Art Unit: 2485
Tech Center: 2400 (Computer Networks)
Assignee: Tencent America LLC
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (328 granted / 441 resolved), +16.4% vs Tech Center average
Interview Lift: +24.9% among resolved cases with an interview
Typical Timeline: 2y 4m average prosecution; 27 applications currently pending
Career History: 468 total applications across all art units

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 39.5% (-0.5% vs TC avg)
§102: 22.2% (-17.8% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 441 resolved cases.

Office Action (§103, Non-Final)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. There are a total of 22 claims, and claims 1-22 are pending.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-4, 19, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Jhu et al. (WO 2025/049984 A2) (see attached document; inventive concepts disclosed in 63/580129 dated 09/01/2023) in view of Chiang et al. (US PGPub 2025/0063155 A1).

Regarding claim 1, Jhu et al. teach a method for decoding video data (Fig. 3; [0080]), comprising: receiving a video bitstream including a current coding block of a current image frame ([0083], L1-4; it teaches that the video decoder 30 receives an encoded video bitstream that represents video blocks of an encoded video frame and associated syntax elements), wherein the video bitstream includes a first syntax element for a multi-hypothesis cross-component prediction (MH-CCP) mode ([0202]; it teaches using a multiple-hypothesis EIP model, and [0101] states that after receiving a bitstream generated by the video encoder 20, the video decoder 30 may parse the bitstream to obtain syntax elements and may reconstruct the frames of the video data based at least in part on the syntax elements obtained from the bitstream; note that the EIP model as disclosed in the reference is equivalent to the CCLM model as stated in [0169]); based on the first syntax element, determining that the MH-CCP mode is enabled to reconstruct a first chroma sample of the current coding block based on at least a first luma sample and associated neighboring luma samples ([0121]; Eqn. 3; it teaches that predC(i,j) is the predicted chroma samples in the CU, recL'(i,j) represents down-sampled reconstructed luma samples of the CU which are obtained by performing downsampling on the reconstructed luma samples recL(i,j), and linear model parameters are derived from at most four neighboring chroma samples and their corresponding down-sampled luma samples; [0244] also states explicitly that a flag indicates the multiple-hypothesis mode), the first luma sample collocated with the first chroma sample ([0151]; it teaches a center (C) luma sample which is collocated with the chroma sample to be predicted); identifying the first luma sample and one or more neighboring luma samples in the current coding block ([0126]; Fig. 9 shows the 2Nx2N luma block and one or more neighboring luma samples); generating a plurality of nonlinear terms based on at least a subset of the first luma sample and the one or more neighboring luma samples ([0152]; it shows the nonlinear term P in terms of the center luma sample C, wherein the neighboring samples of the center luma sample C are denoted by N, S, E, W, as described in [0155]); predicting the first chroma sample collocated with the first luma sample in the current coding block based on the plurality of nonlinear terms ([0151]-[0156]; it teaches predicting the chroma sample based on the nonlinear terms, the center luma sample, and the neighboring samples as shown in Fig. 11); and reconstructing the current image frame including the current coding block ([0101]; it teaches that the video decoder 30 may reconstruct the frames of the video data based at least in part on the syntax elements obtained from the bitstream and, after reconstructing the coding blocks for each CU of a frame, may reconstruct the frame).

Although Jhu et al. teach a multiple-hypothesis EIP mode similar to the MH-CCP mode, they do not explicitly teach the MH-CCP mode. However, Chiang et al., in the same field of endeavor (Abstract), teach a decoding method where the MH-CCP mode is used in the same context as the claimed limitations (Chiang et al.; [0237]; it teaches that the MH-CCP mode can be enabled/disabled by explicit rules, e.g., syntax at the block, slice, picture, SPS, or PPS level; it also states that a block-level flag is signalled/parsed to indicate whether to apply the improvement on the one or more traditional intra prediction modes; here the improvement means the usage of MH-CCP over traditional prediction).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Jhu et al.'s multiple-hypothesis EIP mode with Chiang et al.'s usage of the multiple-hypothesis CCLM mode because, for some complex features, the combined prediction of multiple hypotheses may result in better performance; multiple-hypothesis CCLM is disclosed to blend the predictions from multiple CCLM methods (Chiang et al.; [0169]).

Regarding claim 2, Jhu et al. and Chiang et al. teach the method of claim 1, wherein each of the plurality of nonlinear terms includes one of: a square of the first luma sample; a square of a respective neighboring luma sample; the first luma sample raised to an M-th power, where M is an integer greater than 2 (Jhu et al.; [0152]; it teaches that the nonlinear term P is represented as a power of two of the center luma sample C); the respective neighboring luma sample raised to the M-th power; a product of the first luma sample and a respective subset of one or more neighboring luma samples; and a product of a respective subset of two or more neighboring luma samples.

Regarding claim 3, Jhu et al. and Chiang et al. teach the method of claim 1, wherein predicting the first chroma sample further comprises: combining the plurality of nonlinear terms and an offset term to generate the first chroma sample (Jhu et al.; [0151]-[0155], [0160]; it shows the equation for predChromaVal, which calculates the chroma prediction value from the nonlinear term P as well as the offset term B).

Regarding claim 4, Jhu et al. and Chiang et al. teach the method of claim 1, wherein predicting the first chroma sample further comprises: combining a plurality of linear terms and the plurality of nonlinear terms to generate the first chroma sample, each linear term corresponding to a respective luma sample selected from a luma sample set including the first luma sample and the one or more neighboring luma samples (Jhu et al.; [0151]-[0155], [0160]; it shows the equation for predChromaVal, which calculates the chroma prediction value from the nonlinear term P and the offset term B, and [0261] teaches that the weights for both the EIP-coded block and the intra- or inter-coded block are bigger than zero and less than one, or the weights for the intra- or inter-coded block gradually change from one to zero from one area to another area in the current block; here the weights are linear terms when used along with multiple-hypothesis chroma prediction modes).

Regarding claim 19, Jhu et al. and Chiang et al. teach the method of claim 1, wherein a first nonlinear usage syntax element is signaled in the video bitstream at one of a block level, a superblock level, an image frame level, a slice level, a tile level, and an image sequence level for the current coding block, the nonlinear usage syntax element indicating whether to use at least one nonlinear term in the MH-CCP mode (Jhu et al.; [0236]; it teaches that the syntax elements can be signalled at the SPS/DPS/VPS/SEI/APS/PPS/PH/SH/Region/CTU/CU/Subblock/Sample levels).

Regarding claim 21, Jhu et al. teach a computing system (Fig. 33, reference numeral 3310), comprising: control circuitry (Fig. 33, reference numeral 3320); and memory (Fig. 33, reference numeral 3330) storing one or more programs (Fig. 33, reference numeral 3332) configured to be executed by the control circuitry (Fig. 33, reference numeral 3320), the one or more programs further comprising instructions for: receiving video data comprising a current coding block of a current image frame (Fig. 2; [0054]-[0055]; [0061]; it teaches that video encoder 20 may perform intra and inter predictive coding of video blocks within video frames); encoding the current image frame (Fig. 2; [0061]); transmitting the encoded current image frame via a video bitstream (Fig. 1; [0054]-[0055]; it teaches that the encoded video data may be transmitted directly to the destination device 14 via the output interface 22 of the source device 12); and signaling, via the video bitstream, a first syntax element for a multi-hypothesis cross-component prediction (MH-CCP) mode indicating whether to reconstruct a first chroma sample of the current coding block based on a first luma sample and associated neighboring luma samples ([0121]; Eqn. 3; it teaches that predC(i,j) is the predicted chroma samples in the CU, recL'(i,j) represents down-sampled reconstructed luma samples of the CU which are obtained by performing downsampling on the reconstructed luma samples recL(i,j), and linear model parameters are derived from at most four neighboring chroma samples and their corresponding down-sampled luma samples; [0244] also states explicitly that a flag indicates the multiple-hypothesis mode), the first luma sample collocated with the first chroma sample ([0151]; it teaches a center (C) luma sample which is collocated with the chroma sample to be predicted); wherein when the MH-CCP mode is enabled, a plurality of nonlinear terms are determined based on at least a subset of the first luma sample and one or more neighboring luma samples of the first luma sample ([0152]; it shows the nonlinear term P in terms of the center luma sample C, wherein the neighboring samples of the center luma sample C are denoted by N, S, E, W, as described in [0155]), and the first chroma sample collocated with the first luma sample is predicted based on the plurality of nonlinear terms ([0151]-[0156]; it teaches predicting the chroma sample based on the nonlinear terms, the center luma sample, and the neighboring samples as shown in Fig. 11).

Although Jhu et al. teach a multiple-hypothesis EIP mode similar to the MH-CCP mode, they do not explicitly teach the MH-CCP mode. However, Chiang et al., in the same field of endeavor (Abstract), teach a decoding method where the MH-CCP mode is used in the same context as the claimed limitations (Chiang et al.; [0237]; it teaches that the MH-CCP mode can be enabled/disabled by explicit rules, e.g., syntax at the block, slice, picture, SPS, or PPS level; it also states that a block-level flag is signalled/parsed to indicate whether to apply the improvement on the one or more traditional intra prediction modes; here the improvement means the usage of MH-CCP over traditional prediction).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Jhu et al.'s multiple-hypothesis EIP mode with Chiang et al.'s usage of the multiple-hypothesis CCLM mode because, for some complex features, the combined prediction of multiple hypotheses may result in better performance; multiple-hypothesis CCLM is disclosed to blend the predictions from multiple CCLM methods (Chiang et al.; [0169]).

Regarding claim 22, Jhu et al. teach a non-transitory computer-readable storage medium storing one or more programs for execution by control circuitry of a computing system (Fig. 33, reference numerals 3310, 3320, 3330, 3332), the one or more programs comprising instructions for: obtaining a source video sequence including a current image frame having a current coding block (Fig. 2; [0061]; it teaches that video encoder 20 may perform intra and inter predictive coding of video blocks within video frames); and performing a conversion between the source video sequence and a video bitstream (Fig. 2 shows the video encoder 20, which converts the source video sequence into a coded video bitstream), wherein the video bitstream comprises: the current image frame having the current coding block (Fig. 21 shows the current coding block inside a current image frame); and a first syntax element for a multi-hypothesis cross-component prediction (MH-CCP) mode indicating whether to reconstruct a first chroma sample of the current coding block based on a first luma sample and associated neighboring luma samples ([0121]; Eqn. 3; it teaches that predC(i,j) is the predicted chroma samples in the CU, recL'(i,j) represents down-sampled reconstructed luma samples of the CU which are obtained by performing downsampling on the reconstructed luma samples recL(i,j), and linear model parameters are derived from at most four neighboring chroma samples and their corresponding down-sampled luma samples; [0244] also states explicitly that a flag indicates the multiple-hypothesis mode), the first luma sample collocated with the first chroma sample ([0151]; it teaches a center (C) luma sample which is collocated with the chroma sample to be predicted); wherein when the MH-CCP mode is enabled, a plurality of nonlinear terms are determined based on at least a subset of the first luma sample and one or more neighboring luma samples of the first luma sample ([0152]; it shows the nonlinear term P in terms of the center luma sample C, wherein the neighboring samples of the center luma sample C are denoted by N, S, E, W, as described in [0155]), and the first chroma sample collocated with the first luma sample is predicted based on the plurality of nonlinear terms ([0151]-[0156]; it teaches predicting the chroma sample based on the nonlinear terms, the center luma sample, and the neighboring samples as shown in Fig. 11).

Although Jhu et al. teach a multiple-hypothesis EIP mode similar to the MH-CCP mode, they do not explicitly teach the MH-CCP mode. However, Chiang et al., in the same field of endeavor (Abstract), teach a decoding method where the MH-CCP mode is used in the same context as the claimed limitations (Chiang et al.; [0237]; it teaches that the MH-CCP mode can be enabled/disabled by explicit rules, e.g., syntax at the block, slice, picture, SPS, or PPS level; it also states that a block-level flag is signalled/parsed to indicate whether to apply the improvement on the one or more traditional intra prediction modes; here the improvement means the usage of MH-CCP over traditional prediction). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Jhu et al.'s multiple-hypothesis EIP mode with Chiang et al.'s usage of the multiple-hypothesis CCLM mode because, for some complex features, the combined prediction of multiple hypotheses may result in better performance; multiple-hypothesis CCLM is disclosed to blend the predictions from multiple CCLM methods (Chiang et al.; [0169]).

Allowable Subject Matter

Claims 5-18 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
"METHODS AND DEVICES FOR MULTI-HYPOTHESIS-BASED PREDICTION" - Jhu et al., US PGPub 2025/0039437 A1.
"Method And Apparatus Of Cross-Component Linear Model Prediction In Video Coding System" - Tsai et al., US PGPub 2025/0071331 A1.
"CONTEXT CODING FOR TRANSFORM SKIP MODE" - Zhu et al., US PGPub 2021/0385439 A1.
"IMPROVED CROSS COMPONENT RESIDUAL PREDICTION" - WO 2025/042869 A1.
"Enhanced Cross-Component Linear Model for Chroma Intra-Prediction in Video Coding" - Zhang et al., IEEE Transactions on Image Processing, Vol. 27, No. 8, August 2018.
"Joint Cross-Component Linear Model For Chroma Intra Prediction" - Ghaznavi-Youvalari et al., 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), 21-24 September 2020.
"Cross-Component Prediction Boosted With Local and Non-Local Information in Video Coding" - Zhang et al., IEEE Transactions on Image Processing, Vol. 33, 2024.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAINUL HASAN, whose telephone number is (571) 272-0422. The examiner can normally be reached MON-FRI, 10AM-6PM EST, alternate Fridays. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JAY PATEL, can be reached at (571) 272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Mainul Hasan/
Primary Examiner, Art Unit 2485
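The rejection turns on a CCCM-style cross-component model: a chroma sample predicted from the collocated (center) luma sample C, its neighbors N, S, E, W, a nonlinear term P (a square of a luma sample), and an offset term B, with a multi-hypothesis mode blending several such predictors. A minimal illustrative sketch of that structure, not code from either cited reference; the function names, coefficient values, and blending weights are hypothetical:

```python
# Illustrative sketch of a CCCM-style cross-component predictor of the kind
# the Office Action cites (predChromaVal from linear terms, a nonlinear
# term P, and an offset term B). All numeric choices are hypothetical.

BIT_DEPTH = 10
MID = 1 << (BIT_DEPTH - 1)  # mid-range sample value (512 for 10-bit video)

def nonlinear_term(c: int) -> int:
    """Nonlinear term P: square of the center luma sample, scaled back
    into sample range (cf. the square-of-luma term in the rejection)."""
    return (c * c + MID) >> BIT_DEPTH

def pred_chroma(c, n, s, e, w, coeffs):
    """One hypothesis: linear terms on the center luma sample C and its
    neighbors N/S/E/W, plus the nonlinear term P and offset term B."""
    p = nonlinear_term(c)
    b = MID  # offset (bias) term B
    terms = (c, n, s, e, w, p, b)
    return sum(ci * ti for ci, ti in zip(coeffs, terms))

def mh_ccp(c, n, s, e, w, hypotheses, weights):
    """Multi-hypothesis mode: blend the predictions of several models."""
    return sum(wt * pred_chroma(c, n, s, e, w, h)
               for wt, h in zip(weights, hypotheses))
```

With identity-like coefficients (1, 0, 0, 0, 0, 0, 0) a hypothesis reduces to copying the collocated luma sample, and equal weights simply average the hypotheses; a real codec would instead derive the coefficients from neighboring reconstructed luma/chroma pairs, as the cited Eqn. 3 derives its linear model parameters.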

Prosecution Timeline

Aug 13, 2024
Application Filed
Jan 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598314: NEURAL NETWORK BASED FILTERING PROCESS FOR MULTIPLE COLOR COMPONENTS IN VIDEO CODING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598326: ENTROPY CODING FOR VIDEO ENCODING AND DECODING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593065: AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VIDEO CODING AND DECODING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581113: TEMPLATE-MATCHING BASED ADAPTIVE BLOCK VECTOR RESOLUTION (ABVR) IN IBC (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581057: VIDEO PREDICTIVE CODING METHOD AND APPARATUS (granted Mar 17, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants; studying what changed in those prosecutions can inform a response strategy here.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+24.9%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 441 resolved cases by this examiner; grant probability is derived from the career allow rate.
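The headline projections follow from simple arithmetic on the examiner's stated career data. A sketch of how the figures plausibly combine; the additive interview lift and the rounding are assumptions about the dashboard's method, not a disclosed formula:

```python
# Reconstructing the headline projections from the stated career data.
# Treating the +24.9% interview lift as a simple additive bump (capped at
# 100%) is an assumption, not the dashboard's documented methodology.

granted, resolved = 328, 441
allow_rate = granted / resolved        # career allow rate, ~0.744
interview_lift = 0.249                 # stated +24.9% lift

grant_probability = round(allow_rate * 100)                          # 74
with_interview = round(min(allow_rate + interview_lift, 1.0) * 100)  # 99

print(grant_probability, with_interview)
```

The 74% and 99% shown above fall out directly: 328/441 rounds to 74%, and adding the 24.9-point lift rounds to 99%.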
