Prosecution Insights
Last updated: April 19, 2026
Application No. 18/720,890

MULTI-MODEL CROSS-COMPONENT LINEAR MODEL PREDICTION

Status: Final Rejection (§102, §103)
Filed: Jun 17, 2024
Examiner: ABOUZAHRA, HESHAM K
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: MediaTek Inc.
OA Round: 2 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 81% — above average (324 granted / 402 resolved; +22.6% vs TC avg)
Interview Lift: +2.3% on resolved cases with interview (a minimal lift)
Avg Prosecution: 2y 5m typical timeline
Total Applications: 441 across all art units (402 resolved, 39 currently pending)
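The headline numbers above are straightforward ratios over this examiner's resolved cases. A quick Python sketch of the arithmetic; the back-solved TC baseline and the with-interview figure are this note's assumptions about how the dashboard combines its own displayed values, not the tool's actual code:

    granted = 324            # granted applications (from the panel above)
    resolved = 402           # resolved dispositions: grants + abandonments
    interview_lift = 0.023   # +2.3% observed on resolved cases with interview

    allow_rate = granted / resolved               # 324 / 402 = 0.806, shown as 81%
    tc_avg = allow_rate - 0.226                   # "+22.6% vs TC avg" implies a ~58% baseline
    with_interview = allow_rate + interview_lift  # 0.829, shown as 83%

    print(f"Career allow rate: {allow_rate:.1%} ({allow_rate - tc_avg:+.1%} vs TC avg)")
    print(f"With interview:    {with_interview:.1%}")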

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 22.4% (-17.6% vs TC avg)
§112: 5.9% (-34.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 402 resolved cases.
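Each row is the same delta computation against an estimated Tech Center baseline. Back-solving those baselines from the displayed figures (a sketch, not the tool's code) shows all four statutes compare against the same flat estimate:

    examiner = {"101": 0.024, "103": 0.580, "102": 0.224, "112": 0.059}
    delta_vs_tc = {"101": -0.376, "103": 0.180, "102": -0.176, "112": -0.341}

    for statute, rate in examiner.items():
        baseline = rate - delta_vs_tc[statute]   # delta = examiner rate - TC average
        print(f"§{statute}: examiner {rate:.1%}, implied TC average {baseline:.1%}")

Every row back-solves to a 40.0% baseline, i.e. a single flat Tech Center estimate rather than a per-statute one.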

Office Action

Rejections: §102, §103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1 and 12-14 have been amended. Claims 1-14 are pending for examination.

Response to Arguments

Applicant's arguments with respect to claims 1 and 12-14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3 and 5-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by ZHANG (US 20190045184 A1).

Regarding claim 1, ZHANG teaches a video coding method comprising: receiving data for a block of pixels to be encoded or decoded as a current block of a current picture of a video (Figs. 15 & 16: Input data related to a current chroma block is received in step 1510 [0072]; input data related to a current chroma block is received in step 1610 [0073].); constructing two or more chroma prediction models based on luma and chroma samples neighboring the current block, where the two or more chroma prediction models correspond to linear models (A mode group including at least two linear-model prediction modes (LM modes) are used for multi-phase Intra prediction [0018].); applying the two or more chroma prediction models to incoming or reconstructed luma samples of the current block to produce two or more model predictions (the mode group may include a first LM mode and a second LM mode, and the corresponding luma sample associated with each chroma sample corresponds to Y0 and Y1 for the first LM mode and the second LM mode respectively [0018].); computing predicted chroma samples by combining the two or more model predictions (a fusion mode can be included in an Intra prediction candidate list, where the fusion mode indicates that the first chroma Intra prediction mode and the second chroma Intra prediction mode are used and the combined Intra prediction is used for the encoding or decoding of the current chroma block [0017].); and using the predicted chroma samples to reconstruct chroma samples of the current block or to encode the current block (Combined Intra prediction for encoding or decoding of the current chroma block is generated by combining first Intra prediction generated according to the first chroma Intra prediction mode and second Intra prediction generated according to the second chroma Intra prediction mode in step 1530 [0072].).

Regarding claim 2, ZHANG teaches the video coding method of claim 1, wherein the predicted chroma samples is a weighted sum of the two or more model predictions ([0015] The combined Intra prediction can be generated using a weighted sum of the first Intra prediction and the second Intra prediction.).

Regarding claim 3, ZHANG teaches the video coding method of claim 2, wherein each of the two or more model predictions is weighted based on a position of the predicted sample in the current block (In one example, the weighting coefficient of the weighted sum is position dependent. [0015]).

Regarding claim 5, ZHANG teaches the video coding method of claim 2, wherein the two or more model predictions are weighted according to corresponding two or more weighting factors, wherein the corresponding two or more weighting factors are assigned different values in different regions of the current block ([0029] FIG. 10 illustrates an example of the Fusion mode prediction process, where the Fusion mode prediction is generated by linearly combining mode L prediction and mode K prediction with respective weighting factors, w1 and w2.).

Regarding claim 6, ZHANG teaches the video coding method of claim 2, wherein each of the two or more model predictions is weighted based on a similarity measure between boundary samples of the current block and reconstructed neighboring samples of the current block ([0015] The combined Intra prediction can be generated using a weighted sum of the first Intra prediction and the second Intra prediction.).

Regarding claim 7, ZHANG teaches the video coding method of claim 1, wherein the two or more chroma prediction models comprises a first linear model that is derived based on neighboring reconstructed luma samples above the current block and a second linear model that is derived based on neighboring reconstructed luma samples left of the current block ([0009] Parameters a and b are derived based on previously decoded luma and chroma samples from top and left neighbouring area. Figs. 2-4).

Regarding claim 8, ZHANG teaches the video coding method of claim 7, wherein the two or more chroma prediction models further comprises a third linear model that is derived based on neighboring reconstructed luma samples above the current block and left of the current block (a third mode used by a previous processed chroma component of the current chroma block. [claim 8]).

Regarding claim 9, ZHANG teaches the video coding method of claim 1, wherein the predicted chroma samples in different regions of the current block are computed by different sets of linear models ([0052] In another example, mode K corresponds to the mode used by the luma component of any sub-block in the current block. FIG. 11 illustrates an exemplary sub-block 1110 in the current block 1120, where the Intra prediction mode of sub-block 1110 for the luma component is used as the mode K Intra prediction for deriving the Fusion mode prediction.).

Regarding claim 10, ZHANG teaches the video coding method of claim 1, wherein the two or more chroma prediction models comprises a first plurality of linear models that are derived based on neighboring reconstructed luma samples above the current block and a second plurality of linear models that are derived based on neighboring reconstructed luma samples left of the current block ([0009] Parameters a and b are derived based on previously decoded luma and chroma samples from top and left neighbouring area. Figs. 2-4).

Regarding claim 11, ZHANG teaches the video coding method of claim 1, wherein the predicted chroma samples is computed by further combining inter-prediction or intra-prediction of the current block with the two or more model predictions produced by the two or more chroma prediction models (a third mode used by a previous processed chroma component of the current chroma block. [claim 8]).

Regarding claim 12, the electronic apparatus of claim 12 is rejected under the same art and evidence used to reject claim 1.

Regarding claim 13, the video decoding method of claim 13 is rejected under the same art and evidence used to reject claim 1.
Regarding claim 14, the video encoding method of claim 14 is rejected under the same art and evidence used to reject claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over ZHANG in view of Bandyopadhyay (US 20220094940 A1).

Regarding claim 4, ZHANG teaches the video coding method of claim 2. ZHANG does not teach the following limitations; however, in analogous art, Bandyopadhyay teaches wherein the two or more model predictions are weighted according to distances from the predicted sample to top and left boundaries of the current block ([0160] A distance between centroids of the two regions may be calculated. This distance may be calculated in various ways. As an example, for a first of the two clusters, a distance (D1) between data points and centroid in that cluster may be calculated. For the second of the two clusters, a distance (D2) between data points and centroid in that cluster may be calculated. A distance (DN) between the two centroids may be calculated. On condition that DN ≤ λ·(D1+D2), the two regions may be merged into a single region.). It would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to take the teachings of ZHANG and apply them to Bandyopadhyay. One would be motivated as such to improve linear model estimation for template-based video coding (Bandyopadhyay: [Abstract]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HESHAM K ABOUZAHRA, whose telephone number is (571) 270-0425. The examiner can normally be reached M-F 8-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jamie Atala, can be reached at 57127227384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HESHAM K ABOUZAHRA/
Primary Examiner, Art Unit 2486
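For orientation, both grounds above turn on the same two mechanics: deriving more than one linear luma-to-chroma model from neighboring reconstructed samples (claim 1), and blending the resulting model predictions with position-dependent weights, for example the sample's distances to the top and left block boundaries (claims 3-4). Below is a minimal, illustrative NumPy sketch of that general technique; the function names, the least-squares fit, and the particular distance blend are this note's simplifications, not the claimed implementation or ZHANG's disclosure:

    import numpy as np

    def fit_linear_model(luma, chroma):
        # Least-squares fit of chroma = a*luma + b over neighbor sample pairs.
        # (Real codecs use cheaper min/max-based derivations; least squares
        # keeps the sketch short.)
        a, b = np.polyfit(np.asarray(luma, float), np.asarray(chroma, float), 1)
        return a, b

    def predict_chroma(rec_luma, top_luma, top_chroma, left_luma, left_chroma):
        # rec_luma: reconstructed luma of the current block, shape (H, W).
        # top_* / left_*: neighboring sample pairs above / left of the block.
        h, w = rec_luma.shape

        # Two chroma prediction models, one per neighboring region
        # (cf. claims 1 and 7 as characterized in the action above).
        a_t, b_t = fit_linear_model(top_luma, top_chroma)
        a_l, b_l = fit_linear_model(left_luma, left_chroma)

        # Apply each model to the block's luma samples: two model predictions.
        pred_top = a_t * rec_luma + b_t
        pred_left = a_l * rec_luma + b_l

        # Position-dependent weights from each sample's distances to the top
        # and left boundaries (cf. claims 3-4): samples near the top boundary
        # lean on the top-derived model, samples near the left on the left one.
        dist_top = np.arange(1, h + 1, dtype=float).reshape(-1, 1)
        dist_left = np.arange(1, w + 1, dtype=float).reshape(1, -1)
        w_top = dist_left / (dist_top + dist_left)
        w_left = dist_top / (dist_top + dist_left)

        # Predicted chroma = weighted sum of the model predictions (cf. claim 2).
        return w_top * pred_top + w_left * pred_left

    # Toy usage: a 4x4 block with 4-sample neighbor rows/columns.
    rng = np.random.default_rng(0)
    pred = predict_chroma(rng.integers(0, 256, (4, 4)),
                          top_luma=[100, 120, 140, 160], top_chroma=[60, 70, 80, 90],
                          left_luma=[90, 110, 130, 150], left_chroma=[50, 65, 75, 85])
    print(np.round(pred, 1))

The distance-based blend here is only one plausible weighting; claim 6's similarity-based weights or claim 5's region-wise factors would slot into the same structure.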

Prosecution Timeline

Jun 17, 2024: Application Filed
Jun 12, 2025: Non-Final Rejection (§102, §103)
Sep 16, 2025: Response Filed
Jan 06, 2026: Final Rejection (§102, §103) (current)
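The final action's reply windows run from the Jan 06, 2026 mailing date. A small sketch of the deadline arithmetic described in the action's conclusion (three-month shortened statutory period, two-month advisory-action window, six-month statutory maximum); extension fees under 37 CFR 1.136(a) apply after the shortened period:

    import calendar
    from datetime import date

    def add_months(d, months):
        # Advance a date by calendar months, clamping to month end.
        y, m = divmod(d.month - 1 + months, 12)
        year, month = d.year + y, m + 1
        return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

    mailed = date(2026, 1, 6)         # Final Rejection mailing date (timeline above)
    print(add_months(mailed, 2))      # 2026-03-06: advisory-action safe-harbor window
    print(add_months(mailed, 3))      # 2026-04-06: shortened statutory period expires
    print(add_months(mailed, 6))      # 2026-07-06: statutory maximum with extensions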

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594889: VEHICLE DISPLAY INCLUDING AN OFFSET CAMERA VIEW
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12593034: METHODS AND DEVICES FOR DECODER-SIDE INTRA MODE DERIVATION
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12593048: METHODS AND APPARATUS ON PREDICTION REFINEMENT WITH OPTICAL FLOW
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12587654: DETECTION OF AMOUNT OF JUDDER IN VIDEOS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12581087: ENCODING METHOD, DECODING METHOD, BITSTREAM, ENCODER, DECODER AND STORAGE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 83% (+2.3%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate

Based on 402 resolved cases by this examiner. Grant probability derived from career allow rate.
