Prosecution Insights
Last updated: April 19, 2026
Application No. 18/712,014

LOCAL ILLUMINATION COMPENSATION WITH CODED PARAMETERS

Status: Non-Final OA (§103)
Filed: May 21, 2024
Examiner: FEREJA, SAMUEL D
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: MediaTek Inc.
OA Round: 3 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 75% (458 granted / 614 resolved; +16.6% vs TC avg; above average)
Interview Lift: +11.8% across resolved cases with interview (moderate)
Typical Timeline: 2y 8m average prosecution; 66 applications currently pending
Career History: 680 total applications across all art units
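The headline figures above are simple ratios over the examiner's resolved cases. A throwaway sanity check, using only the numbers shown in the card (the implied Tech Center average is derived, not reported directly):

```python
# Figures from the examiner card above
granted, resolved = 458, 614

allow_rate = 100 * granted / resolved   # career allow rate, in percent
tc_average = allow_rate - 16.6          # implied Tech Center average (derived)

print(round(allow_rate, 1))  # 74.6 — displayed as 75%
print(round(tc_average, 1))  # 58.0
```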

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 64.1% (+24.1% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 614 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims
Currently, claims 1-18 are pending in the application. Claims 1, 2, 17, and 18 are amended.

Continued Examination Under 37 CFR 1.114
1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/10/2026 has been entered.

Response to Arguments / Amendments
Applicant's arguments have been fully considered but are rendered moot in view of the new ground of rejection necessitated by amendments initiated by the applicant.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-12 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20240121423, hereinafter Chen) in view of Chen et al. (US 20210243465, hereinafter Chen_423).
Regarding Claim 2, Chen discloses a video encoding method comprising: receiving samples for an original block of pixels to be encoded as a current block of a current picture of a video ([0055]-[0058], FIG. 1; [0115], FIG. 18, video information such as video data including a picture part is processed at step 1810 based on an affine motion model to produce motion compensation information); applying a linear model to a reference block to generate a prediction block for the current block ([0115], FIG. 18, at step 1820, a local illumination compensation (LIC) model is obtained by applying linear model parameters), wherein the linear model comprises ([0079], the LIC tool is based on a linear model for illumination changes, using a scaling factor a and an offset b, which are called LIC parameters; [0080], block-based Local Illumination Compensation (LIC) based on a model of illumination changes such as a first-order linear model of illumination changes using a scaling factor a and an offset b); signaling the scale parameter and the offset parameter in a bitstream ([0079], Local Illumination Compensation (LIC) predicts a variation of illumination which can occur between a predicted block and its reference block employed through motion compensated prediction and, for each inter-mode coded CU, a LIC flag is signaled or implicitly derived to indicate the usage of the LIC); and encoding the current block by using the prediction block to reconstruct the current block ([0115], FIG. 18, at step 1830, the video information is encoded to produce encoded video information based on the motion compensation information and the LIC model; [0055]-[0058], FIG. 1).

Chen does not explicitly disclose a first value derived using the scale parameter and a second value derived using the offset parameter.
Chen_423 teaches a first value derived using the scale parameter and a second value derived using the offset parameter ([0170], at least one model parameter of a linear model, the at least one model parameter comprising a pair of first and second model parameters corresponding to a scaling factor and an offset; [0177], a pair of first and second model parameters corresponding to a scaling factor and an offset, and processing the video information based on the linear model comprises processing the plurality of sub-blocks of the current coding unit based on the linear model using the scaling factor and the offset).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the encoding system of Chen with a first value derived using the scale parameter and a second value derived using the offset parameter, as taught by Chen_423 ([0170]), in order to derive LIC parameters at a decoder side and an encoder side without requiring an extra bit to encode the parameters into a bitstream, thereby avoiding additional burden on the bit rate and improving video coding efficiency (Chen_423, [0093]).

Regarding Claim 1, video decoding claim 1 uses the corresponding encoding method claimed in claim 2, and the rejections are incorporated herein for the same reasons as used above.

Regarding Claim 3, Chen in view of Chen_423 discloses the video encoding method of claim 2, and Chen further discloses using the samples of the original block and samples from a reconstructed reference frame to derive the scale parameter and the offset parameter ([0080], FIG. 10, LIC parameters (a and b) can be estimated by comparing a set of reconstructed samples surrounding the current block (“current blk”), located in a neighborhood Vcur, with a set of reconstructed samples (the set can have various sizes depending on application), located in a neighborhood Vref(MV) of the reference block in the reference picture (“ref blk”). MV represents the motion vector between the current block and the reference block. Typically, Vcur and Vref(MV) comprise samples located in the L-shape around (on the top, left, and top-left side of) the current block and reference block, respectively).

Regarding Claim 4, Chen in view of Chen_423 discloses the video encoding method of claim 3, and Chen further discloses wherein the samples of the original block that are used to derive the scale and offset parameters are to be encoded as the current block ([0080], FIG. 10, as quoted above for claim 3).

Regarding Claim 5, Chen in view of Chen_423 discloses the video encoding method of claim 3, and Chen further discloses wherein the samples of the reference frame used to derive the scale and offset parameters are referenced by a motion vector of the current block ([0080], FIG. 10, as quoted above for claim 3).

Regarding Claim 6, Chen in view of Chen_423 discloses the video encoding method of claim 2, and Chen further discloses using (i) the samples of the original block and (ii) samples referenced by a plurality of motion vectors associated with a plurality of sub-blocks of the current block to derive the scale parameter and the offset parameter ([0106], FIG. 15, for the current top-left sub-block (“current sub-blk.sub.0”), its corresponding reconstructed samples in Vref(MV.sub.0) above the reference block MV.sub.0 are used as the top-left corner patch of the “quasi-L-shape”. Then the sub-blocks located in the first row of the CU (corresponding to reference blocks MV.sub.0, MV.sub.1, MV.sub.2, and MV.sub.3) generate the top patches of the “quasi-L-shape” (Vref(MV.sub.0), Vref(MV.sub.1), Vref(MV.sub.2), Vref(MV.sub.3)). Further, the left patches of the “quasi-L-shape” are formed by the sub-blocks in the first column (corresponding to reference blocks MV.sub.0, MV.sub.4, and MV.sub.5) by using the reconstructed samples to the left of the reference blocks for the sub-blocks. An additional “Vref(MV.sub.0)” is formed to the left of MV.sub.0 in the reference. Note that Vref(MV.sub.5) is shown as a double-block because the sub-blocks in the picture have the same motion vector MV.sub.5. Using the “quasi-L-shape”, the LIC parameters can be derived and then, for example, applied to the entire CU).

Regarding Claim 7, Chen in view of Chen_423 discloses the video encoding method of claim 2, and Chen further discloses wherein values of the scale and offset parameters used by the linear model to generate the prediction block are selected from pre-defined sets of permissible values, wherein each set of permissible values has a finite range and is created by sub-sampling a consecutive sequence of numbers uniformly or non-uniformly ([0080], FIG. 10, LIC parameters (a and b) can be estimated by comparing a set of reconstructed samples surrounding the current block (“current blk”), located in a neighborhood Vcur, with a set of reconstructed samples).

Regarding Claim 8, Chen in view of Chen_423 discloses the video encoding method of claim 7, and Chen further discloses wherein an index is used to select a value from a predefined set of permissible values ([0080], FIG. 10, as quoted above for claim 7).

Regarding Claim 9, Chen discloses the video encoding method of claim 8, and Chen further discloses wherein the predefined set of permissible values are ordered with respect to the index according to probabilities of the permissible values in the set ([0080], FIG. 10, as quoted above for claim 7; [0178], the model parameter of the linear model comprises a pair of first and second model parameters corresponding to a scaling factor and an offset, and processing the video information based on the linear model comprises processing the plurality of sub-blocks of the current coding unit based on the linear model using the scaling factor and the offset).
Regarding Claim 10, Chen in view of Chen_423 discloses the video encoding method of claim 2, and Chen further discloses wherein the signaled scale and offset parameters are luma scale and offset parameters for generating a prediction block for a luma component, the method further comprising deriving and signaling chroma scale and offset parameters for generating one or more prediction blocks for one or more chroma components ([0079], a LIC flag is signaled or implicitly derived to indicate the usage of the LIC; [0121], for cross-component linear model (CCLM) in intra-coding, the luma samples are used to predict the corresponding chroma samples based on a linear model, and the parameters of the linear model can be derived or obtained in accordance with one or more aspects described herein in regard to LIC).

Regarding Claim 11, Chen in view of Chen_423 discloses the video encoding method of claim 10, and Chen further discloses wherein: the signaled luma and chroma scale parameters are coded by one or more scale parameter indices that select values for the luma and chroma scale parameters, and the signaled luma and chroma offset parameters are coded by one or more offset parameter indices that select values for the luma and chroma offset parameters ([0079], a LIC flag is signaled or implicitly derived to indicate the usage of the LIC; [0121], as quoted above for claim 10).
Regarding Claim 12, Chen in view of Chen_423 discloses the video encoding method of claim 11, and Chen further discloses wherein an offset parameter index that codes a signaled offset parameter specifies the absolute value and does not specify the sign of the signaled offset parameter ([0079], a LIC flag is signaled or implicitly derived to indicate the usage of the LIC; [0121], as quoted above for claim 10).

Regarding Claim 17, an analogous rejection as the rejection of Claim 2 applies.

Regarding Claim 18, video decoder claim 18 uses the corresponding encoding method claimed in claim 1, and the rejections are incorporated herein for the same reasons as used above.

Claims 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20240121423, hereinafter Chen) in view of Chen et al. (US 20210243465, hereinafter Chen_423) and Hu et al. (US 20180063531, hereinafter Hu).

Regarding Claim 13, Chen in view of Chen_423 discloses the video encoding method of claim 2, but does not explicitly disclose wherein signaling the scale parameter and the offset parameter comprises signaling an index for selecting an entry from one or more entries of a history-based table, each entry of the history-based table comprising historical scale and offset parameter values that are used to encode a previous block.
Hu teaches wherein signaling the scale parameter and the offset parameter comprises signaling an index for selecting an entry from one or more entries of a history-based table, each entry of the history-based table comprising historical scale and offset parameter values that are used to encode a previous block ([0089], Offset1 and Offset2 are two offsets that can be applied to bias toward some default value. How to select L(n), C(n), Offset1, and Offset2 depends on whether the linear regression is applied for LIC, luma to chroma in CCLM, or Cb to Cr in CCLM. For example, in LIC and CCLM, L(n) and C(n) are selected as in the following table: [table image omitted]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the encoding system of Chen so that signaling the scale parameter and the offset parameter comprises signaling an index, as taught by Hu ([0089]), in order to reduce encoder complexity for effective compression, ensure that intra-coding relies on spatial prediction to reduce or remove spatial redundancy within a given video frame or picture, and enable using loop filters to smooth pixel transitions and improve video quality (Hu, [0111]).

Regarding Claim 14, Chen in view of Chen_423 and Hu discloses the video encoding method of claim 13, and Hu further teaches updating the history-based table with a new entry comprising a scale parameter value and an offset parameter value that are used by the linear model to generate the prediction block ([0091], deriving the value of offset1 and offset2 depends on which method the linear regression process is used for, e.g., the parameter t in equation (14). In one example, one specific way to derive the value of offset1 and offset2 is predefined for each tool the linear regression process is used for. In another example, a few alternative ways to derive the value of offset1 and offset2 are predefined for each tool the linear regression process is used for, and indices or flags are signaled in the bitstream to indicate which way is applied, respectively, for each tool). The same rationale and motivation of obviousness apply as used above for claim 13.

Regarding Claim 15, Chen in view of Chen_423 and Hu discloses the video encoding method of claim 14, and Hu further teaches signaling one or more delta values that are to be added to the historical scale and offset parameter values stored in the selected entry of the history-based table ([0092], when bi-prediction and LIC are both applied to a block, in equation (11), the right shifting by LIC to the predictor can be applied before the left shifting by c bits in LIC. When doing this, the offset applied in equation (10) will be added to b. In addition, in the unified process in (14), the input t can be used to define whether bi-prediction is applied together with LIC; see the following table as an example: [table image omitted]). The same rationale and motivation of obviousness apply as used above for claim 13.

Regarding Claim 16, Chen in view of Chen_423 and Hu discloses the video encoding method of claim 15, and Hu further teaches wherein the signaled one or more delta values comprise separate delta values for different color components ([0092], as quoted above for claim 15). The same rationale and motivation of obviousness apply as used above for claim 13.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samuel D Fereja, whose telephone number is (469) 295-9243. The examiner can normally be reached 8AM-5PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, DAVID CZEKAJ, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAMUEL D FEREJA/
Primary Examiner, Art Unit 2487
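The technique at the center of the rejection is a first-order LIC model, in which the prediction is formed as a*ref + b with the parameters estimated from L-shaped neighborhoods of reconstructed samples (Chen, [0080]), optionally managed through a history-based parameter table (claims 13-16). The sketch below is illustrative only, not the claimed method or any reference's actual implementation: the least-squares fit, the clipping, and the toy history table are simplifying assumptions.

```python
import numpy as np

def derive_lic_params(cur_neighbors, ref_neighbors):
    """Least-squares fit of the first-order LIC model cur ~ a*ref + b,
    from reconstructed samples in the L-shaped neighborhoods (top, left,
    top-left) of the current block and of its reference block."""
    x = np.asarray(ref_neighbors, dtype=np.float64)
    y = np.asarray(cur_neighbors, dtype=np.float64)
    n = x.size
    denom = n * np.sum(x * x) - np.sum(x) ** 2
    if denom == 0:
        return 1.0, 0.0  # degenerate neighborhood: fall back to identity
    a = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / denom
    b = (np.sum(y) - a * np.sum(x)) / n
    return a, b

def apply_lic(ref_block, a, b, bit_depth=8):
    """Generate the prediction block by applying the linear model,
    rounding and clipping to the sample range."""
    pred = a * np.asarray(ref_block, dtype=np.float64) + b
    return np.clip(np.rint(pred), 0, (1 << bit_depth) - 1).astype(np.int64)

class LicHistoryTable:
    """Toy history-based (scale, offset) table: the encoder signals an
    index selecting a stored entry, optionally plus delta values
    (hypothetical structure, illustrating the claim 13-16 scheme)."""
    def __init__(self, max_entries=6):
        self.entries = []
        self.max_entries = max_entries

    def update(self, a, b):
        # Newest entry first; evict the oldest beyond the table size.
        self.entries.insert(0, (a, b))
        del self.entries[self.max_entries:]

    def select(self, index, delta_a=0.0, delta_b=0.0):
        a, b = self.entries[index]
        return a + delta_a, b + delta_b
```

For example, neighbor samples generated by cur = 2*ref + 1 yield a = 2, b = 1, and apply_lic([10, 100], 2, 1) produces [21, 201].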

Prosecution Timeline

May 21, 2024: Application Filed
May 23, 2025: Non-Final Rejection — §103
Aug 27, 2025: Response Filed
Nov 11, 2025: Final Rejection — §103
Feb 10, 2026: Request for Continued Examination
Feb 26, 2026: Response after Non-Final Action
Mar 31, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597264: Method for Calibrating an Assistance System of a Civil Motor Vehicle (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598318: METHOD AND SYSTEM-ON-CHIP FOR PERFORMING MEMORY ACCESS CONTROL WITH LIMITED SEARCH RANGE SIZE DURING VIDEO ENCODING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593018: SYSTEM AND METHOD FOR CONTROLLING PERCEPTUAL THREE-DIMENSIONAL ELEMENTS FOR DISPLAY (granted Mar 31, 2026; 2y 5m to grant)
Patent 12593036: METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591123: METHOD FOR DETERMINING SLOPE OF SLIDE IN SLIDE SCANNING DEVICE, METHOD FOR CONTROLLING SLIDE SCANNING DEVICE AND SLIDE SCANNING DEVICE USING THE SAME (granted Mar 31, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 86% (+11.8%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 614 resolved cases by this examiner. Grant probability derived from career allow rate.
