Prosecution Insights
Last updated: April 19, 2026
Application No. 19/009,847

METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING

Status: Non-Final OA (§102, §103)
Filed: Jan 03, 2025
Examiner: NIRJHAR, NASIM NAZRUL
Art Unit: 2896
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Bytedance Inc.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (379 granted / 512 resolved; +6.0% vs TC avg, above average)
Interview Lift: strong, +18.7% across resolved cases with interview (93% with an interview vs 74% overall)
Typical Timeline: 2y 6m average prosecution; 37 applications currently pending
Career History: 549 total applications across all art units

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§103: 75.4% (+35.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 512 resolved cases.

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is responsive to the correspondence filed on 1/3/25. Claims 1-20 are presented for examination.

IDS Considerations

The information disclosure statement (IDS) submitted on 1/3/25 is being considered by the examiner, as the submission is in compliance with the provisions of 37 CFR 1.97.

Claim Rejections - 35 USC § 102

The following is a quotation of 35 U.S.C. 102(a)(1)/(a)(2), which forms the basis for the anticipation rejections set forth in this Office action:

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4 and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Esenlik (U.S. Pub. No. 20200260119 A1).

Examiner's Note: Video encoding and decoding are performed using the same algorithm applied in inverse.

Regarding claims 1 and 18-20: Esenlik teaches a method for video processing, comprising: (Esenlik [0067]: FIG. 1 is a block diagram showing an exemplary structure of an encoder for encoding video signals) obtaining, for a conversion between a current video block of a video and a bitstream of the video (Esenlik [0084]: in an embodiment, an encoder is provided for encoding a signal into a bitstream carrying signal samples and control information relating to encoding of the signal samples), values for a set of adjusting parameters associated with values for a set of model parameters of a local illumination compensation (LIC) model for coding the current video block (Esenlik [0172]: local illumination compensation is a tool which may be used during video encoding and decoding (cf. Section 2.3.5 of the Algorithm description of Joint Exploration Test Model 7 (JEM7), JVET-G1001 by Jianle Chen et al., available at http://phenix.it-sudparis.eu/jvet/). Local Illumination Compensation (LIC) is based on a linear model for illumination changes, using a scaling factor a and an offset b. It is enabled or disabled adaptively for each inter-mode coded coding unit (CU). When LIC applies to a CU, a least square error method is employed to derive the parameters a and b using the neighbouring samples of the current CU and their corresponding reference samples); updating the values for the set of model parameters based on the values for the set of adjusting parameters (Esenlik [0172]: the IC parameters are derived and applied [updating the values] for each prediction direction separately. The general formula for deriving the weights a and b may be minimizing the expression SUM(a*P_c + b - P_r) over all P_c and P_r shown in FIG. 13. FIG. 13 shows a current coding unit (CU) with the pixel positions (samples) P_c which are immediately adjacent and already encoded/decoded. P_c are pixels subsampled 2:1, meaning that only every second position is taken. P_r are the samples located at the corresponding positions around the reference block); and performing the conversion (Esenlik [0174]-[0176]: the encoding and decoding of the present disclosure may be applied to the syntax element LIC_flag, which switches LIC on and off. Table 1 defines the association in which LIC_flag = 0 means LIC is set off (not applied) and LIC_flag = 1 means LIC is applied (set on); Table 2 defines the opposite association) based on the updated values for the set of model parameters (Esenlik [0172], quoted above).

Regarding claim 2: Esenlik teaches the method of claim 1, wherein the set of model parameters comprises at least one of the following: a scale of the LIC model, or an offset of the LIC model. (Esenlik [0172], quoted above: LIC is based on a linear model using a scaling factor a and an offset b.)

Regarding claim 3: Esenlik teaches the method of claim 2, wherein the set of adjusting parameters comprises at least one of the following: a scale adjusting parameter for updating a value for the scale, or an offset adjusting parameter for updating a value for the offset. (Esenlik [0172], quoted above.)

Regarding claim 4: Esenlik teaches the method of claim 3, wherein updating the values for the set of model parameters comprises at least one of: updating a value for the scale based on a sum of the value for the scale and a value for the scale adjusting parameter, or updating a value for the offset based on a sum of the value for the offset and a value for the offset adjusting parameter. (Esenlik [0172], quoted above; further: more specifically, the subsampled (2:1 subsampling) neighbouring samples of the CU and the corresponding samples (identified by motion information of the current CU or sub-CU) in the reference picture are used, and the IC parameters are derived and applied for each prediction direction separately.)

Regarding claim 16: Esenlik teaches the method of claim 1, wherein the conversion includes encoding the current video block into the bitstream. (Esenlik [0067]: FIG. 1 shows an exemplary structure of an encoder for encoding video signals.)

Regarding claim 17: Esenlik teaches the method of claim 1, wherein the conversion includes decoding the current video block from the bitstream. (Esenlik [0068]: FIG. 2 shows an exemplary structure of a decoder for decoding video signals.)
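To make the technique at issue in claims 1-4 concrete, the following is a minimal Python sketch, written for this summary rather than taken from the OA or the cited references, of the least-squares derivation of the LIC scale a and offset b described in Esenlik [0172], followed by the additive parameter update recited in claim 4. The sample arrays and the adjustment deltas (delta_a, delta_b) are illustrative assumptions.

    # Least-squares fit of the LIC linear model p_r ~ a*p_c + b (cf. Esenlik
    # [0172]); sample data and adjustment deltas are invented for illustration.

    def derive_lic_params(p_c, p_r):
        """Closed-form minimizer of sum((a*c + b - r)^2) over (a, b)."""
        n = len(p_c)
        s_c, s_r = sum(p_c), sum(p_r)
        s_cc = sum(c * c for c in p_c)
        s_cr = sum(c * r for c, r in zip(p_c, p_r))
        denom = n * s_cc - s_c * s_c
        if denom == 0:                       # flat neighbourhood: offset-only model
            return 1.0, (s_r - s_c) / n
        a = (n * s_cr - s_c * s_r) / denom   # scale
        b = (s_r - a * s_c) / n              # offset
        return a, b

    # Neighbouring samples of the current CU, already 2:1 subsampled
    # (every second position), and the co-located reference samples.
    p_c = [100, 104, 98, 110, 96, 102]
    p_r = [110, 115, 108, 121, 106, 112]
    a, b = derive_lic_params(p_c, p_r)

    # Claim 4's additive update: model parameter plus adjusting parameter.
    delta_a, delta_b = 0.05, -1.0            # hypothetical signalled adjustments
    a, b = a + delta_a, b + delta_b
    prediction = [a * s + b for s in p_c]    # LIC-compensated prediction

The closed form is the standard simple linear regression fit; codec implementations compute the same sums in integer arithmetic, but the structure of the derivation is unchanged.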
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5-10 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Esenlik (U.S. Pub. No. 20200260119 A1) in view of Rusanovskyy (U.S. Pub. No. 20200288158 A1).

Regarding claim 5: Esenlik teaches the method of claim 1, wherein performing the conversion based on the updated values comprises: obtaining target values for the set of model parameters by shifting or clipping at least one of the updated values; and performing the conversion based on the target values. (Part of an OR condition; no rejection is required for the other parts.) Esenlik does not explicitly teach wherein the current video block is coded with one of the following: an advanced motion vector prediction (AMVP) mode, an affine-AMVP mode, a merge mode, or a sub-block merge mode; or wherein the set of adjusting parameters comprises a first adjusting parameter, and obtaining the values for the set of adjusting parameters comprises: selecting a value for the first adjusting parameter from a plurality of candidate values for the first adjusting parameter based on at least one of the following: a size of the current video block, or a quantization parameter (QP) value for coding the current video block. However, Rusanovskyy teaches wherein the current video block is coded with one of the following: an advanced motion vector prediction (AMVP) mode, an affine-AMVP mode, a merge mode, or a sub-block merge mode. (Rusanovskyy [0025]: FIG. 10B is a conceptual diagram illustrating sub-blocks where OBMC applies for sub-blocks in advanced motion vector prediction (AMVP) mode. [0076]: Video encoder 200 encodes data representing the prediction mode for a current block. For example, for inter-prediction modes, video encoder 200 may encode data representing which of the various available inter-prediction modes is used, as well as motion information for the corresponding mode. For uni-directional or bi-directional inter-prediction, for example, video encoder 200 may encode motion vectors using advanced motion vector prediction (AMVP) or merge mode. Video encoder 200 may use similar modes to encode motion vectors for affine motion compensation mode.) Claim 5 recites many OR conditions; a rejection is required only for the following one. Rusanovskyy also teaches wherein the set of adjusting parameters comprises a first adjusting parameter, and obtaining the values for the set of adjusting parameters comprises: selecting a value for the first adjusting parameter from a plurality of candidate values for the first adjusting parameter based on at least one of the following: a size of the current video block, or a quantization parameter (QP) value for coding the current video block. (Rusanovskyy [0109]: Transform processing unit 206 applies one or more transforms to the residual block to generate a block of transform coefficients (referred to herein as a "transform coefficient block"). Transform processing unit 206 may apply various transforms to a residual block to form the transform coefficient block. For example, transform processing unit 206 may apply a discrete cosine transform (DCT), a directional transform, a Karhunen-Loeve transform (KLT), or a conceptually similar transform to a residual block. In some examples, transform processing unit 206 may perform multiple transforms to a residual block, e.g., a primary transform and a secondary transform, such as a rotational transform. In some examples, transform processing unit 206 does not apply transforms to a residual block. [0110]: Quantization unit 208 may quantize the transform coefficients in a transform coefficient block to produce a quantized transform coefficient block. Quantization unit 208 may quantize transform coefficients of a transform coefficient block according to a quantization parameter (QP) value associated with the current block. Video encoder 200 (e.g., via mode selection unit 202) may adjust the degree of quantization applied to the coefficient blocks associated with the current block by adjusting the QP value associated with the CU. Quantization may introduce loss of information, and thus, quantized transform coefficients may have lower precision than the original transform coefficients produced by transform processing unit 206. Part of an OR condition; no rejection is required for the other parts.)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Esenlik by further incorporating Rusanovskyy in video/camera technology. One would be motivated to do so in order to code the current video block with one of the following: an advanced motion vector prediction (AMVP) mode, an affine-AMVP mode, a merge mode, or a sub-block merge mode. This functionality would improve efficiency with predictable results.

Regarding claim 6: Esenlik teaches the method of claim 5 but does not explicitly teach wherein the number of candidate values in the plurality of candidate values is dependent on at least one of the following: the size of the current video block, or the QP value for coding the current video block. However, Rusanovskyy teaches this limitation. (Rusanovskyy [0110], quoted above.)
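Claim 6 turns on letting the number of candidate values depend on block size or QP. As a purely illustrative sketch under assumed thresholds (none of these numbers or names come from the cited art):

    # Hypothetical selector: larger, lower-QP blocks justify the rate cost of
    # searching a finer set of candidate scale adjustments.
    def candidate_deltas(width, height, qp):
        deltas = [0.0, 0.125, -0.125]
        if width * height >= 256 and qp < 32:
            deltas += [0.25, -0.25]
        return deltas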
Regarding claim 7: Esenlik teaches the method of claim 1, wherein the values for the set of adjusting parameters are dependent on a level of the current video block (part of an OR condition; no rejection is required); or wherein the video further comprises a first video block coded with an LIC mode before the current video block, and obtaining the values for the set of adjusting parameters comprises: determining the values for the set of adjusting parameters based on statistics of the first video block and values for the set of adjusting parameters for coding the first video block (part of an OR condition; no rejection is required); or wherein the video is associated with a plurality of color planes, and values for the set of model parameters for coding video blocks associated with at least one of the plurality of color planes are updated; or wherein the set of adjusting parameters comprises a second adjusting parameter, and a value for the second adjusting parameter or an indication of the value for the second adjusting parameter is indicated in the bitstream at one of the following: a picture level, a slice level, a tile level, a coding tree unit (CTU) level, a coding unit (CU) level, a prediction unit (PU) level, or a transform unit (TU) level (part of an OR condition; no rejection is required). Esenlik does not explicitly teach wherein the video further comprises a second video block different from the current video block, and information regarding whether to update values for the set of model parameters for coding the second video block is dependent on at least one of the following: a sequence of the second video block, a frame of the second video block, a size of the second video block, or a QP value for coding the second video block. However, Rusanovskyy teaches this limitation. (Rusanovskyy [0110], quoted above.)

Regarding claim 8: Esenlik teaches the method of claim 1, wherein at least one additional candidate is added into the MV candidate list. (Esenlik [0135]: The lists L0 and L1 may be defined in the standard and fixed. However, more flexibility in coding/decoding may be achieved by signaling them at the beginning of the video sequence. Accordingly, the encoder may configure the lists L0 and L1 with particular reference pictures ordered according to the index. The L0 and L1 lists may have the same fixed size. There may be more than two lists in general. The motion vector may be signaled directly by the coordinates in the reference picture. Alternatively, as also specified in H.265, a list of candidate motion vectors may be constructed and an index associated in the list with the particular motion vector can be transmitted. The motion vectors, the lists, the type of prediction and the like are all syntax elements for the encoding of which the present disclosure may also be applied.) Esenlik does not explicitly teach wherein a motion vector (MV) candidate list for the current video block comprises at least one candidate coded with an LIC mode. However, Rusanovskyy teaches this limitation. (Rusanovskyy [0249]: A LIC flag may be included as a part of motion information in addition to MVs and reference indices. However, when a merge candidate list is constructed, a video decoder (e.g., video encoder 200 or video decoder 300) may inherit the LIC flag from the neighbor blocks for merge candidates. The video coder may not use LIC for motion vector pruning.)

Regarding claim 9: Esenlik teaches the method of claim 8, wherein the current video block is coded with one of the following: a merge mode with motion vector difference (MMVD) mode, a combined inter and intra prediction (CIIP) mode, an affine-merge mode, a sub-block merge mode, or a template matching merge mode; or wherein the MV candidate list is one of the following: a merge list, an affine-merge list, or a template matching merge list; or wherein the at least one additional candidate comprises a first additional candidate corresponding to a first candidate in the at least one candidate, and the first additional candidate is determined based on information of the first candidate and reference values for the set of model parameters, the reference values being obtained by adjusting values for the set of model parameters for coding the first candidate with the set of adjusting parameters. (Part of an OR condition; no rejection is required.) Esenlik does not explicitly teach wherein the current video block is coded with a merge mode. However, Rusanovskyy teaches wherein the current video block is coded with a merge mode. (Rusanovskyy [0076], quoted above.)

Regarding claim 10: Esenlik teaches the method of claim 9, wherein values for the set of adjusting parameters are selected and used in a way similar to the values for the set of adjusting parameters for updating the values for the set of model parameters (Esenlik [0172], quoted above); or wherein the at least one candidate comprises a plurality of candidates coded with the LIC mode, and the at least one additional candidate is generated based on a part of the plurality of candidates; or wherein the current video block is coded with a merge mode, and values for the set of adjusting parameters are selected based on a type of the merge mode; or wherein the first additional candidate is added into the MV candidate list after the first candidate; or wherein the first additional candidate immediately follows the first candidate in the MV candidate list; or wherein the at least one additional candidate comprises a plurality of additional candidates determined based on the first additional candidate with different values for the set of adjusting parameters; or wherein the at least one candidate comprises a plurality of candidates coded with an LIC mode, and one or more additional candidates are determined for each of the plurality of candidates and added into the MV candidate list. (Part of an OR condition; no rejection is required for the other parts.)
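Claims 8-10 concern an MV candidate list in which merge candidates carry an inherited LIC flag (cf. Rusanovskyy [0249]) and each LIC-coded candidate is immediately followed by an additional candidate with adjusted LIC parameters. A minimal Python sketch follows; the Candidate type, field names, and deltas are assumptions made for illustration, not the claimed or cited implementation.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Candidate:
        mv: tuple            # motion vector (x, y)
        ref_idx: int
        lic: bool            # LIC flag inherited from the neighbour block
        scale: float = 1.0   # LIC model parameters carried with the candidate
        offset: float = 0.0

    def build_merge_list(neighbours, d_scale=0.125, d_offset=2.0):
        out = []
        for cand in neighbours:
            out.append(cand)
            if cand.lic:
                # Extra candidate with adjusted LIC parameters, inserted
                # immediately after the original (cf. claims 8 and 10).
                out.append(replace(cand,
                                   scale=cand.scale + d_scale,
                                   offset=cand.offset + d_offset))
        return out

    neighbours = [Candidate(mv=(4, -2), ref_idx=0, lic=True),
                  Candidate(mv=(0, 1), ref_idx=1, lic=False)]
    merge_list = build_merge_list(neighbours)   # 3 entries: the LIC candidate is doubled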
Regarding claim 13: Esenlik teaches the method of claim 1 but does not explicitly teach wherein information regarding at least one of the following is dependent on a dimension of the current video block or a QP for coding the current video block: whether to indicate a first syntax element in the bitstream, the first syntax element indicating first information regarding whether the values for the set of adjusting parameters are used for coding the current video block, or how to indicate the first syntax element in the bitstream. However, Rusanovskyy teaches whether to indicate a first syntax element in the bitstream (Rusanovskyy [0126]: Entropy decoding unit 302 may receive encoded video data from the CPB and entropy decode the video data to reproduce syntax elements. Prediction processing unit 304, inverse quantization unit 306, inverse transform processing unit 308, reconstruction unit 310, and filter unit 312 may generate decoded video data based on the syntax elements extracted from the bitstream), and the first syntax element indicating first information regarding whether the values for the set of adjusting parameters are used for coding the current video block, or how to indicate the first syntax element in the bitstream (Rusanovskyy [0110], quoted above).

Regarding claim 14: Esenlik teaches the method of claim 13, wherein the first information is dependent on whether the current video block is uni-predicted or bi-predicted; or wherein the current video block is coded with a merge prediction mode, and the first information is dependent on a type of the merge prediction mode; or wherein the first syntax element is indicated in the bitstream for at least one of the following: a Y color plane, a U color plane, or a V color plane. (Part of an OR condition; no rejection is required.) Esenlik does not explicitly teach wherein the first syntax element is indicated in the bitstream, or wherein the first information is dependent on whether the current video block is coded with an AMVP mode or a merge mode. However, Rusanovskyy teaches these limitations. (Rusanovskyy [0076], quoted above.)

Regarding claim 15: Esenlik teaches the method of claim 13, wherein if a sum of a width and a height of the current video block is larger than, smaller than, larger than or equal to, or smaller than or equal to a predefined threshold, the first syntax element is indicated in the bitstream; or wherein if the current video block is smaller than, larger than or equal to, or smaller than or equal to a predefined size, the first syntax element is indicated in the bitstream; or wherein if the QP is greater than, smaller than, larger than or equal to, or smaller than or equal to a predefined threshold, the first syntax element is indicated in the bitstream. (Part of an OR condition; no rejection is required.) Esenlik does not explicitly teach wherein if the current video block is larger than a predefined size, the first syntax element is indicated in the bitstream. However, Rusanovskyy teaches this limitation. (Rusanovskyy [0136]: In this manner, video decoder 300 represents an example of a video decoding device including a memory configured to store video data, and one or more processing units implemented in circuitry and configured to: generate prediction information for a current block; determine a filter index value based on a syntax element for the video data; determine, based on one or more of a height of the current block or a width of the current block and based on the filter index value, a filter type; and filter the prediction information using a filter corresponding to the filter type to generate filtered prediction information. In this way, video decoder 300 may inverse binarize the filter type differently depending on a block size parameter (e.g., ratio of height to width, block size, etc.). Configuring video decoder 300 (e.g., prediction processing unit 304) for block dependent signaling may reduce an amount of information signaled in a bitstream compared to video decoders that do not use block dependent signaling.)
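Claim 15 enumerates alternative size- and QP-based gates on whether the first syntax element is signalled. A minimal sketch of one such gate combination; the threshold values are invented for illustration:

    def signal_first_syntax_element(width, height, qp,
                                    size_thresh=16, sum_thresh=32, qp_thresh=30):
        # Claim 15 recites many alternative conditions; any one gate suffices.
        return ((width > size_thresh and height > size_thresh)  # block larger than a size
                or (width + height > sum_thresh)                # sum-of-dimensions gate
                or (qp > qp_thresh))                            # QP gate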
Allowable Subject Matter

Regarding claims 11-12: Claims 11-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the limitations of these dependent claims are not obvious from the prior art search when all the limitations of the independent and intervening claims are taken into account.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NASIM N NIRJHAR, whose telephone number is (571) 272-3792. The examiner can normally be reached Monday - Friday, 8 am to 5 pm ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William F Kraig, can be reached at (571) 272-8660. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NASIM N NIRJHAR/
Primary Examiner, Art Unit 2896

Prosecution Timeline

Jan 03, 2025
Application Filed
Jan 11, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598324: DEPTH DIFFERENCES IN PLACE OF MOTION VECTORS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593131: VELOCITY MATCHING IMAGING OF A TARGET ELEMENT (granted Mar 31, 2026; 2y 5m to grant)
Patent 12593074: SYSTEMS AND METHODS OF BUFFERING IMAGE DATA BETWEEN A PIXEL PROCESSOR AND AN ENTROPY CODER (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587662: METHOD, APPARATUS AND STORAGE MEDIUM FOR IMAGE ENCODING/DECODING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587628: DISPLAY DEVICE AND METHOD OF DRIVING THE SAME (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 93% (+18.7%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 512 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month