Prosecution Insights
Last updated: April 19, 2026
Application No. 18/615,916

METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING

Final Rejection (§103)

Filed: Mar 25, 2024
Examiner: CHIO, TAT CHI
Art Unit: 2486
Tech Center: 2400 (Computer Networks)
Assignee: Bytedance Inc.
OA Round: 3 (Final)

Grant Probability: 73% (Favorable)
Predicted OA Rounds: 4-5
Predicted Time to Grant: 3y 2m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 73%, above average (610 granted / 836 resolved; +15.0% vs TC avg)
Interview Lift: +16.6%, a strong lift for resolved cases with an interview vs. without
Typical Timeline: 3y 2m average prosecution; 49 applications currently pending
Career History: 885 total applications across all art units

Statute-Specific Performance

§101:  8.7% (-31.3% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112:  7.2% (-32.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 836 resolved cases.
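The headline examiner figures above follow from simple arithmetic on the raw counts; a minimal sketch (variable names are illustrative, not from the report's data source):

```python
# Raw counts stated in this report
granted, resolved = 610, 836

career_allow_rate = granted / resolved         # ~0.730, shown as 73%
tc_delta = 0.150                               # report: +15.0% vs TC average
implied_tc_avg = career_allow_rate - tc_delta  # ~0.580

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Implied TC 2400 average: {implied_tc_avg:.1%}")
```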

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2019/0246110 A1) in view of Lim et al. (US 2024/0372982 A1).

Consider claim 1, Xu teaches a method for video processing, comprising: determining, during a conversion between a target video block of a video and a bitstream of the video ([0065], [0090], Fig.
6-7), an intra block copy (IBC)-based mode to be applied for the target video block ([0102] – [0106], [0112] – [0119]); and performing the conversion based on the IBC-based mode ([0102] – [0106], [0112] – [0119]).

However, Xu does not explicitly teach the IBC-based mode being on an intra template matching for IBC mode (TM_IBC), wherein a derived block vector (BV) by the TM_IBC is used as a BV prediction candidate for IBC non-merge mode. Lim teaches the IBC-based mode being on an intra template matching for IBC mode (TM_IBC) ([0842] – [0850]), wherein a derived block vector (BV) by the TM_IBC is used as a BV prediction candidate for IBC non-merge mode ([0842] – [0850]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of using a BV prediction candidate derived by TM_IBC for IBC non-merge mode because such incorporation would improve the performance of motion prediction upon constructing a merge list for a target block. [0012].
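For orientation, the TM_IBC technique at issue (deriving a BV by matching the current block's L-shaped template against the already-reconstructed area of the same picture) can be sketched as follows. This is a minimal illustration, not code from the application or the cited references; the search constraints are simplified and all names are assumptions.

```python
import numpy as np

def derive_bv_tm_ibc(recon, x, y, bw, bh, search=8, tpl=2):
    """Pick the block vector whose L-shaped template (tpl rows above and
    tpl columns left of the candidate block) best matches the current
    block's template by SAD. Simplified: candidates are restricted to
    blocks lying entirely above the current block."""
    def template(px, py):
        top = recon[py - tpl:py, px:px + bw]
        left = recon[py:py + bh, px - tpl:px]
        return np.concatenate([top.ravel(), left.ravel()])

    cur = template(x, y)
    best_bv, best_sad = None, float("inf")
    for dy in range(-search, 0):
        for dx in range(-search, search + 1):
            cx, cy = x + dx, y + dy
            # keep the candidate (and its template) inside the reconstructed area
            if cx - tpl < 0 or cy - tpl < 0 or cx + bw > recon.shape[1] or cy + bh > y:
                continue
            sad = int(np.abs(cur - template(cx, cy)).sum())
            if sad < best_sad:
                best_bv, best_sad = (dx, dy), sad
    return best_bv  # usable, e.g., as a BV prediction candidate for IBC non-merge mode
```

On a picture with an 8-sample periodic texture, the search locks onto the repeated content one period up and to the left.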
Consider claim 2, Xu teaches the method, wherein the derived BV by the TM_IBC is the only candidate for the IBC non-merge mode if the derived BV is available, or wherein the derived BV by the TM_IBC is the k-th candidate for the IBC non-merge mode if the derived BV is available, and wherein the k-th candidate is the first candidate, or wherein a syntax element indicates whether the derived BV by the TM_IBC is used as a BV prediction candidate for the IBC non-merge mode, and wherein the syntax element is indicated only if the TM_IBC mode is applied, or wherein the IBC-based mode is further based on at least one of the following: an IBC mode based on affine motion compensated prediction (Affine_IBC) (affine models can be applied in the motion compensation where the reference picture is a previously coded (encoded/decoded) picture, and the affine models can be applied in the intra block copy mode (also referred to as intra picture block compensation). When performing intra picture block compensation, a reference block from the available reference area of the same picture will apply affine transformation before using as the predictor for a current coding block. [0102] – [0119]), an affine IBC merge mode with block vector difference (MBVD), an intra template matching for IBC mode (TM_IBC), wherein a derived block vector (BV) by the TM_IBC is used as based candidates for the MBVD, an IBC prediction mode based on multi-hypothesis, an IBC mode based on overlapped block motion compensation (OBMC), an IBC mode based on geometric partitioning with the MBVD, or an IBC mode based on geometric partitioning with template matching (TM). 
Consider claim 3, Xu teaches wherein an affine motion model is utilized to predict the target video block from the reconstructed samples or pixels within the same picture, or wherein the Affine_IBC mode is to be applied, and wherein an affine motion field of the target video block is described by motion information of two control points or three control points, wherein the affine motion field of the target video block is described by the 4 parameter affine model or 6 parameter affine model, or wherein the Affine_IBC mode comprises at least one of an affine IBC merge mode or an affine IBC advanced motion vector prediction (AMVP) mode, wherein the affine IBC merge mode is performed similar as an affine merge mode, and wherein the affine IBC AMVP mode is performed similar as an affine AMVP mode, or BV for a pixel of the target video block or for a sub-block of the target video block derived from an affine model is rounded or clipped to the integer precision (The resolution of a block vector, in some implementations, is restricted to integer positions. [0103] – [0108], [0124] – [0127]), or wherein a BV prediction of a control point inherited from a neighbouring video block of the target video block or derived from an affine model is rounded or clipped to the integer precision (The resolution of a block vector, in some implementations, is restricted to integer positions. [0103] – [0108], [0124] – [0127]), or wherein a prediction refinement with optical flow is used for the IBC mode based on affine motion compensated prediction, or wherein the Affine_IBC mode is a merge mode wherein no BV difference (BVD) is coded, or wherein the Affine_IBC mode is an inter mode wherein an indication of a BV difference (BVD) is coded, or wherein the Affine_IBC mode is a merge mode wherein indications of BV differences (BVDs) within an affine BVD candidate list are coded or derived, and wherein an index of a BVD is coded. 
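The 4-parameter and 6-parameter affine models recited for Affine_IBC can be sketched as follows, using the standard control-point formulation. This is a hedged illustration; the function name and the final integer rounding step follow the claim language, not code from the references.

```python
def affine_bv(x, y, w, h, cpmv, six_param=False):
    """Derive the block vector at sample (x, y) of a w x h block from
    control-point motion vectors: 2 control points for the 4-parameter
    model, 3 for the 6-parameter model. The result is rounded to integer
    precision, as recited for Affine_IBC."""
    (v0x, v0y), (v1x, v1y) = cpmv[0], cpmv[1]
    if not six_param:
        # 4-parameter model: top-left and top-right control points
        bvx = v0x + (v1x - v0x) * x / w - (v1y - v0y) * y / w
        bvy = v0y + (v1y - v0y) * x / w + (v1x - v0x) * y / w
    else:
        # 6-parameter model adds an independent bottom-left control point
        v2x, v2y = cpmv[2]
        bvx = v0x + (v1x - v0x) * x / w + (v2x - v0x) * y / h
        bvy = v0y + (v1y - v0y) * x / w + (v2y - v0y) * y / h
    return round(bvx), round(bvy)  # round/clip BV to integer precision
```

With identical control-point vectors the model degenerates to pure translation, e.g. `affine_bv(3, 5, 16, 16, [(-8, -4), (-8, -4)])` returns `(-8, -4)`.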
Consider claim 18, Xu teaches the conversion includes encoding the target video block into the bitstream ([0065], [0090], Fig. 6-7), or wherein the conversion includes decoding the target video block from the bitstream ([0093] – [0099], Fig. 8).

Consider claim 19, claim 19 recites an apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon ([0100], [0145], [0149] – [0151]), wherein the instructions upon execution by the processor, cause the processor to perform the method recited in claim 1 (see rejection of claim 1).

Consider claim 20, claim 20 recites a non-transitory computer-readable storage medium storing instructions that cause a processor ([0100], [0145], [0149] – [0151]) to perform the method recited in claim 1 (see rejection of claim 1).

Consider claim 21, Xu teaches storing the bitstream in a non-transitory computer-readable recording medium ([0044], [0048], [0073]).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2019/0246110 A1) in view of Lim et al. (US 2024/0372982 A1) and Liang et al. (US 2022/0014754 A1).

Consider claim 4, Xu teaches all the limitations in claim 1 and a BV for a pixel of the target video block or for a sub-block of the target video block derived from an affine model is rounded or clipped to an integer precision (The resolution of a block vector, in some implementations, is restricted to integer positions. [0103] – [0108], [0124] – [0127]) or wherein a BV prediction of a control point inherited from a neighbouring block of the target video block or derived from an affine model is rounded or clipped to the integer precision (The resolution of a block vector, in some implementations, is restricted to integer positions.
[0103] – [0108], [0124] – [0127]), but does not explicitly teach block vectors of control points used in the Affine_IBC mode may be derived according to one or more BVs derived from an affine BV candidate list and one or more BV differences (BVDs) selected from a given affine BVD candidate list, and wherein the affine IBC MBVD mode is to be applied, and wherein an affine IBC merge candidate is selected, and BVs of control points are further refined by indications of BVD information, and wherein the BVD information for the BVs of all control points are the same, or wherein the BVD information for the BVs of at least two control points are different, or wherein a BV for a pixel of the target video block or for a sub-block of the target video block derived from an affine model is rounded or clipped to an integer precision, or wherein a BV prediction of a control point inherited from a neighbouring block of the target video block or derived from an affine model is rounded or clipped to the integer precision, or wherein the BVD information for the BVs of control points in the Affine_IBC_MBVD mode is different from that utilized for a translational MBVD methods, or wherein the affine BVD candidate list only includes integer BVD candidates. 
Liang teaches block vectors of control points used in the Affine_IBC mode may be derived according to one or more BVs derived from an affine BV candidate list and one or more BV differences (BVDs) selected from a given affine BVD candidate list (the first creation unit is configured to, when the coding mode is the first affine motion mode, sequentially access neighbor blocks of the coding block according to a first preset sequence to obtain a first candidate set, the first candidate set comprising two neighbor blocks, and the two neighbor blocks being coded in the first affine motion mode or the second affine motion mode, and create a first candidate list corresponding to the first affine motion mode based on the first candidate set, a first reference block in the first candidate list being obtained from the first candidate set. In an example, the first estimation unit is configured to, when the coding mode is the first affine motion mode, traverse the first candidate list to acquire Motion Vector Predictors (MVPs) of at least two control points corresponding to each first reference block in the first candidate list, perform affine motion estimation calculation on the coding block by taking the MVPs of the at least two control points corresponding to each first reference block as starting points of affine search to obtain Motion Vectors (MVs) of control points at corresponding positions of the coding block, and acquire a first coding parameter corresponding to the coding block from the MVs, the first coding parameter representing a group of coding parameters that are obtained in the first affine motion mode for the coding block and have a minimum coding prediction cost. The coding prediction unit is configured to perform predictive coding on the coding block based on the first coding parameter corresponding to the coding block. [0050] – [0056]. 
In the iteration process, a Block Vector with Affine Model (BVAffi) of the coding block in the first affine motion mode needs to meet the following condition: a reference point (represented by refPoint) found for the control point according to BVAffi, and reference points (represented by refPoint) found for other corner points (samples except the control point in four left-top, right-top, left-bottom and right-bottom points) according to MVs deduced through formula (1) or formula (2) have been coded. In addition, for reducing the complexity in coding, the reference points may also be in the same Coding Tree Unit (CTU) as the coding block. After affine motion estimation is tried using the four-parameter affine model and the six-parameter affine model, coding prediction costs corresponding to the coding block are calculated from these MVs respectively. An obtained calculation result is represented by cost. In the embodiment of the present disclosure, cost may be equal to a sum of a predicted residual and the number of bits needed for transmission of a group of BVAffi of the coding block. Then, the group of coding parameters corresponding to the minimum cost are selected according to the calculation results and stored. In the first affine motion mode, the first coding parameter corresponding to the coding block includes the value of Affine_Flag, the value of Affine Type_Flag, the value of Merge_Flag, an MVP index, a Motion Vector Difference (MVD) between BVAffi and the MVP, the predicted residual, etc. [0130] – [0149]. 
[0178].), and wherein the affine IBC MBVD mode is to be applied ([0027], [0050] – [0056], [0098] – [0107]), and wherein an affine IBC merge candidate is selected, and BVs of control points are further refined by indications of BVD information, and wherein the BVD information for the BVs of all control points are the same, or wherein the BVD information for the BVs of at least two control points are different, or wherein the BVD information for the BVs of control points in the Affine_IBC_MBVD mode is different from that utilized for translational MBVD methods, or wherein the affine BVD candidate list only includes integer BVD candidates.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of deriving block vectors of control points used in the Affine_IBC mode according to one or more BVs from an affine BV candidate list because such incorporation would reduce the number of the coded bits and improve the coding rate. [0095].

Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2019/0246110 A1) in view of Lim et al. (US 2024/0372982 A1) and Huang et al. (US 2022/0311997 A1).
Consider claim 6, Xu teaches all the limitations in claim 1 but does not explicitly teach the IBC prediction mode based on multi-hypothesis is to be applied, and wherein one or more additional prediction signals for motion-compensating are indicated or derived, in addition to a conventional uni-prediction signal, or wherein the one or more additional prediction signals for motion-compensating are indicated or derived and different from that in a case where only uni-prediction signal is used, and wherein if one or more additional prediction signals for motion-compensating are indicated or derived, in addition to a conventional uni-prediction signal, a resulting overall prediction signal is derived by a sample-wise weighted superposition, and wherein the resulting overall prediction signal is accumulated iteratively with each additional prediction signal as: p_(n+1) = (1 − α_(n+1))·p_n + α_(n+1)·h_(n+1), wherein the resulting overall prediction signal is derived as the last weighted prediction signal having the largest index (n+1), and wherein one or two additional prediction signals are used, or wherein a weighting factor for the sample-wise weighted superposition is predefined, and wherein the weighting factor is set to ½, or wherein a weighting factor for the sample-wise weighted superposition is selected from a predefined set, and wherein the predefined set comprises one of the following: {½, ¼}, {¼, −⅛}, or {½, ¼, −⅛}, and wherein the weighting factor is specified by an index, or wherein a simplified Rate Distortion (RD) cost using Hadamard distortion measure and approximated bit rate is used for determining the best weighting factor, or wherein a weighting factor for the sample-wise weighted superposition is position-dependent for each sample, and wherein the weighting factor is set to 1, 0, or ½, or wherein a weighting factor for the sample-wise weighted superposition is indicated from encoder to decoder.
Huang teaches the IBC prediction mode based on multi-hypothesis is to be applied, and wherein one or more additional prediction signals for motion-compensating are indicated or derived, in addition to a conventional uni-prediction signal ([0068] – [0075]), or wherein the one or more additional prediction signals for motion-compensating are indicated or derived and different from that in a case where only uni-prediction signal is used, and wherein if one or more additional prediction signals for motion-compensating are indicated or derived, in addition to a conventional uni-prediction signal, a resulting overall prediction signal is derived by a sample-wise weighted superposition, and wherein the resulting overall prediction signal is accumulated iteratively with each additional prediction signal as: p_(n+1) = (1 − α_(n+1))·p_n + α_(n+1)·h_(n+1), wherein the resulting overall prediction signal is derived as the last weighted prediction signal having the largest index (n+1), and wherein one or two additional prediction signals are used ([0068] – [0075]), or wherein a weighting factor for the sample-wise weighted superposition is predefined, and wherein the weighting factor is set to ½, or wherein a weighting factor for the sample-wise weighted superposition is selected from a predefined set, and wherein the predefined set comprises one of the following: {½, ¼}, {¼, −⅛}, or {½, ¼, −⅛}, and wherein the weighting factor is specified by an index (In equation (2), p3 represents a resulting prediction signal, p_uni/bi represents a first prediction block from a first prediction hypothesis, h3 represents a second prediction block from a second prediction hypothesis, and α represents a weight value. In VVC, the weighting factor α is specified by the syntax element add_hyp_weight_idx, according to the following mapping:

  add_hyp_weight_idx   α
  0                    ¼
  1                    −⅛.
[0068] – [0075]), or wherein a simplified Rate Distortion (RD) cost using Hadamard distortion measure and approximated bit rate is used for determining the best weighting factor, or wherein a weighting factor for the sample-wise weighted superposition is position-dependent for each sample, and wherein the weighting factor is set to 1, 0, or ½, or wherein a weighting factor for the sample-wise weighted superposition is indicated from encoder to decoder.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of deriving one or more additional prediction signals for motion compensating in addition to a conventional uni-prediction signal because such incorporation would form a more accurate prediction block for a current block and reduce a bitrate associated with coding data used to form the prediction block. [0086].

Consider claim 7, Huang teaches if one or more additional prediction signals for motion-compensating are indicated or derived, in addition to a conventional uni-prediction signal, motion parameters of each additional prediction hypothesis are indicated by a first indicating mode in which the motion parameters of each additional prediction hypothesis are indicated explicitly by specifying a block vector predictor index and a block vector difference ([0081] – [0087]), or a second indicating mode in which the motion parameters of each additional prediction hypothesis are indicated implicitly by specifying a merge index, and wherein the first indicating mode and the second indicating mode are distinguished from each other by a separate multi-hypothesis IBC merge flag ([0081] – [0087]), or wherein multi-hypothesis motion estimation is performed in the first indicating mode, and wherein additional IBC prediction hypotheses are searched for a predefined number (N) of IBC modes having first N lowest Hadamard Rate Distortion (RD) costs, or wherein the
additional IBC prediction hypotheses are searched for two of IBC modes having first two lowest Hadamard RD costs, or wherein a motion estimation with a restricted search range is performed for the searching, and wherein the restricted search range is set to 16, or wherein a simplified Rate Distortion (RD) cost using Hadamard distortion measure and approximated bit rate is used for determining the best weighting factor, or wherein the additional prediction signals are explicitly indicated or implicitly inherited for a normal IBC merge mode ([0081] – [0087]), and wherein the explicitly indicated additional prediction signals use a same IBC advanced motion vector prediction (AMVP) candidate list which is generated for a first explicitly indicated additional prediction signal ([0081] – [0087]), or wherein the additional prediction signals are explicitly indicated or implicitly inherited except for an IBC SKIP mode, or wherein the additional prediction signals are explicitly indicated or implicitly inherited for the MBVD mode, and wherein the additional prediction signals are explicitly indicated or implicitly inherited except for an MBVD SKIP mode, and wherein there is no inheritance or merging of the additional prediction signals from merging candidates, or wherein all explicitly indicated additional prediction signals use the same advanced motion vector prediction (AMVP) candidate list which is generated for the first explicitly signaled additional prediction signal ([0068] – [0075]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of deriving one or more additional prediction signals for motion compensating in addition to a conventional uni-prediction signal because such incorporation would form a more accurate prediction block for a current block and reduce a bitrate associated with coding data used to form the prediction block. [0086]. 
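The sample-wise weighted superposition discussed for claims 6-7 accumulates each extra hypothesis into the running prediction. A minimal single-sample sketch, using the add_hyp_weight_idx-to-α mapping quoted from Huang (the function name is illustrative, not from the references):

```python
from fractions import Fraction

# Weight mapping quoted from Huang: add_hyp_weight_idx 0 -> 1/4, 1 -> -1/8
ALPHA = {0: Fraction(1, 4), 1: Fraction(-1, 8)}

def superpose(p_uni, hypotheses):
    """Iteratively accumulate additional prediction signals:
    p_{n+1} = (1 - alpha_{n+1}) * p_n + alpha_{n+1} * h_{n+1};
    the overall prediction is the last weighted signal (largest index)."""
    p = Fraction(p_uni)
    for weight_idx, h in hypotheses:
        alpha = ALPHA[weight_idx]
        p = (1 - alpha) * p + alpha * h
    return p
```

For one extra hypothesis with index 0: `superpose(100, [(0, 80)])` gives (3/4)·100 + (1/4)·80 = 95.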
Consider claim 8, Huang teaches if one or more additional prediction signals for motion-compensating are indicated or derived, in addition to a conventional uni-prediction signal, the additional prediction signals are explicitly indicated or implicitly inherited for a sub-block IBC merge mode ([0064], [0068] – [0075]), and wherein the additional prediction signals are explicitly indicated or implicitly inherited except for a sub-block IBC SKIP mode, and wherein there is no inheritance or merging of the additional prediction signals from merging candidates, or wherein all explicitly indicated additional prediction signals use the same advanced motion vector prediction (AMVP) candidate list which is generated for the first explicitly signaled additional prediction signal ([0068] – [0075]), or wherein the additional prediction signals are explicitly indicated or implicitly inherited for a non-affine IBC AMVP mode, and wherein only one IBC AMVP candidate list is to be constructed, and wherein the only one IBC AMVP candidate list is to be constructed for a non-additional prediction signal, or wherein the IBC AMVP candidate list is reused for the additional prediction signals ([0068] – [0075]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of deriving one or more additional prediction signals for motion compensating in addition to a conventional uni-prediction signal because such incorporation would form a more accurate prediction block for a current block and reduce a bitrate associated with coding data used to form the prediction block. [0086].

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2019/0246110 A1) in view of Lim et al. (US 2024/0372982 A1), Huang et al. (US 2022/0311997 A1) and Yan et al. (US 2024/0171735 A1).
Consider claim 9, the combination of Xu and Huang teaches all the limitations in claim 6 and if one or more additional prediction signals for motion-compensating are indicated or derived, in addition to a conventional uni-prediction signal, the additional prediction signals are explicitly indicated or implicitly inherited for an affine IBC AMVP mode ([0059] – [0063]), and wherein the additional prediction signals only support translational prediction signals, or wherein an IBC AMVP candidate list is to be constructed ([0068] – [0075]), and wherein the IBC AMVP candidate list is to be constructed for a non-additional prediction signal, or wherein the IBC AMVP candidate list is reused for the additional prediction signals ([0068] – [0075]) but does not explicitly teach wherein an affine IBC top left control point motion vector (MV) predictor is used as a MV predictor for the additional translational prediction signals, or wherein an affine IBC top right or bottom left control point motion vector (MV) predictor is used as an MV predictor for the additional translational prediction signals. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of deriving one or more additional prediction signals for motion compensating in addition to a conventional uni-prediction signal because such incorporation would form a more accurate prediction block for a current block and reduce a bitrate associated with coding data used to form the prediction block. [0086]. Yan teaches wherein an affine IBC top left control point motion vector (MV) predictor is used as a MV predictor for the additional translational prediction signals, or wherein an affine IBC top right or bottom left control point motion vector (MV) predictor is used as an MV predictor for the additional translational prediction signals ([0086], [0087], [0174]). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of using an affine IBC top left, top right, or bottom left control point motion vector predictor as an MV predictor for the additional translational prediction signals because such incorporation would improve the coding efficiency of affine motion compensated prediction. [0023].

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2019/0246110 A1) in view of Lim et al. (US 2024/0372982 A1) and Ikai (US 2019/0191171 A1).

Consider claim 11, Xu teaches all the limitations in claim 1 but does not explicitly teach the IBC mode based on the OBMC is to be applied, and wherein a motion type of the target video block and a neighboring video block used for the OBMC are the same for at least one of the following: a coding unit (CU) boundary OBMC, or a subblock boundary OBMC, or wherein the motion type is IBC or regular inter, and wherein the IBC mode based on the OBMC is to be applied, or wherein a motion type of the target video block and a neighboring video block used for the OBMC are different for at least one of the following: a coding unit (CU) boundary OBMC, or a subblock boundary OBMC, and wherein one motion type is IBC and the other motion type is regular inter, or wherein when and/or how to apply the OBMC for IBC coded blocks is different from those for non-IBC coded blocks, and wherein a setting of weights for the OBMC for the IBC coded blocks is different from that for the non-IBC coded blocks, or wherein the BV derived from a candidate list for an IBC coded block with geometry or triangle partitions are further refined before being used to derive the prediction signal, or wherein the IBC mode based on geometric partitioning (GPM_IBC) with MBVD is to be applied, and wherein additional block vector differences (BVDs) are further applied on top of an existing GPM_IBC
merge candidate. Ikai teaches the IBC mode based on the OBMC is to be applied, and wherein a motion type of the target video block and a neighboring video block used for the OBMC are the same for at least one of the following: a coding unit (CU) boundary OBMC, or a subblock boundary OBMC, or wherein the motion type is IBC or regular inter, and wherein the IBC mode based on the OBMC is to be applied, or wherein a motion type of the target video block and a neighboring video block used for the OBMC are different for at least one of the following: a coding unit (CU) boundary OBMC, or a subblock boundary OBMC, and wherein one motion type is IBC and the other motion type is regular inter ([0281] – [0296]), or wherein when and/or how to apply the OBMC for IBC coded blocks is different from those for non-IBC coded blocks, and wherein a setting of weights for the OBMC for the IBC coded blocks is different from that for the non-IBC coded blocks, or wherein the BV derived from a candidate list for an IBC coded block with geometry or triangle partitions are further refined before being used to derive the prediction signal, or wherein the IBC mode based on geometric partitioning (GPM_IBC) with MBVD is to be applied, and wherein additional block vector differences (BVDs) are further applied on top of an existing GPM_IBC merge candidate.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of using different coding unit boundary OBMC or a subblock boundary OBMC because such incorporation would decrease the memory bandwidth. [0326].

Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2019/0246110 A1) in view of Lim et al. (US 2024/0372982 A1), Ikai (US 2019/0191171 A1) and Chen et al. (US 2023/0034458 A1).
Consider claim 12, the combination of Xu and Ikai teaches all the limitations in claim 11 but does not explicitly teach if a setting of weights for the OBMC for the IBC coded blocks is different from that for the non-IBC coded blocks, the additional BVDs are indicated in a same manner as the MBVD, and wherein two flags separately indicate whether an additional BVD is applied to each GPM_IBC partition, and wherein one single flag is indicated to jointly control whether the additional BVD is applied to each GPM_IBC partition, or wherein if a flag of one GPM_IBC partition is true, a BVD corresponding to the GPM_IBC partition is indicated in a same way as the MBVD, and wherein the BVD is indicated by one distance index plus one direction index. Chen teaches if a setting of weights for the OBMC for the IBC coded blocks is different from that for the non-IBC coded blocks, the additional BVDs are indicated in a same manner as the MBVD, and wherein two flags separately indicate whether an additional BVD is applied to each GPM_IBC partition, and wherein one single flag is indicated to jointly control whether the additional BVD is applied to each GPM_IBC partition ([0154] – [0156], [0162] – [0168], [0172] – [0175]), or wherein if a flag of one GPM_IBC partition is true, a BVD corresponding to the GPM_IBC partition is indicated in a same way as the MBVD, and wherein the BVD is indicated by one distance index plus one direction index. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chen into the combination of Xu and Ikai because such incorporation would reduce complexity and signaling overhead. [0151]. 
Consider claim 13, Chen teaches the IBC mode based on geometric partitioning (GPM_IBC) with MBVD is to be applied, and wherein merge indices of two GPM_IBC partitions are allowed to be the same if the block vector differences (BVDs) that are applied to the two partitions are not identical ([0154] – [0156], [0162] – [0163], [0168] – [0178]), or wherein the IBC mode based on geometric partitioning (GPM_IBC) with MBVD is to be applied, and wherein a BV pruning procedure is introduced to construct a GPM_IBC merge candidate list if the GPM_IBC with MBVD is applied, and wherein the BV pruning procedure may be based on a threshold, and wherein if differences of horizontal and vertical components for two BVs are both smaller than a threshold, one of them is removed from the GPM_IBC merge candidate list, or wherein if horizontal and vertical components for two BVs are both the same, one of them is removed from the GPM_IBC merge candidate list, and wherein the threshold is decided by a size of the target video block, or wherein the threshold is predefined ([0181] – [0184]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chen into the combination of Xu and Ikai because such incorporation would reduce complexity and signaling overhead ([0151]).

Claim(s) 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2019/0246110 A1) in view of Lim et al. (US 2024/0372982 A1) and Chen et al. (US 2021/0160528 A1) (hereinafter “Chen II”).
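The BV pruning procedure mapped in the claim 13 rejection above reduces to a simple threshold test during candidate-list construction: a new BV is dropped if both its component differences to an already-kept BV fall below the threshold. A hedged Python sketch (the candidate format and function name are assumptions; per the claim, the threshold could also be derived from the block size):

```python
def prune_bv_candidates(candidates, threshold):
    """Build a GPM_IBC merge candidate list, removing a block vector (BV)
    when the differences of its horizontal and vertical components to an
    already-kept BV are both smaller than the threshold."""
    kept = []
    for bvx, bvy in candidates:
        redundant = any(
            abs(bvx - kx) < threshold and abs(bvy - ky) < threshold
            for kx, ky in kept
        )
        if not redundant:
            kept.append((bvx, bvy))
    return kept
```

The claim's second alternative (drop only when both components are identical) is the same loop with the strict-inequality test replaced by an equality test.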
Consider claim 14, Xu teaches all the limitations in claim 1 but does not explicitly teach the IBC mode based on geometric partitioning (GPM_IBC) with MBVD is to be applied, and wherein a distance index specifies motion magnitude information and indicates a pre-defined offset from a starting point, and wherein the pre-defined offset comprises an offset added to at least one of the following: a horizontal component of a starting BV, or a vertical component of the starting BV, and wherein the predefined offset is one of 1 pixel, 2 pixels, 4 pixels, 8 pixels, 16 pixels, or 32 pixels, or wherein the predefined offset is one of 1 pixel, 2 pixels, 4 pixels, 8 pixels, 16 pixels, 32 pixels, 64 pixels, or 128 pixels, or wherein the predefined offset is one of 1 pixel, 2 pixels, 3 pixels, 4 pixels, 6 pixels, 8 pixels, or 16 pixels, or wherein the predefined offset is one of 1 pixel, 2 pixels, 3 pixels, 4 pixels, 6 pixels, 8 pixels, 16 pixels, 32 pixels, or 64 pixels.

Chen II teaches the IBC mode based on geometric partitioning (GPM_IBC) with MBVD is to be applied, and wherein a distance index specifies motion magnitude information and indicates a pre-defined offset from a starting point, and wherein the pre-defined offset comprises an offset added to at least one of the following: a horizontal component of a starting BV, or a vertical component of the starting BV ([0033], [0149] – [0150], [0184] – [0185]), and wherein the predefined offset is one of 1 pixel, 2 pixels, 4 pixels, 8 pixels, 16 pixels, or 32 pixels ([0033], [0149] – [0150], [0184] – [0185]), or wherein the predefined offset is one of 1 pixel, 2 pixels, 4 pixels, 8 pixels, 16 pixels, 32 pixels, 64 pixels, or 128 pixels, or wherein the predefined offset is one of 1 pixel, 2 pixels, 3 pixels, 4 pixels, 6 pixels, 8 pixels, or 16 pixels, or wherein the predefined offset is one of 1 pixel, 2 pixels, 3 pixels, 4 pixels, 6 pixels, 8 pixels, 16 pixels, 32 pixels, or 64 pixels.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chen II into the method of Xu because such incorporation would increase the coding efficiency of motion vector coding ([0056]).

Consider claim 15, Chen II teaches the IBC mode based on geometric partitioning (GPM_IBC) with MBVD is to be applied, and wherein a direction index represents a direction of the block vector difference (BVD) relative to a starting point, and wherein the direction index represents a predefined number of BVD directions, or wherein the predefined number is set to 4, or wherein 2 horizontal directions and 2 vertical directions are used for the BVD directions, or wherein 4 diagonal directions are used for the BVD directions, or wherein the predefined number is set to 8, or wherein 4 diagonal directions plus 2 horizontal directions and 2 vertical directions are used for the BVD directions ([0033], [0149] – [0150], [0184] – [0185]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chen II into the method of Xu because such incorporation would increase the coding efficiency of motion vector coding ([0056]).

Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2019/0246110 A1) in view of Lim et al. (US 2024/0372982 A1) and Chen et al. (US 2023/0034458 A1).
Consider claim 16, Xu teaches all the limitations in claim 1 but does not explicitly teach the IBC mode based on geometric partitioning (GPM_IBC) with TM is to be applied, and wherein if the GPM_IBC mode is enabled for a coding unit (CU), a CU-level flag indicates whether TM is applied to both geometric partitions, or wherein the IBC mode based on geometric partitioning (GPM_IBC) with TM is to be applied, and wherein if the GPM_IBC mode is enabled for a coding unit (CU), two CU-level flags indicate whether TM is applied to each geometric partition, or wherein the IBC mode based on geometric partitioning (GPM_IBC) with TM is to be applied, and wherein motion information for at least one geometric partition is refined using the TM.

Chen teaches the IBC mode based on geometric partitioning (GPM_IBC) with TM is to be applied, and wherein if the GPM_IBC mode is enabled for a coding unit (CU), a CU-level flag indicates whether TM is applied to both geometric partitions, or wherein the IBC mode based on geometric partitioning (GPM_IBC) with TM is to be applied, and wherein if the GPM_IBC mode is enabled for a coding unit (CU), two CU-level flags indicate whether TM is applied to each geometric partition, or wherein the IBC mode based on geometric partitioning (GPM_IBC) with TM is to be applied, and wherein motion information for at least one geometric partition is refined using the TM ([0120] – [0124], [0148] – [0149]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chen into the method of Xu because such incorporation would reduce complexity and signaling overhead ([0151]).

Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2019/0246110 A1) in view of Lim et al. (US 2024/0372982 A1), Chen et al. (US 2023/0034458 A1) and Chang et al. (US 2022/0329822 A1).
Consider claim 17, the combination of Xu and Chen teaches all the limitations in claim 16 but does not explicitly teach when a CU-level flag indicates whether TM is applied to both geometric partitions, wherein if only the above template is available for the target video block, the GPM_IBC_TM mode uses the above template, or wherein if only the left template is available for the target video block, the GPM_IBC_TM mode uses the left template, or wherein if both above and left templates are available for the target video block, the GPM_IBC_TM mode uses at least one of the following: the left template, the above template, or both above and left templates.

Chang teaches when a CU-level flag indicates whether TM is applied to both geometric partitions, wherein if only the above template is available for the target video block, the GPM_IBC_TM mode uses the above template, or wherein if only the left template is available for the target video block, the GPM_IBC_TM mode uses the left template, or wherein if both above and left templates are available for the target video block, the GPM_IBC_TM mode uses at least one of the following: the left template, the above template, or both above and left templates ([0120] – [0124], [0128], [0165] – [0168], [0173] – [0187], [0204]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Chang into the combination of Xu and Chen because such incorporation would improve operation of video coding technologies ([0005]).

Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. (US 2019/0246110 A1) in view of Lim et al. (US 2024/0372982 A1) and Li et al. (US 2023/0075788 A1).
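The template availability logic mapped to Chang in the claim 17 rejection above is a three-way selection over the reconstructed neighboring samples. A minimal sketch (function and label names are hypothetical):

```python
def select_template(above_available: bool, left_available: bool):
    """Pick the reconstructed-neighbor template(s) used for template
    matching in GPM_IBC_TM: above only, left only, or both when both
    are available (the claim also allows choosing a subset of the two)."""
    if above_available and left_available:
        return ("above", "left")
    if above_available:
        return ("above",)
    if left_available:
        return ("left",)
    return ()  # no template available: TM refinement would be skipped
```

Availability here would be determined by picture/tile boundaries and decoding order, which is why the claim enumerates the above-only and left-only fallbacks.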
Consider claim 5, Xu teaches all the limitations in claim 1 but does not explicitly teach if the derived BV by the TM_IBC is used as the base candidates for the MBVD, the BV is further refined by indicated block vector difference (BVD) information, and wherein BVDs are indicated in a same manner as MBVD, or wherein the BVDs are signaled in the same manner as the IBC non-merge mode, or wherein a syntax element indicates whether the derived BV by the TM_IBC is further refined by the MBVD, and wherein the syntax element is indicated only if the TM_IBC mode is applied, wherein the derived BV by the TM_IBC is the only candidate for the IBC non-merge mode if the derived BV is available, or wherein the derived BV by the TM_IBC is the k-th candidate for the IBC non-merge mode if the derived BV is available, and wherein the k-th candidate is the first candidate, or wherein a syntax element indicates whether the derived BV by the TM_IBC is used as a BV prediction candidate for the IBC non-merge mode, and wherein the syntax element is indicated only if the TM_IBC mode is applied.

Li teaches if the derived BV by the TM_IBC is used as the base candidates for the MBVD, the BV is further refined by indicated block vector difference (BVD) information ([0107], [0166] – [0168], [0172] – [0176]), and wherein BVDs are indicated in a same manner as MBVD, or wherein the BVDs are signaled in the same manner as the IBC non-merge mode, or wherein a syntax element indicates whether the derived BV by the TM_IBC is further refined by the MBVD, and wherein the syntax element is indicated only if the TM_IBC mode is applied ([0107], [0166] – [0168], [0172] – [0176]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Li into the method of Xu because such incorporation would improve coding efficiency ([0094]).
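The claim 5 mapping above describes a template-matching-derived BV that is used as the base candidate and optionally refined by a signalled BVD, gated by a syntax element present only when TM_IBC applies. A hedged sketch of that flow (the flag and function names are hypothetical, not the references' actual syntax elements):

```python
def reconstruct_bv(tm_derived_bv, mbvd_flag, bvd):
    """Return the final block vector: the TM_IBC-derived BV serves as
    the base candidate and, only when the (hypothetical) mbvd_flag
    syntax element is set, it is further refined by the signalled BVD."""
    if not mbvd_flag:
        return tm_derived_bv
    return (tm_derived_bv[0] + bvd[0], tm_derived_bv[1] + bvd[1])
```

Per the claim, a decoder would parse `mbvd_flag` only when the TM_IBC mode is applied, so the flag adds no signaling cost to non-TM blocks.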
Allowable Subject Matter

Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAT CHI CHIO whose telephone number is (571)272-9563. The examiner can normally be reached Monday-Thursday 10am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JAMIE J ATALA, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAT C CHIO/
Primary Examiner, Art Unit 2486

Prosecution Timeline

Mar 25, 2024: Application Filed
May 12, 2025: Non-Final Rejection — §103
Aug 15, 2025: Response Filed
Sep 17, 2025: Non-Final Rejection — §103
Dec 19, 2025: Response Filed
Mar 31, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587653: Spatial Layer Rate Allocation
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12549764: THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12549845: CAMERA SETTING ADJUSTMENT BASED ON EVENT MAPPING
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12546657: METHODS AND SYSTEMS FOR REMOTE MONITORING OF ELECTRICAL EQUIPMENT
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12549710: MULTIPLE HYPOTHESIS PREDICTION WITH TEMPLATE MATCHING IN VIDEO CODING
Granted Feb 10, 2026 (2y 5m to grant)
Based on the examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 73%
Grant Probability With Interview: 90% (+16.6%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 836 resolved cases by this examiner. Grant probability derived from career allow rate.
