Prosecution Insights
Last updated: April 19, 2026
Application No. 18/684,236

CANDIDATE REORDERING AND MOTION VECTOR REFINEMENT FOR GEOMETRIC PARTITIONING MODE

Status: Non-Final OA (§103)
Filed: Feb 16, 2024
Examiner: HAQUE, MD NAZMUL
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: MediaTek Inc.
OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Est. Time to Grant: 2y 8m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 83% (531 granted / 641 resolved; +24.8% vs TC avg — above average)
Interview Lift: +15.7% for resolved cases with interview (strong)
Typical Timeline: 2y 8m avg prosecution; 31 applications currently pending
Career History: 672 total applications across all art units

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 66.0% (+26.0% vs TC avg)
§102: 4.5% (-35.5% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)
Based on career data from 641 resolved cases; Tech Center averages are estimates.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. There are a total of 14 claims, and claims 1-14 are pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/02/2026 has been entered.

Response to Arguments

Applicant's arguments, filed on 01/02/2026 with respect to claims 1-14 in the remarks, have been considered but are moot in view of the new grounds of rejection necessitated by the new limitations added to claims 1 and 14. See the rejection below of claims 1 and 14 for relevant citations found in Chen disclosing the newly added limitations.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Chang et al. (US 2022/0329822 A1; provisional app. 63/176,798, filed Apr. 19, 2021) in view of Chen et al. (US 2020/0068218 A1).

Regarding claim 1, Chang discloses a video coding method comprising ([para 0002] - video encoding and decoding): receiving data to be encoded or decoded as a current block of a current picture of a video ([Fig. 1-2] - input video data for encoding and decoding), wherein the current block is partitioned into first and second partitions by a bisecting line defined by an angle-distance pair ([see Fig. 8A and abstract] - determine that a current block of the video data is inter-predicted in a combined inter-intra prediction (CIIP) mode or a geometric partitioning mode (GPM)); identifying a list of candidate prediction modes for coding the first and second partitions ([see Fig. 8A, para 0004, and abstract] - processing circuitry is configured to determine that a current block of the video data is inter-predicted in a combined inter-intra prediction (CIIP) mode or a geometric partitioning mode (GPM), determine that template matching is enabled for the current block, and generate a motion vector for the current block based on template matching); computing a template matching (TM) cost for each candidate prediction mode in the list ([para 0082-0083] - the matching cost C of template matching is calculated as C = SAD + w * (|MVx − MVx^s| + |MVy − MVy^s|), where w is a weighting factor that can be set to an integer such as 0, 1, 2, 3, or 4, and MV and MV^s indicate the currently tested MV and the initial MV (e.g., an MVP candidate in AMVP mode or merged motion in merge mode), respectively; SAD is used as the matching cost of template matching); receiving or signaling a selection of a candidate prediction mode in the reordered list based on an index that is assigned to the selected candidate prediction mode ([para 0082-0085] - the template matching cost function, as quoted above; [para 0167] - if the merge index is smaller than a threshold, TM refinement is applied; in another example, if the merge index is larger than a threshold, TM refinement is applied. In some examples, the merge list may be different from the merge list when the first flag is false. In some examples, the merge list is constructed by picking a subset of candidates from the merge list that is used when the first flag is false. Some reordering of the candidates may be applied, and the reordering may be based on the TM cost of the merge candidates).

However, Chang does not explicitly disclose reconstructing the current block by using the selected candidate prediction mode to predict the first and second partitions, or reordering the candidate prediction modes in the list based on the TM cost for each candidate prediction mode in the list. In an analogous art, Chen discloses reconstructing the current block by using the selected candidate prediction mode to predict the first and second partitions ([see Fig. 15A] - FIG. 15A illustrates an example of candidate reordering based on partition for a current block, where the solid lines correspond to quadtree partition and the dashed lines correspond to binary-tree partition); and reordering the candidate prediction modes in the list based on the TM cost for each candidate prediction mode in the list ([see Fig. 8, step 940 in Fig. 9, and step 1640 in Fig. 16] - in step 910, multiple candidates in the candidate list are selected for reordering; in step 920, the costs for the selected candidates are estimated; in step 930, an optional step for cost adjustment is performed). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Chen to the system of Chang, reordering candidates in a candidate list using the template of a current block and the template of one or more reference blocks to improve coding performance [Chen; para 0002].
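The template-matching cost that the rejection repeatedly cites (Chang, para 0082-0083) combines a SAD term over templates with a weighted motion-vector distance penalty. A minimal Python sketch of that formula follows; the function and variable names (`sad`, `tm_cost`) are illustrative, not taken from the references.

```python
# Sketch of the cited TM cost:
#   C = SAD + w * (|MVx - MVx_s| + |MVy - MVy_s|)
# where SAD compares the current block's template with the reference
# template, and (MVx_s, MVy_s) is the initial (unrefined) motion vector.

def sad(template_a, template_b):
    """Sum of absolute differences between two equal-size templates."""
    return sum(abs(a - b) for a, b in zip(template_a, template_b))

def tm_cost(cur_template, ref_template, mv, initial_mv, w=4):
    """TM matching cost: SAD plus a weighted MV-distance penalty."""
    mvx, mvy = mv
    mvx_s, mvy_s = initial_mv
    return sad(cur_template, ref_template) + w * (abs(mvx - mvx_s) + abs(mvy - mvy_s))

# Example: SAD = |10-12| + |20-18| + |30-33| = 7, penalty = 4 * (2 + 1) = 12
cost = tm_cost([10, 20, 30], [12, 18, 33], mv=(2, 1), initial_mv=(0, 0), w=4)  # 19
```

The penalty term biases refinement toward MVs near the initial candidate, which matches Chang's description of w as a small integer weighting factor.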
Regarding claim 2, Chang discloses wherein the TM cost of a candidate prediction mode is computed by matching a current template of the current block with a combined template of a first reference template of the first partition and a second reference template of the second partition ([para 0031] - the video decoder may generate a first motion vector for the first partition using template matching, and generate a second motion vector for the second partition using template matching. The video decoder may determine a first prediction partition based on the first motion vector, and determine a second prediction partition based on the second motion vector. The video decoder may combine the first prediction partition and the second prediction partition to determine the prediction block).

Regarding claim 3, Chang discloses wherein different candidate prediction modes in the list correspond to different bisecting lines that are defined by different angle-distance pairs ([see Fig. 6] - with an AMVP candidate selected based on initial matching error, the MVP (motion vector predictor) for the AMVP candidate is refined by template matching. With a merge candidate indicated by a signaled merge index, the merged MVs of the merge candidate corresponding to L0 (reference picture list 0) and L1 (reference picture list 1) are refined independently by template matching, and then the less accurate one is further refined again with the better one as a prior).

Regarding claim 4, Chang discloses wherein different candidate prediction modes in the list correspond to different motion vectors, wherein the selected candidate prediction mode corresponds to a candidate motion vector that is selected from the list to generate an inter-prediction for reconstructing the first partition or the second partition of the current block ([see Fig. 6, as quoted for claim 3; see also para 0082-0085]).

Regarding claim 5, Chen discloses wherein the candidate motion vectors in the list are sorted according to the computed TM costs of the candidate motion vectors ([para 0105] - the reference block for estimating the cost may correspond to the block pointed to by the integer part of the motion vector of the current merge candidate. The L-shape template of the reference block has several embodiments. In one embodiment, all pixels of the L-shape template are outside the reference block for estimating the cost, as shown in FIG. 11A, where the L-shaped area 1112 for the current block 1110 and the L-shaped area 1122 for the reference block 1120 are indicated. The reference block 1120 is located based on the motion vector 1140 pointing from the current block 1110. In another embodiment, all pixels of the L-shape template 1123 are inside the reference block for estimating the cost. In yet another embodiment, some pixels of the L-shape template are outside the reference block and some are inside. FIG. 11B shows another embodiment of a template similar to FIG. 11A, except that the template of the current block comprises a number of pixel rows 1114 above the current block 1110 and a number of pixel columns 1116 to the left of the current block 1110).
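The reordering at the heart of claim 5 (candidates sorted by ascending TM cost, so the likeliest candidates receive the smallest, cheapest-to-signal indices) can be sketched in a few lines. All names here are hypothetical, not taken from the claims or the cited references.

```python
# Illustrative sketch: reorder a candidate list ascending by TM cost so
# the cheapest candidate lands at index 0.

def reorder_by_tm_cost(candidates, tm_costs):
    """Return the candidate list sorted by ascending TM cost."""
    return sorted(candidates, key=lambda cand: tm_costs[cand])

# Hypothetical precomputed TM costs for three candidate prediction modes:
tm_costs = {"mode_a": 42, "mode_b": 7, "mode_c": 19}
reordered = reorder_by_tm_cost(list(tm_costs), tm_costs)
# The signaled index is the candidate's position in the reordered list,
# so the cheapest candidate ("mode_b") gets index 0.
```

Because both encoder and decoder can compute TM costs from already-reconstructed template samples, the same reordering is derivable on both sides without extra signaling; only the (now shorter on average) index is coded.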
Regarding claim 6, Chang discloses wherein the list of candidate prediction modes comprises (i) only uni-prediction candidates and no bi-prediction candidates when the current block is greater than a threshold size ([para 0064] - video encoder 200 may predict the current CU using uni-directional prediction or bi-directional prediction; [see also para 0085, 0193]) and (ii) merge candidates when the current block is less than a threshold size ([para 0081] - with an AMVP candidate selected based on initial matching error, the MVP (motion vector predictor) for the AMVP candidate is refined by template matching. With a merge candidate indicated by a signaled merge index, the merged MVs of the merge candidate corresponding to L0 (reference picture list 0) and L1 (reference picture list 1) are refined independently by template matching, and then the less accurate one is further refined again with the better one as a prior. Reference picture list 0 and reference picture list 1 refer to the reference picture lists constructed by video encoder 200 and video decoder 300 to identify the reference picture used for inter-prediction).

Regarding claim 7, Chang discloses wherein the first partition is coded by inter-prediction that references samples in a reference picture and the second partition is to be coded by intra-prediction that references neighboring samples of the current block in the current picture ([abstract] - processing circuitry is configured to determine that a current block of the video data is inter-predicted in a combined inter-intra prediction (CIIP) mode or a geometric partitioning mode (GPM)).
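Claims 1, 3, and 7 all rest on the block being split into two partitions by a bisecting line defined by an angle-distance pair. A minimal sketch of such a split follows; the signed-distance-from-center parameterization is my assumption for illustration (the actual GPM angle/distance tables in VVC are more involved), and `gpm_partition_mask` is a hypothetical name.

```python
import math

# Assign each sample of a width x height block to partition 0 or 1
# according to which side of the angle-distance line it falls on.

def gpm_partition_mask(width, height, angle_deg, distance):
    """Per-sample partition ids (0/1) for a block bisected by the line."""
    theta = math.radians(angle_deg)
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0  # block center
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            # Signed distance of the sample from the bisecting line.
            d = (x - cx) * math.cos(theta) + (y - cy) * math.sin(theta) - distance
            row.append(0 if d < 0 else 1)
        mask.append(row)
    return mask

# angle 0, distance 0 gives a vertical split down the middle of a 4x4 block:
# every row is [0, 0, 1, 1].
```

Varying the angle-distance pair yields the family of bisecting lines that claim 3 maps to different candidate prediction modes.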
Regarding claim 8, Chang discloses wherein the first and second partitions are coded by inter-prediction that uses first and second motion vectors from the list to reference samples in first and second reference pictures ([para 0004] - coding techniques include spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (e.g., a video picture or a portion of a video picture) may be partitioned into video blocks, which may also be referred to as coding tree units (CTUs), coding units (CUs), and/or coding nodes. Video blocks in an intra-coded (I) slice of a picture are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture).

Regarding claim 9, Chang discloses wherein reconstructing the current block comprises using refined motion vectors to generate predictions for the first and second partitions, wherein a refined motion vector is identified by searching for a motion vector having a lowest TM cost based on an initial motion vector ([see Fig. 6] - video decoder 300 may determine reference templates within search area 608. As one example, FIG. 6 illustrates reference templates 610 and 612. Video decoder 300 may determine which reference templates within search area 608 substantially match (e.g., have the lowest value of a cost function) current templates 602 and 604 within a current picture that includes current block 600).

Regarding claim 10, Chang discloses wherein searching for the motion vector having the lowest TM cost comprises iteratively applying a search pattern centered at a motion vector identified as having a lowest TM cost from a previous iteration ([para 0085] - the search method in template matching. MV refinement is a pattern-based MV search with the criterion of template matching cost and a hierarchical structure. Two search patterns are supported: a diamond search and a cross search for MV refinement. The hierarchical structure specifies an iterative process to refine the MV, starting at a coarse MVD precision (e.g., quarter-pel) and ending at a fine one (e.g., 1/8-pel). The MV is directly searched at quarter luma sample MVD precision with the diamond pattern, followed by quarter luma sample MVD precision with the cross pattern, and then followed by one-eighth luma sample MVD refinement with the cross pattern).

Regarding claim 11, Chang discloses wherein searching for the motion vector having the lowest TM cost comprises applying different search patterns at different resolutions in different iterations ([para 0085, as quoted for claim 10]).

Regarding claim 12, Chang discloses wherein the list of candidate prediction modes comprises one or more merge candidates, wherein the TM cost of a merge candidate is computed by matching a current template of the current block with a reference template of a block of pixels referenced by the merge candidate ([para 0103] - video decoder 300 may receive an index into the merge candidate list that identifies the motion information (e.g., motion vector information). For generating the motion vector for the current block based on template matching, video decoder 300 may determine a search area in a reference picture based on the initial motion vector, determine reference templates within the search area that substantially match current templates within a current picture that includes the current block, and determine the motion vector for the current block based on the determined reference templates).

Regarding claim 13, Chang discloses wherein the list of candidate prediction modes further comprises one or more geometric prediction mode (GPM) candidates, wherein the TM cost of a GPM candidate is computed by matching a current template of the current block with a combined template of a first reference template of the first partition and a second reference template of the second partition ([para 0114] - determine a prediction block for the current block based on the motion vector in accordance with GPM. For instance, video decoder 300 may determine the prediction block based on the first motion vector, or based on the first motion vector and the second motion vector, where the first motion vector and the second motion vector are generated using template matching. As an example, video decoder 300 may determine a first prediction partition based on the first motion vector, determine a second prediction partition based on the second motion vector, and combine the first prediction partition and the second prediction partition to determine the prediction block).

Regarding claim 14, the claim is interpreted and rejected for the same reasons as set forth for claim 1; all limitations of claim 14 have been addressed in the rejection of claim 1.

Citation of Pertinent Prior Art

The following prior art is made of record and, while not relied upon, is considered pertinent to applicant's disclosure:
1. Karczewicz et al., US 2011/0002388 A1, discloses video coding techniques that use template matching motion prediction.
2. Liu et al., US 2017/0353730 A1, discloses techniques to reduce the complexity or improve the coding efficiency associated with template-based intra prediction.
3. Panusopone et al., US 2017/0339404 A1, discloses a template matching scheme for coding with intra prediction in JVET.
4. Park et al., US 2009/0232215 A1, discloses using template matching adaptively according to a template region range.
5. Zhang et al., US 2024/0267533 A1, discloses refinement of coding data.
6. Park et al., US 2021/0037238 A1, discloses an image decoding method and a device therefor in an image coding system according to inter-prediction.
7. Jung et al., US 2021/0014522 A1, discloses a video signal processing method and apparatus for encoding or decoding a video signal.
8. Xiu et al., US 2020/0374513 A1, discloses video coding methods for reducing latency in template-based inter coding.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MD NAZMUL HAQUE, whose telephone number is (571) 272-5328. The examiner can normally be reached IFW. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at (571) 272-7327. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MD N HAQUE/
Primary Examiner, Art Unit 2487
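The hierarchical MV refinement cited for claims 10 and 11 (a diamond search followed by cross searches, moving from quarter-pel to eighth-pel precision, each pass re-centered on the best MV found so far) can be sketched as below. The pattern offsets, stage schedule, and function names are my assumptions for illustration, not the codec's actual tables; MVs are in 1/8-pel units, so a step of 2 is quarter-pel.

```python
# Candidate offsets for the two cited search patterns (illustrative shapes).
DIAMOND = [(0, -2), (-1, -1), (1, -1), (-2, 0), (2, 0), (-1, 1), (1, 1), (0, 2)]
CROSS = [(0, -1), (-1, 0), (1, 0), (0, 1)]

def refine_mv(initial_mv, cost_fn,
              stages=((DIAMOND, 2), (CROSS, 2), (CROSS, 1))):
    """Pattern-based MV search: diamond at quarter-pel, then cross at
    quarter-pel, then cross at eighth-pel, re-centering on the best MV."""
    best = initial_mv
    best_cost = cost_fn(best)
    for pattern, step in stages:
        improved = True
        while improved:  # keep re-centering until no neighbor is cheaper
            improved = False
            for dx, dy in pattern:
                cand = (best[0] + dx * step, best[1] + dy * step)
                cost = cost_fn(cand)
                if cost < best_cost:
                    best, best_cost = cand, cost
                    improved = True
    return best
```

With a convex stand-in cost such as `lambda mv: (mv[0] - 3) ** 2 + (mv[1] + 2) ** 2`, the search converges to the minimum at (3, -2); in the real scheme `cost_fn` would be the template-matching cost from Chang para 0082-0083.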

Prosecution Timeline

Feb 16, 2024 — Application Filed
May 06, 2025 — Non-Final Rejection (§103)
Aug 12, 2025 — Response Filed
Sep 02, 2025 — Final Rejection (§103)
Jan 02, 2026 — Request for Continued Examination
Jan 16, 2026 — Response after Non-Final Action
Mar 11, 2026 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603999
IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD
2y 5m to grant — Granted Apr 14, 2026
Patent 12593040
METHOD AND APPARATUS FOR IMPROVING PERFORMANCE OF NEURAL NETWORK FILTER BASED VIDEO CODING
2y 5m to grant — Granted Mar 31, 2026
Patent 12581074
CHROMA COMPONENT CODING
2y 5m to grant — Granted Mar 17, 2026
Patent 12569121
SLEEVE ASSEMBLY AND ENDOSCOPE DEVICE
2y 5m to grant — Granted Mar 10, 2026
Patent 12568220
METHOD OF REMOVING DEBLOCKING ARTIFACTS
2y 5m to grant — Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 98% (+15.7%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 641 resolved cases by this examiner. Grant probability derived from career allow rate.
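The headline figures can be reproduced from the stated counts. A quick check follows; how the tool combines the interview lift with the base rate is my assumption, not documented methodology.

```python
# Reproducing the dashboard's headline numbers from the stated counts.
granted, resolved = 531, 641

allow_rate = granted / resolved * 100   # career allow rate, in percent
print(round(allow_rate))                # -> 83, the stated Grant Probability

# The with-interview figure appears to add the +15.7-point lift to the
# base rate (an assumed methodology): ~98.5, displayed as 98%.
with_interview = allow_rate + 15.7
```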
