Prosecution Insights
Last updated: April 19, 2026
Application No. 18/857,329

EXTENDED TEMPLATE MATCHING FOR VIDEO CODING

Non-Final OA §103
Filed
Oct 16, 2024
Examiner
PEREZ FUENTES, LUIS M
Art Unit
2481
Tech Center
2400 — Computer Networks
Assignee
MediaTek Inc.
OA Round
1 (Non-Final)
83%
Grant Probability
Favorable
1-2
OA Rounds
2y 7m
To Grant
66%
With Interview

Examiner Intelligence

Grants 83% — above average
83%
Career Allow Rate
573 granted / 688 resolved
+25.3% vs TC avg
-17.8%
Interview Lift
Negative lift: allow rate is lower among resolved cases with an interview
Typical timeline
2y 7m
Avg Prosecution
31 currently pending
Career history
719
Total Applications
across all art units
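The headline figures on this card can be reproduced from the raw counts it shows. A minimal sketch, assuming the interview lift is an additive percentage-point adjustment (an assumption, but one consistent with the card's 83% and 66% figures up to rounding):

```python
# Reproduce the examiner card's headline figures from its raw counts.
granted, resolved = 573, 688   # "573 granted / 688 resolved"
interview_lift = -17.8         # percentage points, from the card

allow_rate = 100 * granted / resolved
with_interview = allow_rate + interview_lift

print(f"career allow rate: {allow_rate:.1f}%")     # ~83.3%, shown as 83%
print(f"with interview:    {with_interview:.1f}%") # ~65.5%, shown as 66%
```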

Statute-Specific Performance

§101
2.1%
-37.9% vs TC avg
§103
58.1%
+18.1% vs TC avg
§102
5.9%
-34.1% vs TC avg
§112
2.5%
-37.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 688 resolved cases
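The per-statute deltas above are mutually consistent: subtracting each delta from the corresponding rejection rate recovers a single Tech Center baseline, which matches the "Tech Center average estimate" caption. A minimal check (the 40% figure is inferred from the card's own numbers, not stated anywhere in the source):

```python
# Each statute's rejection rate minus its "vs TC avg" delta should
# recover the Tech Center baseline the dashboard compares against.
stats = {  # statute: (examiner rate %, delta vs TC avg, in points)
    "101": (2.1, -37.9),
    "103": (58.1, +18.1),
    "102": (5.9, -34.1),
    "112": (2.5, -37.5),
}
implied_baseline = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_baseline)  # every statute implies the same 40.0% estimate
```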

Office Action

§103
Detailed Office Action

1. This communication is filed in response to the submission having a mailing date of 10/16/2024, in which a three (3) month Shortened Statutory Period for Response has been set.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Acknowledgements

3. Upon initial entry, claims 1-17 appear pending for examination, of which claims 1, 13, 14, 15, and 16 are the five (5) parallel independent claims of record.

Information Disclosure Statement

4. The Information Disclosure Statement (IDS) submitted on 10/16/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Specification

5. The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Drawings

6. The drawings submitted on 10/16/2024 have been accepted and considered under 37 CFR 1.121(d).

35 U.S.C. § 103 Rejection

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

7.1. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), applied for establishing a background for determining obviousness under 35 U.S.C. 103, are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

7.2. Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Lin et al. ("Enhanced Template Matching in FRUC Mode"; hereafter "Lin") in view of Chen et al., US 10,701,366 B2 (hereafter "Chen").

[Claim 1] Lin discloses the invention substantially as claimed: A video coding method comprising (e.g. a contribution to JEM-4.0 employing bi-prediction motion vector template matching (TM) based on frame distortion, finding the best match between the current block (CB) and the reference picture (RP), as shown in Figs. 1-2):
- receiving data for a block of pixels to be encoded or decoded as a current block of a current picture of a video (e.g. see Fig. 2, based on a standard codec; [page 1]), wherein the current block is associated with first and second motion vectors that reference prediction samples in first and second reference pictures (e.g. see Fig. 2, based on a standard codec; [page 1]);
- generating a template based on an average of prediction samples referenced by the first and second motion vectors (e.g. see Fig. 2; average distortion is similarly determined; [pages 2-3]), wherein the template and the current block are different in size or shape (e.g. see enhanced TM for templates of equal sizes; [page 2]);
- searching the first reference picture to refine the first motion vector based on a matching cost between samples referred by the refined first motion vector and the samples of the template (e.g. see the MV search disclosed in Fig. 2; [sect. 1, pages 1-2]);
- searching the second reference picture to refine the second motion vector (e.g. see the RP search technique disclosed in Fig. 2; [sect. 1, pages 1-2]) based on a matching cost between samples referred by the refined second motion vector and the samples of the template (e.g. see the corresponding cost determination in Fig. 2; [page 3]);
- and using the refined first and second motion vectors to encode or decode the current block (e.g. see the MV refinement output for codec processing, Fig. 2; [page 3]).

One skilled in the art would assume that the difference between Lin's "Enhanced TM" and the instant "Extended TM" is associated with the size of the template used (i.e. codec standard definitions). It is also noted that no codec architecture is disclosed in Lin's paper. Taking the above into account, Lin teaches TM for same-size templates, and even though the paper discloses and enables the principles of the TM technique in detail, it clearly fails to disclose the particularities of the "Extended TM" as claimed.

For the purpose of additional clarification, and in the same field of endeavor, Chen discloses a codec ecosystem of the same kind (Fig. 1), including an encoder (20, Fig. 2) and a decoder (30, Fig. 3), in which bi-prediction motion vector template matching (TM) is used for deriving the MV of the target block (CB), as shown in at least Figs. 7-9 [Chen; Summary], with an extended template methodology (Figs. 14-15) [Chen; 28:65]. Chen similarly teaches how to calculate a weighted average of the MVs [Chen; 4:15; Cols. 13-14], the use of adaptive dependent size [31:53], and the use of different-size templates, as illustrated in at least Figs. 17-19 [Chen; 33:15]. Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify the paper of Lin with the full codec architecture of Chen, in order to provide codec efficiency and quality improvement when using any of template matching, bilateral template matching, etc. in the process [Chen; 13:25].

[Claim 2] Lin/Chen discloses: The video coding method of claim 1, wherein the template comprises a first section based on reconstructed samples neighboring the current block in the current picture and a second section based on an average of the initial prediction samples from the first reference picture and the initial prediction samples from the second reference picture (e.g. see Fig. 2 [Lin] and Figs. 9 and 20 [Chen] regarding template matching algorithm execution, including the MC section and the cost-averaging section, emphasis added; the same motivation applies herein).

[Claim 3] Lin/Chen discloses: The video coding method of claim 1, wherein the template corresponds to an area in the current picture that encompasses the first current block (e.g. see similar in Fig. 1 [Lin] and Fig. 7 [Chen]; the same motivation applies herein).

[Claim 4] Lin/Chen discloses: The video coding method of claim 1, wherein the template corresponds to an area in the current picture that is a sub-portion of the current block (e.g. see similar in Fig. 1 [Lin] and Fig. 7 [Chen]; the same motivation applies herein).
[Claim 5] Lin/Chen discloses: The video coding method of claim 1, wherein the template corresponds to an area in the current picture that is partly inside the current block and partly outside the current block, wherein the current block is partly outside of the area (e.g. see similar in Fig. 1 [Lin] and Fig. 7 [Chen], and the extended template in Figs. 14-15 [Chen]; the same motivation applies herein).

[Claim 6] Lin/Chen discloses: The video coding method of claim 1, wherein the template comprises a first template section and a second template section (e.g. see extended templates similarly used in Figs. 14-15 [Chen]), wherein the first template section is used to generate a first candidate refinement of the first motion vector and the second template section is used to generate a second candidate refinement of the first motion vector, wherein the first motion vector is refined based on the first and second candidate refinements (e.g. see Fig. 2 [Lin] and Figs. 9 and 20 [Chen] regarding template matching algorithm execution, including the MC section and the cost-averaging section, emphasis added; the same motivation applies herein).

[Claim 7] Lin/Chen discloses: The video coding method of claim 6, wherein one of the first and second candidate refinements of the first motion vector is selected as the refined first motion vector (see also MV refinement and cost determination in Fig. 2 [Lin] and Fig. 9 [Chen; 4:33; 24:35], respectively; the same motivation applies herein).

[Claim 8] Lin/Chen discloses: The video coding method of claim 1, wherein the template comprises two or more different template sections, wherein refining the first motion vector comprises computing a cost of the refined first motion vector based on weights assigned to the different template sections (the same rationale and motivation apply as given for Claim 1 above; see also MV refinement and cost determination in Fig. 2 [Lin] and Fig. 9 [Chen; 4:33; 24:35], respectively).
[Claim 9] Lin/Chen discloses: The video coding method of claim 1, further comprising receiving or signaling a selection of a configuration from a plurality of possible configurations for the template, wherein the template is generated according to the selected configuration (e.g. see a similar configuration, also signaled in the SPS/PPS/slice header [Chen; 13:15]; the same motivation applies herein).

[Claim 10] Lin/Chen discloses: The video coding method of claim 1, further comprising scaling the refined motion vectors according to a format of a chroma component and using the scaled motion vectors to fetch prediction samples of the chroma component (e.g. see scaling/resampling of the MV according to pixel/color data [Chen; 13:65; 38:13]; the same motivation applies herein).

[Claim 11] Lin/Chen discloses: The video coding method of claim 1, wherein refining the first and second motion vectors comprises iteratively updating the first and second motion vectors according to the template and regenerating the template based on the updated first or second motion vectors (e.g. see the vector being updated, regenerated, and replaced during best-match selection in at least Fig. 9 [Chen; 24:60; 25:05; 27:65]; the same motivation applies herein).

[Claim 12] Lin/Chen discloses: The video coding method of claim 11, wherein the template is regenerated based on the updated first motion vector and the regenerated template is used to update the second motion vector (e.g. see the vector being updated, regenerated, and replaced during best-match selection in at least Fig. 9 [Chen; 24:60; 25:05; 27:65]; the same motivation applies herein).
[Claim 13] Lin/Chen discloses: A video decoding method comprising: receiving data for a block of pixels to be decoded as a current block of a current picture of a video, wherein the current block is associated with first and second motion vectors that reference prediction samples in first and second reference pictures; generating a template based on an average of prediction samples referenced by the first and second motion vectors, wherein the template and the current block are different in size or shape; searching the first reference picture to refine the first motion vector based on a matching cost between samples referred by the refined first motion vector and the samples of the template; searching the second reference picture to refine the second motion vector based on a matching cost between samples referred by the refined second motion vector and the samples of the template; and using the refined first and second motion vectors to reconstruct the current block. (Claim 13 recites the same elements as Claim 1 above, but in decoder-method form, and therefore stands rejected on the same premise.)
[Claim 14] Lin/Chen discloses: A video encoding method comprising: receiving data for a block of pixels to be encoded as a current block of a current picture of a video, wherein the current block is associated with first and second motion vectors that reference prediction samples in first and second reference pictures; generating a template based on an average of prediction samples referenced by the first and second motion vectors, wherein the template and the current block are different in size or shape; searching the first reference picture to refine the first motion vector based on a matching cost between samples referred by the refined first motion vector and the samples of the template; searching the second reference picture to refine the second motion vector based on a matching cost between samples referred by the refined second motion vector and the samples of the template; and using the refined first and second motion vectors to encode the current block. (Claim 14 recites the same elements as Claim 1 above, but in encoder-method form, and therefore stands rejected on the same premise.)
[Claim 15] Lin/Chen discloses: An electronic apparatus comprising: a video coder circuit configured to perform operations comprising: receiving data for a block of pixels to be encoded or decoded as a current block of a current picture of a video, wherein the current block is associated with first and second motion vectors that reference prediction samples in first and second reference pictures; generating a template based on an average of prediction samples referenced by the first and second motion vectors, wherein the template and the current block are different in size or shape; searching the first reference picture to refine the first motion vector based on a matching cost between samples referred by the refined first motion vector and the samples of the template; searching the second reference picture to refine the second motion vector based on a matching cost between samples referred by the refined second motion vector and the samples of the template; and using the refined first and second motion vectors to encode or decode the current block. (Claim 15 recites the same elements as Claim 1 above, but in coding-apparatus form, and therefore stands rejected on the same premise.)
[Claim 16] Lin/Chen discloses: A video coding method comprising: receiving data for a block of pixels to be encoded or decoded as a current block of a current picture of a video, wherein the current block is associated with first and second motion vectors that reference prediction samples in first and second reference pictures; generating a template based on an average of prediction samples referenced by the first and second motion vectors; iteratively searching the first and second reference pictures to refine the first and second motion vectors, wherein in each iteration the first and second motion vectors are updated and the template is regenerated according to the updated first and second motion vectors; and using the refined first and second motion vectors to encode or decode the current block. (Claim 16 recites substantially the same elements as Claim 1 above, but in iterative coding-method form, and therefore stands rejected on the same premise.)

[Claim 17] Lin/Chen discloses: The video coding method of claim 16, wherein the template is regenerated based on the updated first motion vector and the regenerated template is used to update the second motion vector (e.g. see the vector being updated, regenerated, and replaced during best-match selection in at least Fig. 9 [Chen; 24:60; 25:05; 27:65]; the same motivation applies herein).

Prior Art Citations

11. The following prior art, made of record and not relied upon, is considered pertinent to applicant's disclosure:

11.1. Patent literature:
- US 2011/0176611 A1, Huang et al.; H04N19/46; H04N19/56; H04N19/523
- US 10,701,366 B2, Chen et al.; H04N19/44; H04N19/105; H04N19/573
- US 10,491,917 B2, Chen et al.; H04N19/44; H04N19/159; H04N19/513

11.2. Non-patent literature:
- Enhanced Template Matching in FRUC Mode; Jan. 2017.
- Decoder-Side Motion Vector Derivation with Switchable Template Matching; July 2010.

Conclusions

12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUIS PEREZ-FUENTES (luis.perez-fuentes@uspto.gov), whose telephone number is (571) 270-1168. The examiner can normally be reached Monday-Friday, 8am-5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, WILLIAM VAUGHN, can be reached at (571) 272-3922. The fax number for the organization where this application or proceeding is assigned is (571) 272-1168.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated system, please call (800) 786-9199 (USA or CANADA) or (571) 272-1000.

/LUIS PEREZ-FUENTES/
Primary Examiner, Art Unit 2481.
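Claim 1's core loop (averaging the two initial predictions into a template, then refining each motion vector by minimizing a matching cost against that template) can be illustrated in a few lines. A minimal sketch, assuming SAD as the matching cost, a small full-search window, integer-pel motion, and no bounds handling; all of these are simplifications for illustration, not the application's actual implementation:

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences: a common template-matching cost.
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def fetch(ref, mv, h, w):
    # Fetch the h x w prediction patch that motion vector mv points to.
    y, x = mv
    return ref[y:y + h, x:x + w]

def refine_mv(ref, template, mv, search_range=2):
    # Full search in a small window around the initial MV for the
    # position whose samples best match the template (lowest SAD).
    h, w = template.shape
    best_mv, best_cost = mv, sad(fetch(ref, mv, h, w), template)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            cand = (mv[0] + dy, mv[1] + dx)
            cost = sad(fetch(ref, cand, h, w), template)
            if cost < best_cost:
                best_mv, best_cost = cand, cost
    return best_mv

def bi_template_refine(ref0, ref1, mv0, mv1, h, w):
    # Claim-1-style flow: build the template as the (rounded) average
    # of the two initial predictions, then refine each MV against it.
    p0 = fetch(ref0, mv0, h, w).astype(np.int64)
    p1 = fetch(ref1, mv1, h, w).astype(np.int64)
    template = (p0 + p1 + 1) // 2
    return refine_mv(ref0, template, mv0), refine_mv(ref1, template, mv1)
```

Note that in the claim the template may differ from the current block in size or shape (the "extended" aspect the examiner maps to Chen's Figs. 14-15); this sketch uses an equal-size template purely to keep the indexing simple.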

Prosecution Timeline

Oct 16, 2024
Application Filed
Dec 13, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603987
METHOD, SYSTEM AND PROGRAM FOR DATA PROCESSING
2y 5m to grant Granted Apr 14, 2026
Patent 12598291
VIDEO SIGNAL PROCESSING METHOD USING OUT-OF-BOUNDARY BLOCK AND APPARATUS THEREFOR
2y 5m to grant Granted Apr 07, 2026
Patent 12598295
INTRA PREDICTION-BASED VIDEO ENCODING/DECODING METHOD AND DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12593060
IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, IMAGE DECODING APPARATUS, AND IMAGE CODING AND DECODING APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12587675
Pixel-Level Video Prediction with Improved Performance and Efficiency
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
66%
With Interview (-17.8%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 688 resolved cases by this examiner. Grant probability derived from career allow rate.
