Prosecution Insights
Last updated: April 19, 2026
Application No. 18/684,783

Method and Apparatus for Hardware-Friendly Template Matching in Video Coding System

Status: Non-Final OA (§103)
Filed: Feb 19, 2024
Examiner: GEROLEO, FRANCIS
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: MediaTek Inc.
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 8m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 73% (418 granted / 573 resolved), +20.9% vs TC avg — grants above average
Interview Lift: +19.3% on resolved cases with an interview — a strong lift
Typical Timeline: 2y 8m average prosecution; 49 applications currently pending
Career History: 622 total applications across all art units
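The headline figures above follow directly from the raw counts. A minimal sketch, assuming the page simply divides grants by resolved cases and adds the interview lift to the base rate (variable names are illustrative, not from any real API):

```python
granted = 418        # applications this examiner allowed
resolved = 573       # resolved cases (grants + abandonments)

career_allow_rate = granted / resolved
interview_lift = 0.193                    # +19.3% observed with interviews

with_interview = career_allow_rate + interview_lift

print(f"Career allow rate: {career_allow_rate:.0%}")   # → 73%
print(f"With interview:    {with_interview:.0%}")      # → 92%
```

Note that 418/573 is 72.9%, so the displayed 73% and 92% are rounded values.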

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)

Tech Center averages are estimates; based on career data from 573 resolved cases.
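Each delta is the examiner's rate minus the Tech Center average. A small sketch, where the TC-average values are back-computed from the deltas shown above (notably, they all come out to 40.0%, suggesting the page uses a single TC-wide estimate):

```python
# Examiner's rejection rate per statute (%) vs. Tech Center average (%).
# The TC averages here are assumptions back-computed from the deltas above.
examiner_rate = {"§101": 5.8, "§103": 53.4, "§102": 18.1, "§112": 12.3}
tc_average    = {"§101": 40.0, "§103": 40.0, "§102": 40.0, "§112": 40.0}

for statute, rate in examiner_rate.items():
    delta = rate - tc_average[statute]
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```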

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/25 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-11, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0007238 A1 ("Chen") in view of US 2024/0244222 A1 ("Deng") in further view of US 2019/0208223 A1 ("Galpin").

Regarding claim 1, Chen discloses a method of video coding, the method comprising: receiving input data associated with a current block of a video unit in a current picture (e.g., see at least providing motion vectors to motion compensation associated with a current block in a current picture, paragraphs [0134]-[0135]; also see at least reference pictures retrieved from the DPB, 218 in Fig. 9 and 314 in Fig. 10, paragraphs [0137], [0157]); applying motion compensation to the current block according to an initial motion vector (MV) to obtain initial pixel predictors of the current block (e.g., see at least motion compensation, 224 in Fig. 9 and 316 in Fig. 10, to generate a prediction block using motion vectors, paragraphs [0136]-[0142]); applying template-matching MV refinement to the current block to obtain a refined MV for the current block (e.g., see at least template matching of a current CU shown in Fig. 4 to refine the MV, paragraphs [0089]-[0094], [0136]-[0137]); and encoding or decoding the current block using information including the refined MV (e.g., see the video encoder in Fig. 9 or the video decoder in Fig. 10).

Although Chen discloses applying template-matching MV refinement to the current block to obtain a refined MV for the current block, Chen differs from the present invention in that it fails to particularly disclose doing so after said applying the motion compensation to the current block. Deng, however, teaches refinement after said applying the motion compensation to the current block (e.g., see at least motion refinement after motion compensation, paragraph [0290]). Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Chen and Deng before him/her, to modify Chen's use of unrefined motion vectors for performing decoder-side motion vector derivation with Deng in order to increase efficiency for higher coding gain.

Further, although Chen discloses encoding or decoding the current block using information including the refined MV, Chen differs from the present invention in that it fails to particularly disclose wherein said encoding or decoding the current block comprises adjusting the initial pixel predictors based on information including an MV difference (MVD) between the refined MV and the initial MV to generate adjusted pixel predictors. Galpin, however, teaches wherein said encoding or decoding the current block comprises adjusting the initial pixel predictors (e.g., see at least motion compensated predictors in 827 in Fig. 8, paragraphs [0106], [0109]-[0110]) based on information including an MV difference (MVD) between the refined MV and the initial MV (e.g., see at least MVDrefine in 825 in Fig. 8, paragraphs [0106], [0109]-[0110]) to generate adjusted pixel predictors (e.g., see at least motion compensation in 885 in Fig. 8, paragraphs [0106], [0109]-[0110]). Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Chen, Deng, and Galpin before him/her, to incorporate the teachings of Galpin into Chen as modified by Deng in order to improve motion accuracy and compression efficiency.

Regarding claim 6, Chen further discloses wherein a bounding box in a reference picture is selected to restrict the template-matching MV refinement (e.g., see at least the search range constraint imposed on DMVD techniques such as TM to limit reference samples fetched from reference frames to within a bounding box, paragraph [0123] and Fig. 11) and/or the motion compensation to use only reference pixels within the bounding box (e.g., see motion compensation determining a bounding box to retrieve reference samples of reference pictures, paragraph [0137] and Fig. 11).

Regarding claim 7, Chen further discloses wherein the bounding box is equal to a region required for the motion compensation (e.g., see at least the bounding box size, paragraphs [0137]-[0139], and the overlapped area in Fig. 11, paragraphs [0195]-[0199]).

Regarding claim 8, Chen further discloses wherein the bounding box is larger than a region required for the motion compensation (e.g., see at least the bounding box size, paragraphs [0137]-[0139], and the actual search range in Fig. 11, paragraphs [0195]-[0199]).

Regarding claim 9, Chen further discloses wherein the bounding box is larger than the region by a pre-defined size (e.g., see at least the bounding box size, paragraphs [0137]-[0139], and the actual search range in Fig. 11, paragraphs [0195]-[0199]).

Regarding claim 10, Chen further discloses wherein if a target reference pixel for the template-matching MV refinement and/or the motion compensation is outside the bounding box, a padded value is used for the target reference pixel (e.g., see at least reference sample padding, paragraphs [0194], [0199]).

Regarding claim 11, Chen further discloses wherein if a target reference pixel for the template-matching MV refinement and/or the motion compensation is outside the bounding box, the target reference pixel is skipped (e.g., see at least the bounding box size, paragraphs [0137]-[0139], and Fig. 11, paragraphs [0195]-[0199]; a reference pixel outside the bounding box is not used, i.e., skipped).

Regarding claim 13, Chen further discloses wherein the initial MV corresponds to a non-refined MV (e.g., see at least unrefined motion vector, paragraphs [0136]-[0142]).

Regarding claim 14, the claim recites limitations analogous to the claim above and is therefore rejected on the same basis.

Response to Arguments

Applicant's arguments filed 12/10/25 have been fully considered but are not persuasive. Applicant asserts on pages 6-8 of the Remarks that Galpin fails to teach "adjusting the initial pixel predictors based on information including an MV difference (MVD) between the refined MV and the initial MV to generate adjusted pixel predictors." The examiner respectfully disagrees. At least Fig. 8 and paragraphs [0106], [0109]-[0110] of Galpin teach "adjusting the initial pixel predictors based on information including an MV difference (MVD) between the refined MV and the initial MV to generate adjusted pixel predictors," as mapped above. That is, in step 885, the initial pixel predictors (e.g., without the refined MV in 852) are adjusted based on the MV refinement in 825 if RDcost(MV*) is less than RDcost(MV1). Thus, the adjusted pixel predictors, i.e., the pixel predictors in 885 used to compute the residuals, will be according to 827, effectively adjusting the initial pixel predictors (e.g., in 852). Therefore, the limitations are met by the cited prior art as a whole in the broadest reasonable sense.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 2024/0380922 A1, Deng et al., GPM Motion Refinement
US 2022/0201315 A1, Zhang et al., Multi-Pass Decoder-Side Motion Vector Refinement
US 2023/0171421 A1, Galpin et al., Motion Refinement Using a Deep Neural Network

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANCIS G GEROLEO, whose telephone number is (571) 270-7206. The examiner can normally be reached M-F, 7:00 am - 3:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anna M Momper, can be reached at (571) 270-5788.
/Francis Geroleo/
Primary Examiner, Art Unit 3619
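The claim-1 pipeline that the rejection maps (motion compensation from an initial MV, template-matching refinement, then use of the MV difference between the refined and initial MVs) can be sketched roughly as follows. This is an illustrative NumPy mock-up, not the applicant's or any cited reference's actual algorithm: the integer-pel SAD search, the single-patch template, and every function name are assumptions; real codecs match against top/left neighbour templates and interpolate at sub-pel positions.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two pixel patches."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def tm_refine(ref, template, init_mv, search=2):
    """Template-matching refinement: scan an integer-pel window around the
    initial MV and return the position whose reference patch best matches
    the template (lowest SAD)."""
    h, w = template.shape
    best_mv, best_cost = init_mv, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = init_mv[0] + dy, init_mv[1] + dx
            cost = sad(ref[y:y + h, x:x + w], template)
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = (y, x), cost
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

init_mv = (8, 8)
template = ref[9:13, 7:11].copy()   # pretend the true match is 1 pel away

refined_mv = tm_refine(ref, template, init_mv)
mvd = (refined_mv[0] - init_mv[0], refined_mv[1] - init_mv[1])

# The initial predictors come from the initial MV; the refined predictors
# are fetched at init_mv + MVD, i.e. "adjusted" by the MV difference.
adjusted = ref[init_mv[0] + mvd[0]:init_mv[0] + mvd[0] + 4,
               init_mv[1] + mvd[1]:init_mv[1] + mvd[1] + 4]

print(refined_mv, mvd)   # (9, 7) with MVD (1, -1) for this synthetic data
```

The bounding-box limitations of claims 6-11 would correspond to clamping or padding the `ref[y:y + h, x:x + w]` fetches in this sketch so that the search never reads outside a pre-defined region.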

Prosecution Timeline

Feb 19, 2024 — Application Filed
Apr 28, 2025 — Non-Final Rejection (§103)
Jul 31, 2025 — Response Filed
Aug 11, 2025 — Final Rejection (§103)
Dec 10, 2025 — Request for Continued Examination
Dec 20, 2025 — Response after Non-Final Action
Jan 05, 2026 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591065 — DISTANCE MEASUREMENT DEVICE AND DISTANCE MEASUREMENT SYSTEM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581109 — METHOD FOR ENCODING AND DECODING IMAGE INFORMATION AND DEVICE USING SAME (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574501 — METHOD, AND APPARATUS FOR REFERENCE FRAME SELECTION, ELECTRONIC DEVICE, AND STORAGE MEDIUM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12568223 — RESTRICTIONS ON DECODER SIDE MOTION VECTOR DERIVATION BASED ON CODING INFORMATION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12563202 — METHOD AND APPARATUS FOR VIDEO INTRA PREDICTION INVOLVING FILTERING REFERENCE SAMPLES (granted Feb 24, 2026; 2y 5m to grant)
Studying what changed in these five most recent grants can show how applications get past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 92% (+19.3%)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 573 resolved cases by this examiner. Grant probability derived from career allow rate.
