DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/25 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6-11, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0007238 A1 (“Chen”) in view of US 2024/0244222 A1 (“Deng”) and further in view of US 2019/0208223 A1 (“Galpin”).
Regarding claim 1, Chen discloses a method of video coding, the method comprising: receiving input data associated with a current block of a video unit in a current picture (e.g. see at least providing motion vectors to motion compensation associated with a current block in a current picture, paragraphs [0134]-[0135]; also see at least reference pictures from DPB, 218 in Fig. 9 and 314 in Fig. 10, are retrieved, paragraphs [0137], [0157]); applying motion compensation to the current block according to an initial motion vector (MV) to obtain initial pixel predictors of the current block (e.g. see at least motion compensation, e.g. see 224 in Fig. 9 and 316 in Fig. 10, to generate a prediction block using motion vectors, paragraphs [0136]-[0142]); applying template-matching MV refinement to the current block to obtain a refined MV for the current block (e.g. see at least template matching of a current CU shown in Fig. 4 to refine MV, paragraphs [0089]-[0094], [0136]-[0137]); and encoding or decoding the current block using information including the refined MV (e.g. see video encoder in Fig. 9 or video decoder in Fig. 10).
Although Chen discloses applying template-matching MV refinement to the current block to obtain a refined MV for the current block, it is noted Chen differs from the present invention in that it fails to particularly disclose that said refinement is applied after said applying the motion compensation to the current block. Deng, however, teaches refinement performed after said applying the motion compensation to the current block (e.g. see at least motion refinement after motion compensation, paragraph [0290]).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Chen and Deng before him/her, to modify Chen's use of unrefined motion vectors for performing decoder-side motion vector derivation with the teachings of Deng in order to increase coding efficiency and achieve higher coding gain.
Further, although Chen discloses encoding or decoding the current block using information including the refined MV, it is noted Chen differs from the present invention in that it fails to particularly disclose wherein said encoding or decoding the current block comprises adjusting the initial pixel predictors based on information including an MV difference (MVD) between the refined MV and the initial MV to generate adjusted pixel predictors. Galpin, however, teaches wherein said encoding or decoding the current block comprises adjusting the initial pixel predictors (e.g. see at least motion compensated predictors in 827 in Fig. 8, paragraphs [0106], [0109]-[0110]) based on information including an MV difference (MVD) between the refined MV and the initial MV (e.g. see at least MVDrefine in 825 in Fig. 8, paragraphs [0106], [0109]-[0110]) to generate adjusted pixel predictors (e.g. see at least motion compensation in 885 in Fig. 8, paragraphs [0106], [0109]-[0110]).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Chen, Deng and Galpin before him/her, to incorporate the teachings of Galpin into Chen's use of unrefined motion vectors for performing decoder-side motion vector derivation, as modified by Deng, in order to improve motion accuracy and compression efficiency.
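For illustration only, the claim 1 flow as mapped above (initial motion compensation, template-matching MV refinement, then adjusting the predictors by the MVD) can be sketched as a toy integer-pixel model. All function names, the SAD cost, the one-row template, and the simple re-fetch adjustment are assumptions of this sketch, not the method of Chen, Deng, or Galpin.

```python
# Illustrative sketch only: a toy, integer-pixel model of the claimed flow.
# The block layout, cost metric, and adjustment rule are hypothetical
# simplifications, not taken from the cited references.

def motion_compensate(ref, mv, x, y, w, h):
    """Fetch a w-by-h predictor block from `ref` at (x, y) displaced by mv."""
    dx, dy = mv
    return [[ref[y + dy + r][x + dx + c] for c in range(w)] for r in range(h)]

def template_cost(ref, mv, template, x, y):
    """SAD between a one-row (above-block) template and the co-located
    reference row displaced by mv."""
    dx, dy = mv
    row = ref[y + dy - 1][x + dx : x + dx + len(template)]
    return sum(abs(a - b) for a, b in zip(template, row))

def refine_mv(ref, init_mv, template, x, y, search=1):
    """Template-matching refinement: full search in a small window
    around the initial MV, keeping the lowest-cost candidate."""
    best_mv = init_mv
    best_cost = template_cost(ref, init_mv, template, x, y)
    for ddx in range(-search, search + 1):
        for ddy in range(-search, search + 1):
            cand = (init_mv[0] + ddx, init_mv[1] + ddy)
            cost = template_cost(ref, cand, template, x, y)
            if cost < best_cost:
                best_mv, best_cost = cand, cost
    return best_mv

def adjust_predictors(initial_pred, ref, init_mv, refined_mv, x, y):
    """Adjust the initial predictors using the MVD (refined minus initial);
    modeled here as re-fetching at the refined position when the MVD is
    nonzero."""
    mvd = (refined_mv[0] - init_mv[0], refined_mv[1] - init_mv[1])
    if mvd == (0, 0):
        return initial_pred
    h, w = len(initial_pred), len(initial_pred[0])
    return motion_compensate(ref, refined_mv, x, y, w, h)
```

In this toy model the MVD only gates a re-fetch; an actual codec would apply a finer correction, but the ordering (compensation first, refinement after, adjustment by the MVD) mirrors the claim mapping above.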
Regarding claim 6, Chen further discloses wherein a bounding box in a reference picture is selected to restrict the template-matching MV refinement (e.g. see at least search range constraint imposed on DMVD techniques such as TM to limit reference samples fetched from reference frames within a bounding box, paragraph [0123] and Fig. 11) and/or the motion compensation to use only reference pixels within the bounding box (e.g. see motion compensation determines a bounding box to retrieve reference samples of reference pictures, paragraph [0137] and Fig. 11).
Regarding claim 7, Chen further discloses wherein the bounding box is equal to a region required for the motion compensation (e.g. see at least bounding box size, paragraphs [0137]-[0139], and see at least overlapped area in Fig. 11, paragraphs [0195]-[0199]).
Regarding claim 8, Chen further discloses wherein the bounding box is larger than a region required for the motion compensation (e.g. see at least bounding box size, paragraphs [0137]-[0139], and see at least actual search range in Fig. 11, paragraphs [0195]-[0199]).
Regarding claim 9, Chen further discloses wherein the bounding box is larger than the region by a pre-defined size (e.g. see at least bounding box size, paragraphs [0137]-[0139], and see at least actual search range in Fig. 11, paragraphs [0195]-[0199]).
Regarding claim 10, Chen further discloses wherein if a target reference pixel for the template-matching MV refinement and/or the motion compensation is outside the bounding box, a padded value is used for the target reference pixel (e.g. see at least reference sample padding, paragraphs [0194], [0199]).
Regarding claim 11, Chen further discloses wherein if a target reference pixel for the template-matching MV refinement and/or the motion compensation is outside the bounding box, the target reference pixel is skipped (e.g. see at least bounding box size, paragraphs [0137]-[0139], and see Fig. 11, paragraphs [0195]-[0199]; the reference pixel outside the bounding box is not used or skipped).
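For illustration only, the two out-of-box policies addressed in claims 10 and 11 (padding a reference sample versus skipping it) can be sketched as follows. The bounding-box representation, the edge-clamp padding rule, and the function names are assumptions of this sketch, not the disclosure of Chen.

```python
# Illustrative sketch only: restricting reference fetches to a bounding box,
# with two hypothetical out-of-box policies mirroring the claim language:
# "pad" (substitute a padded value, cf. claim 10) and "skip" (exclude the
# sample, cf. claim 11). Box geometry and clamping rule are assumptions.

def fetch_with_bbox(ref, x, y, bbox, policy="pad"):
    """Return the sample at (x, y) if inside bbox; otherwise apply policy."""
    x0, y0, x1, y1 = bbox  # inclusive bounds
    if x0 <= x <= x1 and y0 <= y <= y1:
        return ref[y][x]
    if policy == "pad":
        # Clamp to the nearest in-box position (simple edge padding).
        cx = min(max(x, x0), x1)
        cy = min(max(y, y0), y1)
        return ref[cy][cx]
    return None  # "skip": caller excludes this sample from the cost

def sad_within_bbox(ref, template, x, y, mv, bbox, policy="pad"):
    """SAD over a one-row template, honoring the bounding-box policy."""
    dx, dy = mv
    cost = 0
    for i, t in enumerate(template):
        s = fetch_with_bbox(ref, x + dx + i, y + dy, bbox, policy)
        if s is not None:  # skipped samples contribute nothing
            cost += abs(t - s)
    return cost
```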
Regarding claim 13, Chen further discloses wherein the initial MV corresponds to a non-refined MV (e.g. see at least unrefined motion vector, paragraphs [0136]-[0142]).
Regarding claim 14, the claim recites limitations analogous to those addressed above and is therefore rejected on the same grounds.
Response to Arguments
Applicant's arguments filed 12/10/25 have been fully considered but they are not persuasive.
Applicant asserts on pages 6-8 of the Remarks that Galpin fails to teach “adjusting the initial pixel predictors based on information including an MV difference (MVD) between the refined MV and the initial MV to generate adjusted pixel predictors.”
However, the examiner respectfully disagrees. At least Fig. 8 and paragraphs [0106], [0109]-[0110] of Galpin teach “adjusting the initial pixel predictors based on information including an MV difference (MVD) between the refined MV and the initial MV to generate adjusted pixel predictors” as mapped above. That is, in step 885, the initial pixel predictors, e.g. those without the refined MV in 852, are adjusted based on the MV refinement in 825 if RDcost(MV*) is less than RDcost(MV1). Thus, the adjusted pixel predictors, i.e. the pixel predictors used in 885 to compute the residuals, will be according to 827, effectively adjusting the initial pixel predictors, e.g. those in 852. Therefore, the limitations are met by the cited prior art as a whole under the broadest reasonable interpretation.
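For illustration only, the selection logic described above, in which the refined vector is adopted only when it lowers the rate-distortion cost, can be sketched as below. The function name and the cost callable are hypothetical stand-ins, not the actual RD cost of Galpin.

```python
# Illustrative sketch only: adopt the refined MV only when it improves a
# rate-distortion cost; `rd_cost` is a hypothetical stand-in callable,
# not the cost function of Galpin.

def select_mv(rd_cost, mv_initial, mv_refined):
    """Keep the refined MV only when RDcost(MV*) < RDcost(MV1)."""
    if rd_cost(mv_refined) < rd_cost(mv_initial):
        return mv_refined
    return mv_initial
```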
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2024/0380922 A1, Deng et al., GPM Motion Refinement
US 2022/0201315 A1, Zhang et al., Multi-Pass Decoder-Side Motion Vector Refinement
US 2023/0171421 A1, Galpin et al., Motion Refinement Using a Deep Neural Network
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANCIS G GEROLEO whose telephone number is (571)270-7206. The examiner can normally be reached M-F 7:00 am - 3:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anna M Momper can be reached on (571) 270-5788. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Francis Geroleo/Primary Examiner, Art Unit 3619