DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 40-46, 48, 49, 52-55, 57, and 58 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li et al. (US 2019/0014342).
As to claim 40, Li discloses a video decoding device (FIG. 1), comprising:
a processor (see [0071]) configured to:
obtain motion information associated with a first block (FIGS. 15-16, step 702 and Block 1; see [0042], [0124]);
partially reconstruct the first block based on the motion information associated with the first block (see [0123], [0176]), wherein the motion information associated with the first block used to partially reconstruct the first block is unrefined motion information (see [0112], initial motion vector; see FIG. 11B and [0114]);
refine the unrefined motion information associated with the first block after partially reconstructing the first block (see [0113], [0135]); and
reconstruct the first block based on the refined motion information (see [0179]);
obtain a template of a second block based on the partially reconstructed first block (FIG. 16, Block 2; see [0178]); and
reconstruct the second block using a template-based coding tool based on the obtained template of the second block (FIG. 15, Full Reconstruction; see [0168], [0180], template matching).
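For orientation only, the decoding order mapped above for claim 40 (partial reconstruction of the first block from unrefined motion information, template derivation for the second block from that partial reconstruction, then refinement and full reconstruction of the first block) can be sketched as follows. This is an illustrative sketch of the claimed ordering, not Li's actual implementation; all function and variable names are hypothetical.

```python
# Illustrative sketch of the claim-40 decoding order; every name here is
# hypothetical and does not come from Li or the claims.

def decode_first_block(initial_mv, refine, motion_compensate, residual):
    """Return (template for the second block, fully reconstructed first block)."""
    # Partially reconstruct the first block from the UNREFINED motion information.
    partial = motion_compensate(initial_mv)
    # The second block's template is obtained from the partial reconstruction,
    # so template-based decoding of the second block need not wait for refinement.
    template_for_second = partial
    # Refine the motion information only after the partial reconstruction.
    refined_mv = refine(initial_mv)
    # Fully reconstruct the first block from the REFINED motion information.
    full = motion_compensate(refined_mv) + residual
    return template_for_second, full
```

The point of the ordering is that the template is fixed before refinement runs, which is what permits the parallel reconstruction discussed for claims 43-45 below.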
As to claim 41, Li further discloses wherein the template of the second block comprises partially reconstructed samples of the first block that neighbor the second block (see [0123]).
As to claim 42, Li further discloses wherein partially reconstructing the first block comprises:
obtaining a prediction of the first block based on the motion information associated with the first block, wherein the template of the second block comprises a plurality of predicted samples of the first block (FIG. 15 and [0123]).
As to claim 43, Li further discloses wherein partially reconstructing the first block comprises: obtaining a prediction of the first block based on the motion information associated with the first block, wherein the template of the second block comprises a plurality of predicted samples of the first block, and the second block is decoded using the template-based coding tool based on the plurality of predicted samples of the first block (FIG. 15 and [0123]), and
wherein the processor is further configured to:
reconstruct the first block based on the prediction of the first block and a residual of the first block (see [0123]), wherein the reconstruction of the second block is performed in parallel with the reconstruction of the first block (FIG. 16; see [0042]).
As to claim 44, Li further discloses wherein reconstructing the second block using the template-based coding tool based on the obtained template of the second block is initiated before the first block is fully reconstructed (FIG. 16; see [0042], [0126]-[0127]).
As to claim 45, Li further discloses wherein the reconstruction of the second block is performed in parallel with the reconstruction of the first block (FIG. 16; see [0042]).
As to claim 46, Li further discloses wherein the processor is further configured to: fully reconstruct the first block, wherein the second block is reconstructed independent of the fully reconstructed first block (FIG. 15; see [0179]).
As to claim 48, Li discloses a video encoding device (FIG. 1), comprising:
a processor (see [0071]) configured to:
obtain motion information associated with a first block (FIGS. 15-16, step 702 and Block 1; see [0042], [0124]);
partially reconstruct the first block based on the motion information associated with the first block (see [0123], [0184]), wherein the motion information associated with the first block used to partially reconstruct the first block is unrefined motion information (see [0112], initial motion vector; see FIG. 11B and [0114]);
refine the unrefined motion information associated with the first block after partially reconstructing the first block (see [0113], [0135]);
encode the first block based on the refined motion information (see [0156], [0187]);
obtain a template of a second block based on the partially reconstructed first block (FIG. 16, Block 2; see [0186]); and
encode the second block using a template-based coding tool based on the obtained template of the second block (see [0156], [0187], template matching).
As to claim 49, Li further discloses wherein the processor is further configured to: determine whether to perform fast template reconstruction for the second block (see [0172], [0181]); and
based on a determination to perform fast template reconstruction for the second block, include an indication that indicates to enable fast template reconstruction in video data (see [0172], [0187]).
As to claim 52, Li further discloses wherein the processor is further configured to:
obtain a prediction of the first block based on the motion information associated with the first block, wherein the template of the second block comprises a plurality of predicted samples of the first block and the second block is encoded using the template-based coding tool based on the plurality of predicted samples of the first block in the template of the second block (FIG. 15 and [0123]); and
encode the first block based on the prediction of the first block and a residual of the first block (see [0123]), wherein the encoding of the second block is performed in parallel with the reconstruction of the first block (FIG. 16; see [0042]).
As to claim 53, Li further discloses wherein the template of the second block comprises partially reconstructed samples of the first block that neighbor the second block (see [0123]), and wherein encoding the second block using the template-based coding tool based on the obtained template of the second block is initiated before the first block is fully reconstructed (FIG. 16; see [0042], [0126]-[0127]).
As to claim 54, Li discloses a method (FIG. 21), the method comprising:
obtaining motion information associated with a first block (FIGS. 15-16, step 702 and Block 1; see [0042], [0124]);
partially reconstructing the first block based on the motion information associated with the first block (see [0123], [0184]), wherein the motion information associated with the first block used to partially reconstruct the first block is unrefined motion information (see [0112], initial motion vector; see FIG. 11B and [0114]);
refining the unrefined motion information associated with the first block after partially reconstructing the first block (see [0113], [0135]);
encoding the first block based on the refined motion information (see [0156], [0187]);
obtaining a template of a second block based on the partially reconstructed first block (FIG. 16, Block 2; see [0186]); and
encoding the second block using a template-based coding tool based on the obtained template of the second block (see [0156], [0187], template matching).
As to claim 55, Li further discloses the method further comprising:
determining whether to perform fast template reconstruction for the second block (see [0172], [0181]); and
based on a determination to perform fast template reconstruction for the second block, including an indication that indicates to enable fast template reconstruction in video data (see [0172], [0187]).
As to claim 57, Li further discloses wherein the method further comprises:
obtaining a prediction of the first block based on the motion information associated with the first block, wherein the template of the second block comprises a plurality of predicted samples of the first block and the second block is encoded using the template-based coding tool based on the plurality of predicted samples of the first block in the template of the second block (FIG. 15 and [0123]); and
encoding the first block based on the prediction of the first block and a residual of the first block (see [0123]).
As to claim 58, Li further discloses wherein the template of the second block comprises partially reconstructed samples of the first block that neighbor the second block (see [0123]), wherein encoding the second block using the template-based coding tool based on the obtained template of the second block is initiated before the first block is fully reconstructed (FIG. 16; see [0042], [0126]-[0127]), and wherein the encoding of the second block is performed in parallel with the reconstruction of the first block (FIG. 16; see [0042]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 47, 51, and 56 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 2019/0014342) in view of Chang et al. (US 12,316,868).
As to claim 47, Li further discloses wherein the processor is further configured to:
obtain a motion vector candidate list for the first block, wherein the motion information associated with the first block is obtained based on the motion vector candidate list (see [0078], [0112]).
Li fails to explicitly disclose re-order the motion vector candidate list for the first block based on adaptive reordering of merge candidates (ARMC); and
reconstruct the first block based on the re-ordered motion vector candidate list for the first block.
However, Chang teaches re-order the motion vector candidate list for the first block based on adaptive reordering of merge candidates (ARMC) (col. 21, lines 10-19); and
reconstruct the first block based on the re-ordered motion vector candidate list for the first block (col. 40, lines 42-50).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Li with Chang's teachings to re-order the motion vector candidate list for the first block based on adaptive reordering of merge candidates (ARMC) and to reconstruct the first block based on the re-ordered motion vector candidate list for the first block, in order to improve video coding (Chang; col. 1, lines 52-58).
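For orientation only, the adaptive reordering of merge candidates taught by Chang and relied on above can be sketched as sorting the candidate list by a matching cost. This is an illustrative sketch; the cost function below is a hypothetical stand-in and is not Chang's actual reordering criterion.

```python
# Illustrative sketch of ARMC-style candidate reordering; the cost function
# is a hypothetical placeholder, not Chang's actual criterion.

def reorder_candidates(candidates, template_cost):
    """Sort the merge candidate list by ascending template-matching cost,
    so that lower-cost candidates receive shorter merge indices."""
    return sorted(candidates, key=template_cost)
```

The motion information for the block is then obtained from the re-ordered list rather than the original construction order.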
As to claim 51, Li further discloses wherein the processor is further configured to: obtain a motion vector candidate list for the first block, wherein the motion information associated with the first block is obtained based on the motion vector candidate list (see [0078], [0112]).
Li fails to explicitly disclose re-order the motion vector candidate list for the first block based on adaptive reordering of merge candidates (ARMC); and
encode the first block based on the re-ordered motion vector candidate list for the first block.
However, Chang teaches re-order the motion vector candidate list for the first block based on adaptive reordering of merge candidates (ARMC) (col. 21, lines 10-19); and
encode the first block based on the re-ordered motion vector candidate list for the first block (col. 36, lines 43-46).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Li with Chang's teachings to re-order the motion vector candidate list for the first block based on adaptive reordering of merge candidates (ARMC) and to encode the first block based on the re-ordered motion vector candidate list for the first block, in order to improve video coding (Chang; col. 1, lines 52-58).
As to claim 56, Li further discloses wherein the method further comprises: obtaining a motion vector candidate list for the first block, wherein the motion information associated with the first block is obtained based on the motion vector candidate list (see [0078], [0112]).
Li fails to explicitly disclose re-ordering the motion vector candidate list for the first block based on adaptive reordering of merge candidates (ARMC); and
encoding the first block based on the re-ordered motion vector candidate list for the first block.
However, Chang teaches re-ordering the motion vector candidate list for the first block based on adaptive reordering of merge candidates (ARMC) (col. 21, lines 10-19); and
encoding the first block based on the re-ordered motion vector candidate list for the first block (col. 36, lines 43-46).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Li with Chang's teachings to re-order the motion vector candidate list for the first block based on adaptive reordering of merge candidates (ARMC) and to encode the first block based on the re-ordered motion vector candidate list for the first block, in order to improve video coding (Chang; col. 1, lines 52-58).
Response to Arguments
Applicant's arguments filed on 01/08/2026 have been fully considered but they are not persuasive.
Applicant argues that Li does not disclose “the motion information associated with the first block used to partially reconstruct the first block is unrefined motion information, and reconstruct the first block based on the refined motion information.” Applicant specifically argues that there is no teaching or suggestion in Li of using unrefined motion information associated with the first block to partially reconstruct the first block to obtain a template for a second block, while using refined motion information to reconstruct the first block itself. The examiner respectfully disagrees.
Li discloses in [0112]-[0113] that an initial motion vector is first derived for the whole CU based on bilateral matching or template matching ... a local search based on bilateral matching or template matching around the starting point is then performed, and the MV that results in the minimum matching cost is taken as the MV for the whole CU ... the motion information is then refined at the sub-block level with the derived CU motion vectors as the starting points.
Li discloses in [0135] that different weighting can be applied to the top and the left templates when one of them employs partial reconstruction samples ... such a weight can be applied to the SAD value of the left template to further refine the motion vector.
Li discloses in [0176] that video decoder 30 applies motion compensation to motion vector information for the neighboring block to generate a partial reconstruction of the neighboring block without applying bi-directional optical flow or OBMC [i.e., unrefined motion information].
Li discloses in FIG. 15 and [0123] that video encoder 20 and/or video decoder 30 may apply motion compensation to motion vector information for the neighboring block to generate the partial reconstruction of the neighboring block ... and determine the template based on residual sample values for the neighboring block and the partial reconstruction of the neighboring block.
Li discloses in [0178] that video decoder 30 determines a template for the current block based on the partial reconstruction of the neighboring block (810).
Li further discloses in FIG. 15 and [0179] that video decoder 30 may fully reconstruct the neighboring block by applying motion compensation, bi-directional optical flow, and overlapped block motion compensation [i.e., refined motion information].
Therefore, Li discloses “the motion information associated with the first block used to partially reconstruct the first block is unrefined motion information, and reconstruct the first block based on the refined motion information.”
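For orientation only, the two-stage derivation quoted from Li [0112]-[0113] above (an initial motion vector is derived, then a local search around that starting point keeps the candidate with the minimum matching cost) can be sketched as follows. This is an illustrative sketch, not Li's implementation; the cost function and search offsets are hypothetical placeholders.

```python
# Illustrative sketch of the two-stage derivation in Li [0112]-[0113]:
# start from an initial MV, then take the local-search candidate with
# the minimum matching cost. Names and offsets are hypothetical.

def refine_mv(initial_mv, matching_cost, search_offsets):
    """Local search around the starting point; return the minimum-cost MV."""
    best_mv, best_cost = initial_mv, matching_cost(initial_mv)
    for dx, dy in search_offsets:
        cand = (initial_mv[0] + dx, initial_mv[1] + dy)
        cost = matching_cost(cand)
        if cost < best_cost:
            best_mv, best_cost = cand, cost
    return best_mv
```

In the examiner's mapping, the partial reconstruction of the first block uses `initial_mv` (unrefined), while the full reconstruction uses the output of the refinement stage.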
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BOUBACAR ABDOU TCHOUSSOU whose telephone number is (571)272-7625. The examiner can normally be reached M-F 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BOUBACAR ABDOU TCHOUSSOU/Primary Examiner, Art Unit 2482