Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is in response to the remarks entered on 10/15/2025.
Claims 1-3, 5-8, 10 & 12-13 are pending in the instant application.
Claims 1, 5-6 & 13 are amended.
Claims 4, 9 & 11 are cancelled.
Response to Arguments
Applicant's remarks filed 10/15/2025, pages 8-9, regarding the objection to the Abstract and the rejections of claim 13 under 35 USC 101 and 102(a)(1) have been fully considered and are persuasive. The objection and rejections identified above are withdrawn.
Applicant's remarks filed 10/15/2025, pages 9-11, regarding the rejection of claim 1, and similarly claims 6 & 13, under 35 USC 103 have been fully considered, but are moot in view of the new ground(s) of rejection made under 35 U.S.C. § 103 as being unpatentable over De LaGrange et al. (WO 2020/123442 A1) (hereinafter LaGrange) in view of Zhang et al., “EE2: Bilateral and template matching AMVP-merge mode (test3.3).,” JVET-X0083-V1, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, 24th Meeting, pp. 1-3, 30 September 2021 (hereinafter Zhang), and further in view of Liu et al. (US 2018/0332312 A1) (hereinafter Liu), as outlined below.
In response to Applicant’s remark that Examiner’s previously-cited references do not show the Applicant’s newly-recited claim limitations, the Examiner directs Applicant’s attention to the rejection of claims 1, 6 & 13 below, wherein Applicant’s newly-recited limitations are addressed by Liu and are rejected for the reasons as outlined below.
Furthermore, Applicant asserts that Zhang does not teach that the merge index is signaled. This point is moot, however, because the Examiner relies on LaGrange to disclose that the merge index is signaled. At pg. 9, ll. 2-5, Fig. 5, LaGrange discloses that the merge index identifying one candidate is signaled, and further discloses at pg. 27, ll. 5-17, that one or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder; for example, a signal can be formatted to carry the bitstream of a described embodiment, including syntax elements such as the merge index. Thus, LaGrange discloses that the merge index is signaled. In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant’s remarks filed 10/15/2025, page 11, with respect to the rejection of claims 2-3, 5, 7-8, 10 & 12 under 35 USC 103 have been fully considered, but they are not persuasive.
Applicant first relies on the patentability of the claims from which these claims depend to traverse the rejection, without prejudice to any further basis for patentability of these claims based on the additional elements recited.
The Examiner cannot concur with the Applicant because the combination of LaGrange, Zhang, and Liu teaches independent claims 1, 6 & 13 as outlined below. Thus, claims 2-3, 5, 7-8, 10 & 12 are also rejected for similar reasons as outlined below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-8, 10 & 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over De LaGrange et al. (WO 2020/123442 A1) (hereinafter LaGrange) in view of Zhang et al., “EE2: Bilateral and template matching AMVP-merge mode (test3.3).,” JVET-X0083-V1, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, 24th Meeting, pp. 1-3, 30 September 2021 (hereinafter Zhang), and further in view of Liu et al. (US 2018/0332312 A1) (hereinafter Liu).
Regarding claim 1, LaGrange discloses a method performed by a video decoding device for generating a prediction block of a current block [Pgs. 2-3, ll. 5-34, Pg. 19, ll. 14-27, Fig. 12, Decoder decodes video bitstream to obtain predicted block], the method comprising:
generating a prediction block in an advanced motion vector prediction mode (AMVP mode) [Pg. 9, ll. 2-31, Fig. 5, P1 inter prediction is uni-prediction AMVP];
decoding, for a merge mode, a merge index of the merge mode from a bitstream [Pg. 9, ll. 2-5, Fig. 5, P0 inter prediction is merge index identifying one candidate];
generating a merging candidate list of the merge mode [Pg. 9, ll. 2-5, Fig. 5, performed in merge mode (where a list of merge candidates is built)];
deriving, by using the merge index, a reference picture in the merge mode and a motion vector from the merging candidate list [Pg. 9, ll. 2-5, Fig. 5, Pg. 19, ll. 14-27, Fig. 12, merge mode (where a list of merge candidates {reference index, motion values} is built and a merge index identifying one candidate is signaled to acquire motion information for the motion compensated inter prediction)];
generating a prediction block in the merge mode by using the reference picture in the merge mode and the motion vector [Pg. 9, ll. 2-5, Fig. 5, Pg. 19, ll. 14-27, Fig. 12, P0 inter prediction merge mode is generated from a merge index identifying one candidate with reference index and motion vector is signaled to acquire motion information for the motion compensated inter prediction]; and
combining the prediction block in the AMVP mode with the prediction block in the merge mode to generate a prediction block of the current block [Pg. 9, ll. 2-5, Fig. 5, Pg. 19, ll. 14-27, Fig. 12, the final prediction as prediction block of the current block, is the weighted average, as combining, of the merge indexed prediction P0 and the prediction generated by the other prediction mode (inter/AMVP) P1].
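By way of illustration, the combining step LaGrange describes, a weighted average of the merge-mode prediction P0 and the AMVP-mode prediction P1, can be sketched as follows; the equal weights, block size, and sample values are assumptions for the example, not values taken from the reference:

```python
def combine_predictions(p_amvp, p_merge, w_amvp=0.5, w_merge=0.5):
    """Combine two prediction blocks into the final prediction by a
    per-sample weighted average (equal weights are assumed here)."""
    return [
        [w_amvp * a + w_merge * m for a, m in zip(row_a, row_m)]
        for row_a, row_m in zip(p_amvp, p_merge)
    ]

# Two 2x2 prediction blocks with illustrative sample values.
p1 = [[100, 102], [104, 106]]   # AMVP-mode prediction (P1)
p0 = [[110, 108], [106, 104]]   # merge-mode prediction (P0)
final = combine_predictions(p1, p0)
# equal weights yield the elementwise mean of the two blocks
```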
However, LaGrange does not explicitly disclose an AMVP-MERGE mode, and wherein generating the merging candidate list includes: constructing the merging candidate list using merge candidates having a prediction direction opposite to a prediction direction in the AMVP mode.
Zhang teaches an AMVP-MERGE mode, and wherein generating the merging candidate list includes: constructing the merging candidate list using merge candidates having a prediction direction opposite to a prediction direction in the AMVP mode [2 Proposed Methods, When the selected merge predictor and the AMVP predictor satisfy DMVR condition, which is there is at least one reference picture from the past and one reference picture from the future relatively to the current picture and the distances from two reference pictures to the current picture are the same, the bilateral matching MV refinement is applied for the merge MV candidate and AMVP MVP as a starting point. Otherwise, if template matching functionality is enabled, template matching MV refinement is applied to the merge predictor or the AMVP predictor which has a higher template matching cost.].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the AMVP-Merge mode as described in Zhang as above, in order to improve the coding efficiency with higher gain (Zhang, 4 Conclusions).
However, LaGrange and Zhang do not explicitly disclose excluding merge candidates that have only a same prediction direction as the prediction direction in the AMVP mode.
Liu teaches excluding merge candidates that have only a same prediction direction as the prediction direction in the AMVP mode [Paragraph [0035], in the merging candidate list construction process, any merging candidate that has the same inter prediction direction, reference frame(s) and motion vector(s) as those used by the first N−1 coded CUs is excluded from being added into the merging candidate list].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the merging candidate list construction process as described in Liu as above, in order to provide better video processing methods that reduce signal redundancy as the demand for higher resolutions, more complex graphical content, and faster transmission times increases (Liu, Paragraph [0005]).
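By way of illustration, the claimed candidate-list construction, keeping merge candidates whose prediction direction is opposite to the AMVP direction and excluding those having only the same direction, can be sketched as follows; the candidate fields, the 'L0'/'L1' direction labels, and the sample values are assumptions for the example:

```python
def build_merge_list(candidates, amvp_direction):
    """Keep only merge candidates whose prediction directions include the
    direction opposite to the AMVP direction; candidates having only the
    same direction as the AMVP mode are excluded."""
    opposite = 'L1' if amvp_direction == 'L0' else 'L0'
    return [c for c in candidates if opposite in c['directions']]

# Illustrative candidates: each carries a motion vector and the set of
# prediction directions it uses ('L0' and/or 'L1').
candidates = [
    {'mv': (1, 0), 'directions': {'L0'}},        # same direction only: excluded
    {'mv': (0, 2), 'directions': {'L1'}},        # opposite direction: kept
    {'mv': (3, 1), 'directions': {'L0', 'L1'}},  # bi-directional: kept
]
merge_list = build_merge_list(candidates, amvp_direction='L0')
# keeps the two candidates that include the L1 direction
```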
Regarding claim 2, LaGrange, Zhang, and Liu disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim.
Furthermore, Zhang teaches wherein generating the prediction block in the AMVP mode includes:
decoding, for the AMVP mode, a reference index and a motion vector difference of the AMVP mode from the bitstream; generating a candidate list of the AMVP mode; obtaining a motion vector predictor index; deriving, by using the motion vector predictor index, a motion vector predictor from the candidate list of the AMVP mode; generating a motion vector in the AMVP mode by summing the motion vector predictor and the motion vector difference; and using the motion vector in the AMVP mode to generate the prediction block in the AMVP mode from a reference picture indicated by the reference index of the AMVP mode [2 Proposed Methods, AMVP part is signaled as a regular uni-direction AMVP, i.e., reference index and MVD are signaled, matched within the candidate list, and the AMVP predictor is obtained and refined for AMVP mode].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the AMVP-Merge mode as described in Zhang as above, in order to improve the coding efficiency with higher gain (Zhang, 4 Conclusions).
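By way of illustration, the AMVP motion-vector reconstruction recited in claim 2, selecting a predictor by the MVP index and summing it with the signaled motion vector difference, can be sketched as follows; the predictor list and MVD values are assumptions for the example:

```python
def reconstruct_amvp_mv(mvp_list, mvp_index, mvd):
    """Reconstruct the AMVP motion vector by adding the signaled motion
    vector difference (MVD) to the predictor selected by the MVP index."""
    mvp = mvp_list[mvp_index]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Illustrative candidate predictors and a signaled MVD.
mvp_list = [(4, -2), (0, 0)]
mv = reconstruct_amvp_mv(mvp_list, mvp_index=0, mvd=(1, 3))
# (4 + 1, -2 + 3) gives (5, 1)
```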
Regarding claim 3, LaGrange, Zhang, and Liu disclose the method of claim 2, and are analyzed as previously discussed with respect to the claim.
Furthermore, Zhang teaches wherein obtaining the motion vector predictor index includes: decoding the motion vector predictor index from the bitstream when a template matching is not used; and deriving the motion vector predictor index when a template matching is used [2 Proposed Methods, MVP index signaled when template matching is disabled, and MVP index if template matching is used].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the AMVP-Merge mode as described in Zhang as above, in order to improve the coding efficiency with higher gain (Zhang, 4 Conclusions).
Regarding claim 5, LaGrange, Zhang, and Liu disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim.
Furthermore, Zhang teaches wherein the merge index is determined by a video encoding device based on a cost for rate-distortion optimization on the current block and the prediction block of the current block [2 Proposed Methods, When the selected merge predictor and the AMVP predictor satisfy DMVR condition, which is there is at least one reference picture from the past and one reference picture from the future relatively to the current picture and the distances from two reference pictures to the current picture are the same, the bilateral matching MV refinement is applied for the merge MV candidate and AMVP MVP as a starting point. Otherwise, if template matching functionality is enabled, template matching MV refinement is applied to the merge predictor or the AMVP predictor which has a higher template matching cost.].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the AMVP-Merge mode as described in Zhang as above, in order to improve the coding efficiency with higher gain (Zhang, 4 Conclusions).
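By way of illustration, an encoder-side merge-index selection based on a rate-distortion cost, as recited in claim 5, can be sketched as follows; the Lagrangian cost J = D + λR is a standard criterion, and the distortion, rate, and λ values here are assumptions for the example:

```python
def select_merge_index(distortions, rates, lam):
    """Pick the merge index minimizing the Lagrangian rate-distortion
    cost J = D + lambda * R over the merge candidates."""
    costs = [d + lam * r for d, r in zip(distortions, rates)]
    return min(range(len(costs)), key=costs.__getitem__)

# Illustrative per-candidate distortion (e.g. SSD against the current
# block) and rate (bits needed to signal the index).
distortions = [120.0, 95.0, 140.0]
rates = [2, 6, 1]
idx = select_merge_index(distortions, rates, lam=10.0)
# costs are 140.0, 155.0, 150.0, so index 0 has the lowest cost
```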
Regarding claim 6, LaGrange discloses a method performed by a video decoding device for generating a prediction block of a current block [Pgs. 2-3, ll. 5-34, Pg. 19, ll. 14-27, Fig. 12, Decoder decodes video bitstream to obtain predicted block], the method comprising:
generating a prediction block in an advanced motion vector prediction mode (AMVP mode) [Pg. 9, ll. 2-31, Fig. 5, P1 inter prediction is uni-prediction AMVP];
generating, for a merge mode, a merging candidate list of the merge mode [Pg. 9, ll. 2-5, Fig. 5, performed in merge mode (where a list of merge candidates is built)];
generating a prediction block in the merge mode by using the merging candidate list [Pg. 9, ll. 2-5, Fig. 5, Pg. 19, ll. 14-27, Fig. 12, merge mode (where a list of merge candidates {reference index, motion values} is built and a merge index identifying one candidate is signaled to acquire motion information for the motion compensated inter prediction)];
combining the prediction block in the AMVP mode with the prediction block in the merge mode to generate a prediction block of the current block [Pg. 9, ll. 2-5, Fig. 5, Pg. 19, ll. 14-27, Fig. 12, P0 inter prediction merge mode is generated from a merge index identifying one candidate with reference index and motion vector is signaled to acquire motion information for the motion compensated inter prediction]; and
encoding a merge index of the merge mode [Pg. 9, ll. 2-5, Fig. 5, P0 inter prediction is merge index identifying one candidate is signaled, pg. 19, ll. 1-5, syntax elements are entropy coded (145) to output a bitstream, pg. 27, ll. 5-17, one or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder in various embodiments. For example, a signal can be formatted to carry the bitstream of a described embodiment.].
However, LaGrange does not explicitly disclose an AMVP-MERGE mode, and wherein generating the merging candidate list includes: constructing the merging candidate list using merge candidates having a prediction direction opposite to a prediction direction in the AMVP mode.
Zhang teaches an AMVP-MERGE mode, and wherein generating the merging candidate list includes: constructing the merging candidate list using merge candidates having a prediction direction opposite to a prediction direction in the AMVP mode [2 Proposed Methods, When the selected merge predictor and the AMVP predictor satisfy DMVR condition, which is there is at least one reference picture from the past and one reference picture from the future relatively to the current picture and the distances from two reference pictures to the current picture are the same, the bilateral matching MV refinement is applied for the merge MV candidate and AMVP MVP as a starting point. Otherwise, if template matching functionality is enabled, template matching MV refinement is applied to the merge predictor or the AMVP predictor which has a higher template matching cost].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the AMVP-Merge mode as described in Zhang as above, in order to improve the coding efficiency with higher gain (Zhang, 4 Conclusions).
However, LaGrange and Zhang do not explicitly disclose excluding merge candidates that have only a same prediction direction as the prediction direction in the AMVP mode.
Liu teaches excluding merge candidates that have only a same prediction direction as the prediction direction in the AMVP mode [Paragraph [0035], in the merging candidate list construction process, any merging candidate that has the same inter prediction direction, reference frame(s) and motion vector(s) as those used by the first N−1 coded CUs is excluded from being added into the merging candidate list].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the merging candidate list construction process as described in Liu as above, in order to provide better video processing methods that reduce signal redundancy as the demand for higher resolutions, more complex graphical content, and faster transmission times increases (Liu, Paragraph [0005]).
Regarding claim 7, LaGrange, Zhang, and Liu disclose the method of claim 6, and are analyzed as previously discussed with respect to the claim.
Furthermore, Zhang teaches wherein generating the prediction block in the AMVP mode includes: generating the prediction block in the AMVP mode for the current block; determining a motion vector and a reference index of the AMVP mode;
generating a candidate list of the AMVP mode; obtaining a motion vector predictor index by using the candidate list of the AMVP mode; and generating a motion vector difference by subtracting a motion vector predictor from the motion vector in the AMVP mode [2 Proposed Methods, AMVP part is signaled as a regular uni-direction AMVP, i.e., reference index and MVD are signaled, matched within the candidate list, and the AMVP predictor is obtained and refined for AMVP mode].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the AMVP-Merge mode as described in Zhang as above, in order to improve the coding efficiency with higher gain (Zhang, 4 Conclusions).
Regarding claim 8, LaGrange, Zhang, and Liu disclose the method of claim 7, and are analyzed as previously discussed with respect to the claim.
Furthermore, Zhang teaches wherein obtaining the motion vector predictor index includes: determining the motion vector predictor index when a template matching is not used; and deriving the motion vector predictor index when a template matching is used [2 Proposed Methods, MVP index signaled when template matching is disabled, and MVP index if template matching is used].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the AMVP-Merge mode as described in Zhang as above, in order to improve the coding efficiency with higher gain (Zhang, 4 Conclusions).
Regarding claim 10, LaGrange, Zhang, and Liu disclose the method of claim 6, and are analyzed as previously discussed with respect to the claim.
Furthermore, Zhang teaches wherein the merge index is determined based on a cost for rate-distortion optimization on the current block and the prediction block of the current block [2 Proposed Methods, When the selected merge predictor and the AMVP predictor satisfy DMVR condition, which is there is at least one reference picture from the past and one reference picture from the future relatively to the current picture and the distances from two reference pictures to the current picture are the same, the bilateral matching MV refinement is applied for the merge MV candidate and AMVP MVP as a starting point. Otherwise, if template matching functionality is enabled, template matching MV refinement is applied to the merge predictor or the AMVP predictor which has a higher template matching cost.].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the AMVP-Merge mode as described in Zhang as above, in order to improve the coding efficiency with higher gain (Zhang, 4 Conclusions).
Regarding claim 12, LaGrange, Zhang, and Liu disclose the method of claim 7, and are analyzed as previously discussed with respect to the claim.
Furthermore, Zhang teaches further comprising: encoding the reference index of the AMVP mode and the motion vector difference [2 Proposed Methods, AMVP part is signaled as a regular uni-direction AMVP, i.e., reference index and MVD are signaled, matched within the candidate list, and the AMVP predictor is obtained and refined for AMVP mode].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by LaGrange to integrate the AMVP-Merge mode as described in Zhang as above, in order to improve the coding efficiency with higher gain (Zhang, 4 Conclusions).
Regarding claim 13, method claim 13, for providing a video decoding apparatus with video data, contains claim limitations similar to those of method claim 6, and thus corresponds to claim 6; it is therefore also rejected for the same reasons of obviousness as outlined above.
Furthermore, LaGrange discloses encoding the video data into a bitstream; and transmitting the bitstream to the video decoding device [Pgs. 18-19, ll. 17-17, an encoder entropy codes a bitstream for transport to a decoder].
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL CHANG whose telephone number is (571) 272-5707. The examiner can normally be reached M-Sa, 12 PM - 10 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL CHANG/Primary Examiner, Art Unit 2487