Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/13/2026 has been entered.
Response to Arguments
Applicant’s arguments have been considered but are moot in view of the new grounds of rejection.
Applicant admits that Hsu discloses context coding but argues that Hsu is silent on how those contexts are determined. The examiner notes that nothing in the claimed language recites how contexts are determined. Therefore, if Hsu teaches that context coding is used, it would be obvious that the contexts were determined at some point in order to be used. In addition, Applicant argues that Hsu does not teach selecting a context model “based on the absolute magnitude” of the motion vector difference. The examiner notes that no limitation in the claimed language of record states “selecting a context model based on the absolute magnitude of the motion vector”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 7, 10-11, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200374513 A1 to Xiu et al. (hereinafter “Xiu”), in view of US 20190289317 A1 to Hsu (hereinafter “Hsu”), and in further view of US 210301887 to Nguyen et al. (hereinafter “Nguyen”).
Regarding claim 1, Xiu discloses a method of decoding video data (Fig. 17), the method comprising:
constructing motion vector candidates using possible sign values ([0081], wherein combinations of signs are used to form a list of MVD candidates), respective magnitudes of motion vector difference components (according to [0006] of the instant applicant’s publication, the absolute value of the difference is the magnitude; therefore, to be consistent with applicant’s specification, Xiu discloses in [0081] the absolute values of the MVD, and the absolute values are interpreted as the magnitudes), and a motion vector predictor for a block of video data (Fig. 7 shows an MVP of a block associated with the bold vector or arrow), wherein the possible sign values include a positive sign value and a negative sign value ([0080], negative and positive signs);
sorting the motion vector candidates based on a cost for each of the motion vector candidates to create a sorted list ([0081], the MVD candidates are sorted based on the calculated cost values);
determining a respective motion vector difference sign for each motion vector difference coordinate based on a motion vector sign predictor index and the sorted list ([0081], wherein different combinations of the sign values are used for each of the horizontal and vertical MVD components; the horizontal and vertical components are the coordinates. According to the instant applicant’s specification, [0129], the motion vector sign predictor index indicates a particular motion vector candidate in the sorted candidate list. To be consistent with applicant’s specification, Xiu discloses in [0081] that the selected MVD is indicated by sending an index in the candidate list to the decoder; the index is signaled to identify one of the candidates and is used to decode or reconstruct); and
decoding the block of video data using the respective magnitudes of motion vector difference coordinates and the respective motion vector difference sign for each motion vector difference component ([0081] sending an index in the candidate list to the decoder. The index is signaled to identify one of the candidates and used to decode or reconstruct; [0091], decoding).
Xiu fails to disclose determining one or more entropy decoding context models based on the absolute magnitude, and entropy decoding a motion vector sign predictor index using the entropy decoding context models.
However, in the same field of endeavor, Hsu discloses determining an absolute magnitude of a motion vector difference (Fig. 4, step 430; [0029], wherein the first and the second magnitude of an MVD are determined; [0020], wherein AbsMVDhor and absMVDver are the absolute values of the X and Y components of the MVD); determining one or more entropy decoding context models based on the absolute magnitude ([0019], wherein the magnitude part is always coded using context bins; coding refers to encoding and decoding); and entropy decoding a motion vector sign predictor index using the entropy decoding context models ([0021], according to the MVP index, the decoder side derives the MVP; [0029], the decoder may recover the magnitudes of the MVD from the bitstream).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Xiu to disclose determining one or more entropy decoding context models based on the absolute magnitude and entropy decoding a motion vector sign predictor index using the entropy decoding context models, as taught by Hsu, to improve efficiency by using information available at the decoder side to help the sign deriving process ([0021]).
Xiu and Hsu fail to explicitly disclose determining a respective entropy decoding context model for each of one or more context coded bins of the motion vector sign predictor index, wherein the respective entropy decoding context model is based on absolute magnitude.
However, in the same field of endeavor, Nguyen discloses determining a respective entropy decoding context model for each of one or more context coded bins of the motion vector sign predictor index ([0074-0075], wherein a context is selected and determined for a syntax flag; [0090], wherein a context model is determined for the bins), wherein the respective entropy decoding context model is based on absolute magnitude ([0126], wherein the context selection may be based on absolute value; [0123], wherein the context set used for encoding coefficient levels in a set of 16 levels, e.g. a coefficient group, is dependent upon the previous set of coefficient levels processed, e.g. the previous coefficient group in scan order; the magnitudes of the coefficients in the previously processed scan set are used to determine which context set to use).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Xiu and Hsu to disclose determining a respective entropy decoding context model for each of one or more context coded bins of the motion vector sign predictor index, wherein the respective entropy decoding context model is based on absolute magnitude, as taught by Nguyen, to improve efficiency by applying a magnitude-based condition when selecting context models in the CABAC decoding framework of Xiu and Hsu (Nguyen, [0151]).
Regarding claim 7, Xiu discloses the method of claim 1, further comprising: determining the cost using template matching ([0081]).
Regarding claim 10, Xiu discloses the method of claim 1, further comprising: displaying a picture that includes the decoded block of video data ([0071]).
Regarding claim 11, the analyses presented for claim 1 are analogous and applicable to claim 11, which further recites a memory ([0144]) and one or more processors ([0144]).
Regarding claim 17, the analyses presented for claim 7 are analogous and applicable to claim 17.
Claim(s) 3, 5, 13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200374513 A1 to Xiu et al. (hereinafter “Xiu”), in view of US 20190289317 A1 to Hsu (hereinafter “Hsu”), in further view of US 210301887 to Nguyen et al. (hereinafter “Nguyen”), and in further view of US 20210160528 A1 to Chen et al. (hereinafter “Chen”).
Regarding claim 3, Xiu discloses the method of claim 2, wherein the block of video data is coded using inter MMVD mode (see claim 2).
Xiu fails to disclose specifically decoding a merge index that indicates the motion vector predictor; decoding a step index that indicates the respective magnitudes of motion vector difference coordinates; and decoding the motion vector sign predictor index.
However, in the same field of endeavor, Chen discloses decoding a merge index that indicates the motion vector predictor ([0060], MVP index for MVP; [0061], merge modes utilize MVP…merge index for merge modes); decoding a step index that indicates the respective magnitudes of motion vector difference coordinates ([0148], wherein an index specifies motion magnitude); and decoding the motion vector sign predictor index ([0058], wherein indices are transmitted to the decoder).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Xiu to disclose decoding a merge index that indicates the motion vector predictor, decoding a step index that indicates the respective magnitudes of motion vector difference coordinates, and decoding the motion vector sign predictor index, as taught by Chen, to improve efficiency (Chen, [0138]).
Regarding claim 5, Xiu discloses the method of claim 2 (see claim 2).
Xiu fails to disclose wherein the block of video data is coded using affine MMVD mode, and the motion vector predictor includes two or three control point motion vectors, the method further comprising: determining the control point motion vectors; decoding a step index that indicates the respective magnitudes of motion vector difference coordinates; and decoding the motion vector sign predictor index.
However, in the same field of endeavor, Chen discloses wherein the block of video data is coded using affine MMVD mode ([0074]), and the motion vector predictor includes two or three control point motion vectors ([0074]), the method further comprising: determining the control point motion vectors ([0080]); decoding a step index that indicates the respective magnitudes of motion vector difference coordinates ([0148], wherein an index specifies motion magnitude); and decoding the motion vector sign predictor index ([0058], wherein indices are transmitted to the decoder).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Xiu to disclose wherein the block of video data is coded using affine MMVD mode, and the motion vector predictor includes two or three control point motion vectors, the method further comprising determining the control point motion vectors, decoding a step index that indicates the respective magnitudes of motion vector difference coordinates, and decoding the motion vector sign predictor index, as taught by Chen, to improve efficiency (Chen, [0138]).
Regarding claim 13, the analyses presented for claim 3 are analogous and applicable to claim 13.
Regarding claim 15, the analyses presented for claim 5 are analogous and applicable to claim 15.
Claim(s) 4, 6, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200374513 A1 to Xiu et al. (hereinafter “Xiu”), in view of US 20190289317 A1 to Hsu (hereinafter “Hsu”), in further view of US 210301887 to Nguyen et al. (hereinafter “Nguyen”), in view of US 20210160528 A1 to Chen et al. (hereinafter “Chen”), and in further view of US 20220239941 to Jang et al. (hereinafter “Jang”).
Regarding claim 4, Chen discloses the method of claim 3 (see claim 3).
Xiu and Chen fail to disclose applying the respective motion vector difference sign for each motion vector difference component to the respective magnitudes of the motion vector difference components to determine a motion vector difference; adding the motion vector difference to the motion vector predictor to determine a final motion vector; and decoding the block of video data using the final motion vector.
However, in the same field of endeavor, Jang discloses applying the respective motion vector difference sign for each motion vector difference component to the respective magnitudes of the motion vector difference components to determine a motion vector difference ([0132]); adding the motion vector difference to the motion vector predictor to determine a final motion vector ([0131]); and decoding the block of video data using the final motion vector ([0131-0132]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Xiu and Chen to disclose applying the respective motion vector difference sign for each motion vector difference component to the respective magnitudes of the motion vector difference components to determine a motion vector difference, adding the motion vector difference to the motion vector predictor to determine a final motion vector, and decoding the block of video data using the final motion vector, as taught by Jang, to improve overall efficiency ([017], Jang).
Regarding claim 6, Chen discloses the method of claim 5 (see claim 5).
Xiu and Chen fail to disclose applying the respective motion vector difference sign for each motion vector difference component to the respective magnitudes of the motion vector difference components to determine a motion vector difference; adding the motion vector difference to each of the control point motion vectors to determine final control point motion vectors; and decoding the block of video data using the final control point motion vectors.
However, in the same field of endeavor, Jang discloses applying the respective motion vector difference sign for each motion vector difference component to the respective magnitudes of the motion vector difference components to determine a motion vector difference ([0132]); adding the motion vector difference to each of the control point motion vectors to determine final control point motion vectors ([0131]); and decoding the block of video data using the final control point motion vectors ([0131-0132]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Xiu and Chen to disclose applying the respective motion vector difference sign for each motion vector difference component to the respective magnitudes of the motion vector difference components to determine a motion vector difference, adding the motion vector difference to each of the control point motion vectors to determine final control point motion vectors, and decoding the block of video data using the final control point motion vectors, as taught by Jang, to improve overall efficiency ([017], Jang).
Regarding claim 14, the analyses presented for claim 4 are analogous and applicable to claim 14.
Regarding claim 16, the analyses presented for claim 6 are analogous and applicable to claim 16.
Claim(s) 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200374513 A1 to Xiu et al. (hereinafter “Xiu”), in view of US 20190289317 A1 to Hsu (hereinafter “Hsu”), in further view of US 210301887 to Nguyen et al. (hereinafter “Nguyen”), and in view of US 20190191171 A1 to Ikai (hereinafter “IKAI”).
Regarding claim 8, Xiu discloses the method of claim 7 (see claim 7).
Xiu fails to disclose wherein the block of video data is coded using affine MMVD merge with motion vector difference (MMVD) mode, and wherein determining the cost using template matching comprises: determining the cost using sub-block based template matching.
However, in the same field of endeavor, IKAI discloses wherein the block of video data is coded using affine MMVD merge with motion vector difference (MMVD) mode ([0178]), and wherein determining the cost using template matching comprises: determining the cost using sub-block based template matching ([0184]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Xiu to disclose wherein the block of video data is coded using affine MMVD merge with motion vector difference (MMVD) mode, and wherein determining the cost using template matching comprises determining the cost using sub-block based template matching, as taught by IKAI, to improve overall efficiency ([068], IKAI).
Regarding claim 18, the analyses presented for claim 8 are analogous and applicable to claim 18.
Claim(s) 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200374513 A1 to Xiu et al. (hereinafter “Xiu”), in view of US 20190289317 A1 to Hsu (hereinafter “Hsu”), in further view of US 210301887 to Nguyen et al. (hereinafter “Nguyen”), and in view of CHEN J., et al. (hereinafter “Chen2”), “Algorithm Description for Versatile Video Coding and Test Model 11 (VTM 11)”, JVET-T2002-v2, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, 20th Meeting, by teleconference, 7 - 16 October 2020, pp. 1-101.
Regarding claim 9, Xiu discloses the method of claim 1 (see claim 1).
Xiu fails to disclose scaling the respective magnitudes of motion vector difference components based on a picture order count (POC) difference.
However, in the same field of endeavor, Chen2 discloses scaling the respective magnitudes of motion vector difference components based on a picture order count (POC) difference (Section 3.4.2).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Xiu to disclose scaling the respective magnitudes of motion vector difference components based on a picture order count (POC) difference, as taught by Chen2, to improve overall efficiency (3.5.5.3, Chen2).
Regarding claim 19, the analyses presented for claim 9 are analogous and applicable to claim 19.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LERON BECK whose telephone number is (571)270-1175. The examiner can normally be reached M-F 8 am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
LERON BECK
Examiner
Art Unit 2487
/LERON BECK/Primary Examiner, Art Unit 2487