Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/9/2025 has been entered.
Response to Arguments
Applicant's arguments filed 12/9/2025 have been fully considered but they are not persuasive. The applicant asserts on pgs. 8-9 of the Remarks that the references do not teach generating a warp predictor of the reference pictures based on two warp models, in which a weighted average of the predictors of the first and second reference pictures is used as the warp predictor. The examiner disagrees.
Huang teaches, in par. 110, a bi-prediction motion model in which reference blocks from first and second reference pictures are used to predict a block of a current picture. In bi-prediction, first and second motion vectors are used to model motion by identifying reference blocks in first and second reference pictures which correspond to the current block. These reference blocks are then motion compensated, or ‘warped’, to the location of the current block to predict the pixel values of the current block. Huang further indicates that the first and second motion-compensated predictors are combined using a weighted average to generate a single predictor for predicting the current block. Thus, the applicant's arguments are not persuasive, as Huang discloses using two warp models to obtain predictors for first and second reference pictures and generating a single warp predictor using a weighted average of the two obtained predictors.
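For illustration only, the weighted-average bi-prediction described above (cf. Huang par. 110) may be sketched as follows; the function and variable names are hypothetical and do not appear in the reference:

```python
def bi_predict(pred_l0, pred_l1, w0=0.5, w1=0.5):
    # Weighted average of two motion-compensated ('warped') predictor
    # blocks, rounded and clipped to the 8-bit sample range.
    return [[min(255, max(0, round(w0 * a + w1 * b)))
             for a, b in zip(row0, row1)]
            for row0, row1 in zip(pred_l0, pred_l1)]

# Example: 4x4 predictor blocks obtained from the first and second
# reference pictures via their respective motion vectors.
pred_l0 = [[100] * 4 for _ in range(4)]
pred_l1 = [[120] * 4 for _ in range(4)]
final_pred = bi_predict(pred_l0, pred_l1)  # every sample is 110
```

Under equal weights, each sample of the single warp predictor is the average of the corresponding samples of the two per-list predictors.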
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 21-22, 28-29, 34, 37 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (US 2021/0203982) in view of Chuang et al. (US 2021/0176485).
A method for video coding, performed by at least one processor (Huang Fig. 1 and pars. 28-29), the method comprising:
obtaining video data that comprises a plurality of blocks, each block of the plurality of blocks being associated with a first reference picture list and a second reference picture list (Huang pars. 48-59, note partitioning video into blocks (CUs); note par. 58, blocks may be predicted using bi-directional prediction; also note par. 59, prediction information includes reference picture list indicators; further note par. 110, bi-directional prediction uses prediction blocks from first and second reference pictures);
generating a warp model used for the first reference picture list and the second reference picture list of a current block based on motion vectors of the current block and neighboring blocks that are adjacent to the current block, and generating the warp model comprises determining whether the motion vectors are associated with reference pictures in both the first reference picture list and the second reference picture list (Huang pars. 58-67, note determining an inter prediction (warp model) using motion information; particularly note pars. 62 and 67, indicating the use of the AMVP mode which predicts the motion vectors of the current block using motion vectors of neighboring blocks; further note par. 110, the inter prediction may be performed using uni-prediction from a single reference picture list or bi-prediction from a first and second reference picture list, with the uni- or bi-predictive mode indicated as inter prediction mode information per par. 67); and generating the warp model comprises:
determining to generate two warp models using neighboring motion vectors, of the neighboring block and the motion vectors, as having a same reference picture, of the first reference picture list to derive a first warp model, of the two warp models, for the first reference picture list and as having another same reference picture of the second reference picture list to derive a second warp model, of the two warp models, for the second reference picture list (Huang par. 110 note performing bi-prediction of the current block which requires the generation of forward and backward inter prediction ‘models’ to predict the current block, specifically note bi-prediction provides two motion vectors thus providing a first ‘warp model’ using a first motion vector pointing to a reference block in a first reference frame, and a second ‘warp model’ using a second motion vector pointing to a reference block in a second reference frame further note par. 63 only neighboring blocks associated with the same reference pictures are used for prediction); and
generating a warp predictor of both the first reference picture list and the second reference picture list based on the two warp models of which a weighted average of warp prediction of the first reference picture list and the second reference picture list is used as the warp predictor (Huang par. 110 note bi-prediction is performed using a weighted average of the prediction blocks identified by the first and second motion vectors or ‘warp models’); and
decoding a frame among the first reference picture list and the second reference picture list by applying the warp model to the frame (Huang Fig. 5 and pars 133-143 note decoding a coded bitstream into reconstructed frames, further note prediction processing unit 304 performing inter prediction including bi-directional inter prediction using first and second reference picture lists as noted in par. 110).
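As an illustrative sketch of the neighbor-based derivation mapped above (cf. Huang pars. 62-63, where only neighboring blocks associated with the same reference picture are used for prediction), candidate motion vectors may be gathered per reference picture list as follows; the data structures and names are hypothetical and not taken from the reference:

```python
def candidate_mvs(neighbors, ref_pic, ref_list):
    # Collect motion vectors of neighboring blocks that use the same
    # reference picture in the same reference picture list as the
    # current block (AMVP-style candidate selection).
    return [n["mv"] for n in neighbors
            if n["ref_list"] == ref_list and n["ref_pic"] == ref_pic]

# Hypothetical neighboring blocks: each carries a motion vector, the
# reference picture list it uses, and the reference picture index.
neighbors = [
    {"mv": (2, 0), "ref_list": "L0", "ref_pic": 3},
    {"mv": (4, 1), "ref_list": "L1", "ref_pic": 7},
    {"mv": (2, 1), "ref_list": "L0", "ref_pic": 3},
]

# Separate candidate sets yield the first and second 'warp models'.
l0_candidates = candidate_mvs(neighbors, ref_pic=3, ref_list="L0")
l1_candidates = candidate_mvs(neighbors, ref_pic=7, ref_list="L1")
```

Filtering neighbors per list in this way yields one motion model per reference picture list, consistent with the bi-predictive mapping above.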
It is noted that Huang does not explicitly disclose use of reference picture lists L0 and L1. However, it was common and notoriously well known in the art before the effective filing date of the invention to use forward and backward reference picture lists L0 and L1, as disclosed by Chuang (Chuang par. 8, note AMVP using reference lists L0 and L1).
It is therefore considered obvious that one of ordinary skill in the art before the effective filing date of the invention would recognize the advantage of incorporating the use of forward and backward reference picture lists L0 and L1, as taught by Chuang, in the invention of Huang in order to gain the advantage of compliance with the HEVC standard as suggested by Chuang (Chuang pars. 3-5).
Further, even assuming, arguendo, that the use of L0 and L1 reference lists were not well known, the labels ‘L0’ and ‘L1’ are merely used to distinguish forward and backward reference picture lists. Since Huang already discloses the use of reference picture lists (Huang par. 59) and the use of bi-directional prediction (Huang par. 110), the function of Huang would be identical to that of the claimed invention regardless of what label is used to refer to each list. Merely indicating that a list is labeled ‘L0’ or ‘L1’ does not render the claim patentably distinct.
In regard to claim 2, refer to the statements made in the rejection of claim 1 above. Huang further discloses:
wherein generating the warp model comprises determining to use neighboring motion vectors, of the neighboring block and the motion vectors, as having a same reference picture, of the reference pictures, of one of the first reference picture list L0 and the second reference picture list L1 based on determining whether one warp model, the warp model, is to be generated (Huang pars. 62-63 and Chuang par. 8, note AMVP mode using motion information from neighboring blocks as prediction candidates; further note only motion information referencing the same reference picture as the current block is used; further note par. 110, mode information indicating the use of uni-prediction or bi-prediction and thus indicating whether the forward, L0, or backward, L1, or both reference picture lists are used in inter prediction of the current block)
In regard to claim 34, refer to the statements made in the rejection of claim 1 above. Huang further discloses that determining to generate warp models comprises determining whether one or more corrections to local warp motion model parameters are signaled (Huang pars. 58-63, note par. 59, selecting between merge and AMVP mode, where AMVP mode includes a motion vector difference which ‘corrects’ the warp model to accurately represent motion; note that merge mode does not include a motion vector difference).
Claims 21-22 and 37 relate to an encoding method corresponding to the decoding method of claims 1-2 and 34 above. Refer to the statements made in regard to claims 1-2 and 34 above for the rejection of claims 21-22 and 37, which will not be repeated here for brevity. With particular regard to claim 21, Huang further discloses encoding (Huang Fig. 4).
Claims 28-29 and 40 relate to a method for processing video data that corresponds to the method described in claims 1-2 and 34 above. Refer to the statements made in regard to claims 1-2 and 34 above for the rejection of claims 28-29 and 40 which will not be repeated here for brevity.
Claim(s) 35-36, 38-39 and 41-42 are rejected under 35 U.S.C. 103 as being unpatentable over Huang in view of Chuang as applied to claims 34, 37 and 40 above, and further in view of Chen et al. (US 2021/0160528).
In regard to claims 35-36, 38-39 and 41-42, it is noted that neither Huang nor Chuang discloses corrections that comprise a mirrored delta value multiplied by a scaling factor having any of 2, 4 or 6 values. However, Chen discloses a merge mode with motion vector differences in which a correction to a merge-based ‘warp’ motion model is used, the correction comprising a mirrored delta value multiplied by a scaling factor and having 2, 4, or 6 values (Chen Fig. 21 and pars. 147-150, note merge mode with MMVD where a motion vector difference comprises a direction index, or ‘delta value’, for a unit offset and a distance index, or ‘scaling factor’, to scale the unit offset; particularly note par. 150, for bi-prediction with MVs on both sides of a current frame the direction of the L1 MVD is opposite, or ‘mirrored’, to the L0 MVD; further note the direction index may have any of 4 values as indicated in Table I-2).
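For illustration only, the MMVD-style refinement mapped above (cf. Chen Fig. 21 and pars. 147-150) may be sketched as follows; the direction and distance tables and all names are hypothetical examples, not Chen's exact signaled values:

```python
# Example direction-index table (unit offsets) and distance-index table
# (scaling steps). These particular values are illustrative only.
DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # 4 direction-index values
DISTANCES = [1, 2, 4, 8]                         # example scaling factors

def mmvd_offsets(direction_idx, distance_idx):
    # Build the MVD as a unit direction ('delta value') scaled by a
    # distance step ('scaling factor').
    dx, dy = DIRECTIONS[direction_idx]
    step = DISTANCES[distance_idx]
    offset_l0 = (dx * step, dy * step)
    # For bi-prediction with references on both sides of the current
    # frame, the L1 offset is mirrored relative to the L0 offset.
    offset_l1 = (-dx * step, -dy * step)
    return offset_l0, offset_l1

l0_offset, l1_offset = mmvd_offsets(direction_idx=0, distance_idx=2)
```

Here the L1 motion vector difference is the sign-inverted (‘mirrored’) counterpart of the L0 difference, consistent with the mapping of Chen par. 150 above.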
It is therefore considered obvious that one of ordinary skill in the art before the effective filing date of the invention would recognize the advantage of incorporating merge mode with motion vector differences as taught by Chen in the invention of Huang in view of Chuang in order to allow for further refinement of merge mode prediction candidates as suggested by Chen (Chen par. 148).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMIAH CHARLES HALLENBECK-HUBER whose telephone number is (571)272-5248. The examiner can normally be reached Monday to Friday from 9 A.M. to 5 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached on (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEREMIAH C HALLENBECK-HUBER/Primary Examiner, Art Unit 2481