DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-20 are pending.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4 and 16-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chen et al. (US 20200404253 A1).
Concerning claim 1, Chen et al. (hereinafter Chen) teaches a method of video processing, comprising:
applying, for a conversion between a video unit of a video and a bitstream of the video (fig. 3: video decoding), at least one of: a decoder side motion vector refinement (DMVR) or a variant of DMVR to the video unit to refine a motion vector of the video unit (¶0038: applying DMVR to a current block), wherein the video unit is applied with one of: an affine advanced motion vector prediction (AMVP) mode, a subblock-based temporal motion vector prediction (sbTMVP) mode, or an affine merge mode (¶0038: applying DMVR in the context of AMVP and SMVD); and
performing the conversion based on the refined motion vector (¶0038: implementing decoding of the video using the motion vector refined by DMVR).
Concerning claim 2, Chen further teaches the method of claim 1, wherein at least one of the DMVR or the variant of DMVR is used to refine a shift motion vector (MV) for the sbTMVP mode, and/or
wherein at least one of the DMVR or the variant of DMVR is used to refine a control point motion vector (CPMV) for the affine merge mode (¶0167; ¶0221: DMVR for affine CPMVs), and/or
wherein at least one of the DMVR or the variant of DMVR is used to refine a control point motion vector (CPMV) for the affine AMVP mode (¶¶0217-0221: DMVR for affine CPMVs).
Concerning claim 4, Chen further teaches the method of claim 2, wherein only a PU level DMVD or DMVR is used, and/or
wherein at least one of the DMVR or the variant of DMVR is applied to the CPMV that meets a DMVR condition or a DMVD condition, and/or
wherein at least one of the DMVR or the variant of DMVR is applied to the CPMV that points to reference pictures from both forward and backward, and/or
wherein at least one of the DMVR or the variant of DMVR is applied to the CPMV pairs for a bi-directional predicted affine AMVP mode (¶0221: DMVR can be applied to bi-directional affine CPMVs for MV refinement).
Concerning claim 16, Chen further teaches the method of claim 1, wherein a non-average weighting approach is applied to the video unit (¶0177: bi-prediction with weights (BCW) corresponds to a non-average weighting approach according to Applicant’s published disclosure (see ¶0411, ¶0675, ¶0701)), wherein the video unit is applied with an advanced motion vector prediction (AMVP) merge mode (¶¶0035-0037).
Concerning claim 17, Chen further teaches the method of claim 1, wherein the conversion includes encoding the video unit into the bitstream (fig. 2: video encoding), or wherein the conversion includes decoding the video unit from the bitstream (fig. 3: video decoding).
Claim 18 is the corresponding apparatus to the method of claim 1 and is rejected under the same rationale. Chen further teaches implementing the method in a video encoding and decoding system that contains at least a processor and a non-transitory memory with instructions thereon to execute the method (fig. 1: 100, figs. 2-3; ¶0052).
Claim 19 is the corresponding non-transitory computer-readable recording medium to the method of claim 1 and is rejected under the same rationale. Chen further teaches implementing the method in a computer-readable recording medium with instructions thereon to execute the method (¶0052).
Claim 20 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chono et al. (US 20130101037 A1).
Claim 20’s recitation of “a bit stream generated by a method,…wherein the method comprises…” is a product-by-process claim limitation, where the product is the bitstream and the process is the method steps that generate the bitstream. MPEP § 2113 recites that “[p]roduct-by-process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps.” Thus, the scope of the claim is the storage medium storing the bitstream (with the structure implied by the steps), i.e., an encoded bitstream produced by the recited method steps.
To be given patentable weight, the computer-readable medium and the bitstream (i.e., descriptive material) must be in a functional relationship. A functional relationship can be found where the descriptive material performs some function with respect to the computer-readable medium with which it is associated. See MPEP § 2111.05(I)(A). When a claimed computer-readable medium “merely serves as a support for information” (i.e., an encoded bitstream), no functional relationship exists. MPEP § 2111.05(III). The storage medium storing the claimed bitstream in claim 20 merely serves as a support for the storage of the bitstream and provides no functional relationship between the stored bitstream and the storage medium. Therefore, the bitstream structure, whose scope is implied by the method steps, is non-functional descriptive material and is given no patentable weight. MPEP § 2111.05(III). Thus, the claim scope is merely a storage medium storing data, and the claim is anticipated by Chono, which recites a storage medium storing a bitstream (fig. 10: storage medium 1004; ¶0136).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20200404253 A1) in view of Zhang et al. (US 20210152846 A1).
Concerning claim 3, Chen teaches the method of claim 2. Not explicitly taught is the method, wherein the shift MV is bi-directional predicted, and/or wherein the shift MV points to two reference pictures from both forward and backward, and/or wherein the shift MV that meets a DMVR condition or a decoder side motion vector difference (DMVD) condition is generated for the sbTMVP mode, and/or wherein the shift MV is used to derive motion vectors from corresponding prediction units (PUs) in reference pictures.
Zhang et al. (hereinafter Zhang), in a similar field of endeavor, teaches wherein the shift MV is bi-directional predicted (fig. 10; ¶0083: bi-directional MV along the motion trajectory of the current CU), and/or wherein the shift MV points to two reference pictures from both forward and backward (fig. 10; ¶0083: bi-directional MV along the motion trajectory of the current CU), and/or wherein the shift MV that meets a DMVR condition or a decoder side motion vector difference (DMVD) condition is generated for the sbTMVP mode, and/or wherein the shift MV is used to derive motion vectors from corresponding prediction units (PUs) in reference pictures. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the teachings of Zhang to the Chen invention in order to derive motion information of the current CU (Zhang, ¶0083).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20200404253 A1) in view of Chiu et al. (US 20250286991 A1).
Concerning claim 13, Chen teaches the method of claim 1. Not explicitly taught is the method, wherein a MERGE MV in the motion pair is determined based on a rule, and wherein the rule is based on a predefined position of a MERGE MV candidate in a MERGE MV list, or wherein the rule is based on a TM cost.
Chiu et al. (hereinafter Chiu), in a similar field of endeavor, teaches wherein a MERGE MV in the motion pair is determined based on a rule, and wherein the rule is based on a predefined position of a MERGE MV candidate in a MERGE MV list, or wherein the rule is based on a TM cost (¶0056: Template Matching (TM) costs are used in determining Merge MVs). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the teachings of Chiu to the Chen invention in order to reduce complexity associated with MMVD (Chiu, ¶0002).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20200404253 A1) in view of Zhang et al. (US 20200288168 A1).
Concerning claim 14, Chen teaches the method of claim 1. Not explicitly taught is the method, wherein at least one local illumination compensation (LIC) model parameter of a LIC model for the video unit is derived based on a non-linear model, wherein the video unit coded is a LIC coded block, and wherein the LIC model is updated by adjusting the at least one LIC model parameter.
Zhang et al. (hereinafter Zhang 2), in a similar field of endeavor, teaches wherein at least one local illumination compensation (LIC) model parameter of a LIC model for the video unit is derived based on a non-linear model, wherein the video unit coded is a LIC coded block, and wherein the LIC model is updated by adjusting the at least one LIC model parameter (¶¶0104-0106: equations (1) and (4)). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the teachings of Zhang 2 to the Chen invention in order to address the issue of local illumination changes (Zhang 2, ¶0104).
Allowable Subject Matter
Claims 5-12 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES M ANDERSON II whose telephone number is (571)270-1444. The examiner can normally be reached Monday - Friday 10AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN PENDLETON can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/James M Anderson II/Primary Examiner, Art Unit 2425