Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 17 & 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cho et al. (US 2019/0306526 A1) (hereinafter Cho) in view of He et al. (US 2021/0029378 A1) (hereinafter He), and further in view of Zheng et al. (US 2012/0230392 A1) (hereinafter Zheng).
Regarding claim 1, Cho discloses a method of video decoding [Paragraph [0246], Fig. 2, decoding], comprising:
receiving a coded video bitstream comprising coded information of a current block in a current picture [Paragraph [0249]-[0253], Fig. 2, decoding apparatus receives bitstream output];
determining that the coded information indicates an inter prediction of the current block with at least a first reference picture and a second reference picture [Paragraph [0249]-[0253] & [0396], Fig. 2, decoding apparatus receives bitstream output, including an inter mode as the prediction mode using bi-prediction];
determining to apply a uni-directional optical flow (UDOF) on at least a sample in the current block; deriving an optical flow motion vector that refines a motion vector of the sample [Paragraph [0755]-[0758], Optical flow generated through deep learning as UDOF]; and
reconstructing at least the sample based on the optical flow motion vector [Paragraph [0755]-[0758] & [0778], Also, extrapolation that generates a frame disposed to the left or right of two frames may be performed using the two frames and the optical flow, and reconstructing frames].
However, Cho does not explicitly disclose the optical flow motion vector being derived based on the first reference picture and the second reference picture.
He teaches the optical flow motion vector being derived based on the first reference picture and the second reference picture [Paragraph [0084]-[0092], Bi-directional optical flow, referencing two pictures].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Cho to add the teachings of He as above, in order to improve coding performance and refinement (He, Paragraph [0003]-[0004] & [0079]).
However, Cho and He do not explicitly disclose inter prediction of the current block with at least a first reference picture and a second reference picture that are both on a same side of the current picture in a display order.
Zheng teaches inter prediction of the current block with at least a first reference picture and a second reference picture that are both on a same side of the current picture in a display order [Paragraph [0094], Fig. 10, In this example, a first motion vector (mv0) and a second motion vector (mv1) point to data associated with a same predictive frame (i.e., that of a previous frame)].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Cho to add the teachings of Zheng as above, because by defining an additional MVP candidate for the second motion vector of a bi-predictive video block, improved compression may be achieved (Zheng, Abstract).
Regarding claim 17, claim 17 recites limitations similar and reciprocal to the method of decoding as claimed in claim 1. Therefore, claim 17 corresponds to claim 1 and is rejected for the same reasons of obviousness as set forth above.
Regarding claim 20, claim 20 recites limitations similar to the method of decoding as claimed in claim 1. Therefore, claim 20 corresponds to claim 1 and is rejected for the same reasons of obviousness as set forth above.
Allowable Subject Matter
Claims 2-16 and 18-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
The various limitations recited in the claims are not taught or suggested by the prior art, taken either singly or in combination, with emphasis that it is each claim, taken as a whole, including the interrelationships and interconnections between the various claimed elements, that makes it allowable over the prior art of record.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL CHANG whose telephone number is (571) 272-5707. The examiner can normally be reached M-Sa, 12 PM - 10 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL CHANG/Primary Examiner, Art Unit 2487