DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 19-40 are pending in the application. Claims 19-24 have been amended.
Response to Arguments / Amendments
Applicant’s arguments have been fully considered but are rendered moot in view of the new ground of rejection necessitated by amendments initiated by the applicant.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 19-40 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (Efficient Inter-View Motion Vector Prediction in Multi-View HEVC, IEEE, September 2018, hereinafter Lee) in view of Cai et al. (US 20080172384, hereinafter Cai) and SHIMIZU et al. (US 201503500678, hereinafter SHIMIZU).
Regarding Claim 19, Lee discloses a method for decoding a video data stream, the method comprising:
obtaining from the video data stream an information representative of the use of an epipolar mode (Abstract, motion vector prediction algorithm in which the geometry interrelation between two neighboring views is derived based on epipolar geometry);
obtaining from the video data stream first camera parameters of a current frame and second camera parameters of a reference frame (Section III-A, Fig. 4, two cameras are looking at point P, where CR and CL are the origins of the right and left cameras- If the projection point Pk on the right plane is known, then the epipolar line lk is known and any point on the epipolar line lk can be the corresponding matching point for Pk ; Section III-B, Fig. 5, Calculating matrix F using the relationship between pairs of the matching points on the right and left planes: the right and left planes are the current and reference pictures in the current and reference views, respectively);
[Image: media_image1.png (greyscale)]
determining, based on the obtained camera parameters, an epipolar line in the reference frame as a projection of a line comprising a point in the current frame and an optical center of the first camera (Section III-A, Fig. 4, for known relative positions of the two cameras, if the projection point Pk on the right plane is known, then the epipolar line lk is known, and the point P′k is located on that epipolar line, which gives an epipolar constraint);
[Image: media_image2.png (greyscale)]
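The epipolar constraint cited from Lee, Section III-A, can be illustrated with a short sketch. This is a generic statement of the standard fundamental-matrix relation, not code from any cited reference: for a fundamental matrix F relating the current and reference views, the epipolar line for a point p in the current frame is l = F·p in homogeneous coordinates, and a matching point p′ must satisfy p′ᵀ F p = 0. Function names are illustrative.

```python
import numpy as np

def epipolar_line(F, p):
    """Line coefficients (a, b, c) with a*x + b*y + c = 0 for point p of the current frame."""
    p_h = np.array([p[0], p[1], 1.0])  # homogeneous coordinates
    return F @ p_h

def on_epipolar_line(F, p, p_prime, tol=1e-9):
    """Check the epipolar constraint p'^T F p = 0 for a candidate match p'."""
    p_h = np.array([p[0], p[1], 1.0])
    q_h = np.array([p_prime[0], p_prime[1], 1.0])
    return abs(q_h @ F @ p_h) < tol
```

For a purely horizontally translated stereo pair, F reduces to the skew-symmetric matrix of the epipole (1, 0, 0), and the epipolar lines are the horizontal scanlines, so candidate matches share the row of the source point.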
determining a projection into the reference frame of a point selected on the line (Section III-B, Fig. 5, Calculating matrix F using the relationship between pairs of the matching points on the right and left planes: the right and left planes are the current and reference pictures in the current and reference views, respectively );
obtaining from the video data stream a distance value representing a distance parameter on the epipolar line (Section III-A, Fig. 4, inter-view disparity motion estimation using IvMVk, where the left picture (the reference picture) is RAP picture in the reference view. Note that Pk and P′k are the positions of upper-left corners in the blocks; Section III-C);
determining a reference point on the epipolar line based on the distance value of the projected point (Section III-B, Fig. 5, pairs of the corresponding blocks Bk and B′k , in the current and reference pictures, respectively and the corresponding block B′k for Bk is decided by inter-view disparity motion estimation using IvMVk , where the left picture (the reference picture) is random access point (RAP) picture in the reference view. Note that Pk and P′k are the positions of upper-left corners in the blocks); and
reconstructing the current block using motion compensation between the current block of the current frame and a block of the reference frame based on the determined motion vector (Abstract, enhanced advanced motion vector prediction algorithm in which the geometry interrelation between two neighboring views is derived based on epipolar geometry, similarity transform, and affine transform, and then predicted motion vectors (PMVs) for efficient MV coding are generated using obtained geometry relation – that is improving multi-view high efficiency video codec (MV-HEVC) standard [involving both encoding & decoding (reconstructing) circuits as known implicit steps]).
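The claimed step of determining a reference point on the epipolar line from a single decoded distance value can be sketched as follows. This is a generic illustration of parameterizing a line by a signed arc-length offset from an anchor point; the names and signature are hypothetical and are not taken from Lee, Cai, or SHIMIZU.

```python
import numpy as np

def point_on_line_at_distance(anchor, direction, d):
    """Reference point at signed distance d along the line through `anchor`
    with direction vector `direction` (need not be unit length)."""
    u = np.asarray(direction, dtype=float)
    u = u / np.linalg.norm(u)  # normalize so d is a true Euclidean distance
    return np.asarray(anchor, dtype=float) + d * u
```

Because the line is fixed by the camera parameters, a single scalar suffices to locate the reference point, which is the coding-efficiency point made in the SHIMIZU-based rationale below (one-dimensional amount instead of a two-dimensional vector).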
Lee does not explicitly disclose the epipolar mode.
Cai teaches the epipolar mode (Abstract, fast motion estimation based upon epipolar geometry in compressing multi-view video with an epipolar line is computed based on a point in a macroblock to be predicted; [0025], FIG. 3, an epipolar geometry-based fast motion estimation framework is directed towards transferring a conventional search starting point such as starting the commonly-adopted median predicted search center (MPSC) to obtain another starting point or MPSC's orthogonal projection point (i.e., orthogonal projection epipolar search center (OPESC)) on the corresponding epipolar (dashed) line 308).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the epipolar mode as taught by Cai ([0121]) into the encoding & decoding system of Lee in order to prevent prediction outliers from the sub-macroblocks from destroying the smooth motion field and thereby improve the coding efficiency (Cai, [0004]).
Lee & Cai do not explicitly disclose the distance being a scalar value representing the distance parameter on the epipolar line.
SHIMIZU teaches the distance being a scalar value representing the distance parameter on the epipolar line ([0010], disparity information provides a corresponding relationship represented as a one-dimensional amount representing a three-dimensional position of an object, rather than a two-dimensional vector, based on epipolar geometric constraints by using camera parameters; because the reciprocal of the distance is information proportional to the disparity, two reference cameras may be set and a three-dimensional position may be represented as the amount of disparity between images captured by the cameras; [0011], FIG. 11, epipolar geometric constraint).
[Image: media_image3.png (greyscale)]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the distance being a scalar value representing the distance parameter on the epipolar line as taught by SHIMIZU ([0010]) into the encoding & decoding system of Lee & Cai in order to achieve pseudo-motion compensation prediction with sub-pixel accuracy for a synthesized perspective image when compensating for pseudo-motion indicating synthesis position misalignment in the synthesized perspective image and improve image encoding efficiency of the coding apparatus (SHIMIZU, [0004]).
Regarding Claim 25, Lee in view of Cai & SHIMIZU discloses the method of claim 19,
Cai discloses wherein the projection of the selected point is perpendicular to the epipolar line ([0025], FIG. 3, an epipolar geometry-based fast motion estimation framework is directed towards transferring a conventional search starting point, the median predicted search center (MPSC), to its orthogonal projection point (i.e., the orthogonal projection epipolar search center (OPESC)) on the corresponding epipolar (dashed) line 308; the previously computed point (MPSC) is mathematically moved (at ninety degrees in this implementation) to a new point (OPESC) on the epipolar line 308).
[Image: media_image4.png (greyscale)]
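Cai's MPSC-to-OPESC transfer cited above is the standard orthogonal projection of a point onto a line. A minimal sketch, using the textbook point-to-line projection formula rather than anything reproduced from Cai's disclosure (variable names are illustrative):

```python
import numpy as np

def orthogonal_projection(line, p):
    """Project point p = (x, y) perpendicularly onto the line
    a*x + b*y + c = 0, as in moving the MPSC to the OPESC."""
    a, b, c = line
    x, y = p
    t = (a * x + b * y + c) / (a * a + b * b)  # signed distance scaled by |n|
    return np.array([x - a * t, y - b * t])
```

The projected point is, by construction, the point on the epipolar line closest to the original search center, which is why it serves as a better starting point for the subsequent one-dimensional search along the line.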
The same rationale and motivation for obviousness apply as set forth above for claim 19.
Regarding Claim 26, Lee in view of Cai & SHIMIZU discloses the method of claim 19,
Lee discloses wherein the point in the current frame corresponds to a center of the current block (Section III-B, Fig. 5, pairs of the corresponding blocks Bk and B′k , in the current and reference pictures, respectively and the corresponding block B′k for Bk is decided by inter-view disparity motion estimation using IvMVk , where the left picture (the reference picture) is random access point (RAP) picture in the reference view. Note that Pk and P′k are the positions of upper-left corners in the blocks).
Regarding Claim 27, Lee in view of Cai & SHIMIZU discloses the method of claim 19,
Lee discloses wherein the point in the current frame corresponds to a pre-determined position in the current block (Section III-B, Fig. 5, pairs of the corresponding blocks Bk and B′k , in the current and reference pictures, respectively and the corresponding block B′k for Bk is decided by inter-view disparity motion estimation using IvMVk , where the left picture (the reference picture) is random access point (RAP) picture in the reference view. Note that Pk and P′k are the positions of upper-left corners in the blocks).
Regarding Claim 28, Lee in view of Cai & SHIMIZU discloses the method of claim 19,
Lee in view of Cai & SHIMIZU discloses wherein the point in the current frame corresponds to a top left sample of the current block (Section III-B, Fig. 5, pairs of the corresponding blocks Bk and B′k , in the current and reference pictures, respectively and the corresponding block B′k for Bk is decided by inter-view disparity motion estimation using IvMVk , where the left picture (the reference picture) is random access point (RAP) picture in the reference view. Note that Pk and P′k are the positions of upper-left corners in the blocks).
Regarding Claims 20 & 29-32, encoding method claims 20 & 29-32 correspond to the decoding method claimed in claims 19 & 25-28, and the rejections of those claims are incorporated herein for the same reasons as set forth above.
Regarding Claims 21 & 33-36, decoder apparatus claims 21 & 33-36 correspond to the decoding method claimed in claims 19 & 25-28, and the rejections of those claims are incorporated herein for the same reasons as set forth above.
Regarding Claims 22 & 37-40, encoder apparatus claims 22 & 37-40 correspond to the encoding method claimed in claims 20 & 29-32, and the rejections of those claims are incorporated herein for the same reasons as set forth above.
Regarding Claim 23, computer-readable medium claim 23 for decoding corresponds to the decoding method claimed in claims 19 & 25-28, and the rejections of those claims are incorporated herein for the same reasons as set forth above.
Regarding Claim 24, computer-readable medium claim 24 for encoding corresponds to the encoding method claimed in claims 20 & 29-32, and the rejections of those claims are incorporated herein for the same reasons as set forth above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samuel D Fereja whose telephone number is (469) 295-9243. The examiner can normally be reached 8 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DAVID CZEKAJ can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMUEL D FEREJA/
Primary Examiner, Art Unit 2487