Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Response to Arguments
The rejections of claims 29-42 and 46-51 for double patenting and under 35 U.S.C. 103 are withdrawn in light of the amendments to independent claims 22, 29 and 36. Newly filed claim 52 stands rejected as indicated below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 52 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Yan (2007/0086527) in view of Bull et al. (2008/186404) and further in view of Saigo et al. (2007/0110161).
In regard to claim 52, Yan discloses a video decoder apparatus comprising:
a memory (Yan Fig. 2 note RF storage 215 and reference frame storage 217); and
a processor in communication with the memory (Yan Fig. 2 and par. 26 note functional blocks may be software executed by a processor), the processor configured to:
determine a target block of a current frame (Yan Fig. 2 and par. 18 note performing operations on macroblocks (MB));
perform motion compensation (Yan Fig. 2 note motion compensation 219); and
perform an inverse quantization and an inverse transform of the current frame (Yan Fig. 2 and par. 18 note inverse quantization 203 and inverse transform 205 which are performed to decode a current frame).
Yan further discloses performing an error concealment process using spatio-temporal error concealment (Yan Fig. 2 and par. 19 and 24 note error detect and correct 209). It is noted that Yan does not disclose details of error concealment. However, Bull discloses a temporal error concealment method including:
identifying a first block of a previous frame based on the first block being a neighbor of a second block of the previous frame collocated with the target block (Bull pars. 18-30, particularly par. 24, note identifying a block collocated with a lost MB in a current frame and obtaining the MV of the collocated MB and the MVs of the eight MBs neighboring the collocated MB as candidate MVs; note a MB neighboring the collocated MB as a 'first block');
decode the target block of the current frame based at least in part on a first motion vector corresponding to the first block, wherein the first motion vector is relative to a block of a reference frame distinct from the previous frame (Bull Fig. 3 and pars. 18-30, particularly par. 27, note selecting a MV from among the candidate MVs, including that of the 'first block', and using the selected MV to decode the lost MB; note that the collocated block and its neighbors have MVs relative to a reference frame distinct from the reference frame of the lost MB, which will be the frame containing the collocated block, as the current frame is either a P or I frame as indicated by Fig. 3).
It would therefore have been obvious to one of ordinary skill in the art, at the time the invention was made, to incorporate the error concealment process of Bull into the invention of Yan in order to gain the advantage of performing error concealment that minimizes errors, as suggested by Yan (Yan par. 27).
Bull teaches selecting a MV from a previous frame to use in coding a target block in a current frame (Bull pars. 18-30). Bull further discloses bi-directional or 'B' frame coding (Bull par. 3). It is noted that neither Yan nor Bull discloses details of a second motion vector corresponding to a third block in a succeeding frame, or of motion compensation using two motion vectors.
However, Saigo discloses a method of coding a target block in a current B frame by obtaining a first motion vector from a block in a collocated frame, scaling the motion vector to generate a second motion vector corresponding to a block in a succeeding frame, and performing motion compensation using the first and second motion vectors (Saigo Fig. 8 and pars. 90-93 note obtaining a motion vector from a collocated block and performing temporal scaling on the motion vector to generate first and second motion vectors pointing to blocks in previous and succeeding frames to use in coding the current block; particularly note par. 93, motion compensation using motion vectors MvF and MvB).
It would therefore have been obvious to one of ordinary skill in the art, at the time the invention was made, to incorporate motion vector scaling to generate a second motion vector, and to perform motion compensation using both vectors, as taught by Saigo, into the invention of Yan and Bull in order to determine motion vectors for the B pictures of Yan in view of Bull for which no motion vectors are present, as suggested by Saigo (Saigo par. 93, note no motion vectors included).
Allowable Subject Matter
Claims 22-42 and 46-51 are allowed.
The following is an examiner’s statement of reasons for allowance:
Independent claims 22, 29 and 36 require identifying a first block in a previous frame which is a neighbor to a second block, also in the previous frame, that is collocated with a target block in a current frame, and decoding the target block using a first motion vector corresponding to the first block, the first motion vector using a reference frame distinct from the previous frame, and a second motion vector corresponding to a third block in a succeeding frame, the second motion vector using a second reference frame that occurs after the succeeding frame in the video sequence.
The closest prior art references are Yan, Bull and Saigo. Yan discloses conventional encoding and decoding with spatio-temporal error concealment. Bull teaches a particular method of temporal error concealment in which a target block may be reconstructed by using motion information of a temporally collocated block or its neighboring blocks in a previous frame, the motion information referring to a reference frame distinct from the previous frame. However, Bull does not disclose details relating to obtaining motion information from succeeding frames. Saigo discloses that a uni-directional motion vector may be scaled to generate a second motion vector that points to a succeeding frame. However, the combination of Yan, Bull and Saigo teaches only a second motion vector that points to the third block in the succeeding frame; it does not disclose determining a second motion vector that refers to a second reference frame occurring after, and distinct from, the succeeding frame that includes the third block, as required by the claims.
Claims 23-28, 30-35, 37-42 and 46-51 depend from claims 22, 29 and 36 respectively and are allowed for the same reasons.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMIAH CHARLES HALLENBECK-HUBER whose telephone number is (571)272-5248. The examiner can normally be reached Monday to Friday from 9 A.M. to 5 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEREMIAH C HALLENBECK-HUBER/Primary Examiner, Art Unit 2481