DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-6 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6 of U.S. Patent No. 12,256,065 B2 in view of Kadono et al. (US 2004/0233988). Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claims are obvious over the conflicting patent claims.
The differences between the instant claims and the conflicting patent claims are the added limitations of performing inverse-transformation for the entropy decoded quantized transform coefficients in instant claim 1, and performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream in instant claims 3 and 5. See the table below.
Kadono teaches performing inverse-transformation for the entropy decoded quantized transform coefficients (16 and 17 of fig. 3), and performing quantization for the transform coefficients and encoding the quantized transform coefficients (15 and 19 of fig. 3).
Taking the teachings of the Patent and Kadono together as a whole, it would have been obvious to one of ordinary skill in the art at the time of the invention to incorporate the inverse transformation, quantization, and encoding of Kadono into the encoding and decoding methods of the Patent in order to reduce the processing time for the prediction of the motion vector.
The instant claims 2, 4, and 6 are covered by the patent claims 2, 4, and 6.
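For illustration only (this is an assumption chosen for brevity, not the Patent's or Kadono's actual implementation), the residual-coding limitations at issue above can be sketched as a toy round trip: transformation, quantization, and entropy encoding on the encoder side, mirrored by entropy decoding, inverse-quantization, and inverse-transformation on the decoder side. The 2-point sum/difference transform, step-size quantizer, and signed Exp-Golomb entropy code below are stand-ins for the techniques a real codec would use.

```python
# Toy residual-coding round trip (illustrative assumption, not the claimed
# or referenced implementation).

def transform(residual):
    # Pairwise sum/difference transform (a stand-in for a DCT).
    coeffs = []
    for i in range(0, len(residual), 2):
        coeffs += [residual[i] + residual[i + 1], residual[i] - residual[i + 1]]
    return coeffs

def inverse_transform(coeffs):
    residual = []
    for i in range(0, len(coeffs), 2):
        s, d = coeffs[i], coeffs[i + 1]
        residual += [(s + d) // 2, (s - d) // 2]
    return residual

def quantize(coeffs, qstep):
    return [round(c / qstep) for c in coeffs]

def inverse_quantize(levels, qstep):
    return [lvl * qstep for lvl in levels]

def entropy_encode(levels):
    # Signed Exp-Golomb code for each quantized level.
    bits = ""
    for v in levels:
        u = 2 * v - 1 if v > 0 else -2 * v      # signed -> unsigned mapping
        code = bin(u + 1)[2:]
        bits += "0" * (len(code) - 1) + code    # zero prefix + codeword
    return bits

def entropy_decode(bits, count):
    levels, pos = [], 0
    for _ in range(count):
        zeros = 0
        while bits[pos] == "0":
            zeros += 1
            pos += 1
        u = int(bits[pos:pos + zeros + 1], 2) - 1
        pos += zeros + 1
        levels.append((u + 1) // 2 if u % 2 else -(u // 2))
    return levels

# Encoder: transform -> quantization -> entropy encoding into the bit stream.
residual = [5, 3, -2, 4]
stream = entropy_encode(quantize(transform(residual), qstep=2))
# Decoder: entropy decoding -> inverse-quantization -> inverse-transformation.
decoded = inverse_transform(inverse_quantize(entropy_decode(stream, 4), qstep=2))
```

Here the round trip happens to be lossless because the toy coefficients are multiples of the quantization step; in general, quantization is the lossy step of the pipeline.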
Application 19/407,693 (instant claims 1-6, reproduced first below)
Patent US 12256065 B2 (patent claims 1-6, reproduced second below)
1. An image decoding method with a decoding apparatus, the image decoding method comprising:
(a) receiving an encoded bit stream of a current picture to be decoded;
(b) recovering a selected motion vector from the current picture to a first decoding reference picture by entropy decoding the encoded bit stream of the current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of a current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector;
(e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream; and
(f) recovering the current block based on the prediction block and the residual block,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients.
2. The image decoding method according to claim 1, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
3. An image encoding method with an encoding apparatus, the image encoding method comprising:
(a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block;
(b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into a bit stream of a current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a residual block relating to the current block based on the prediction block; and
(e) encoding the residual block into the bit stream,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients, performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream.
4. The image encoding method according to claim 3, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
5. A non-transitory computer-readable recording medium storing a bit stream which is generated by an image encoding method with an encoding apparatus, the image encoding method comprising:
(a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block;
(b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into the bit stream of a current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a residual block relating to the current block based on the prediction block; and
(e) encoding the residual block into the bit stream,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients, performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream.
6. The non-transitory computer-readable recording medium according to claim 5, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
1. An image decoding method with a decoding apparatus, the image decoding method comprising:
(a) receiving an encoded bit stream of a current picture to be decoded;
(b) recovering a selected motion vector from the current picture to a first decoding reference picture by entropy decoding the encoded bit stream of the current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of a current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector;
(e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream; and
(f) recovering the current block based on the prediction block and the residual block,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded transform coefficients.
2. The image decoding method according to claim 1, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
3. An image encoding method with an encoding apparatus, the image encoding method comprising:
(a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block;
(b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into a bit stream of a current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a residual block relating to the current block based on the prediction block; and
(e) encoding the residual block into the bit stream,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients and encoding the transform coefficients into the bit stream.
4. The image encoding method according to claim 3, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
5. A non-transitory computer-readable recording medium storing a bit stream which is generated by an image encoding method with an encoding apparatus, the image encoding method comprising:
(a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block;
(b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into the bit stream of a current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a residual block relating to the current block based on the prediction block; and
(e) encoding the residual block into the bit stream,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients and encoding the transform coefficients into the bit stream.
6. The non-transitory computer-readable recording medium according to claim 5, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
Claims 1-6 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6 of U.S. Patent No. 11,863,740 B2 in view of Kadono et al. (US 2004/0233988). Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claims are obvious over the conflicting patent claims.
The differences between the instant claims and the conflicting patent claims are the added limitations of performing inverse-transformation for the entropy decoded quantized transform coefficients in instant claim 1, and performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream in instant claims 3 and 5. See the table below.
Kadono teaches performing inverse-transformation for the entropy decoded quantized transform coefficients (16 and 17 of fig. 3), and performing quantization for the transform coefficients and encoding the quantized transform coefficients (15 and 19 of fig. 3).
Taking the teachings of the Patent and Kadono together as a whole, it would have been obvious to one of ordinary skill in the art at the time of the invention to incorporate the inverse transformation, quantization, and encoding of Kadono into the encoding and decoding methods of the Patent in order to reduce the processing time for the prediction of the motion vector.
The instant claims 2, 4, and 6 are covered by the patent claims 2, 4, and 6.
Application 19/407,693 (instant claims 1-6, reproduced first below)
Patent US 11863740 B2 (patent claims 1-6, reproduced second below)
1. An image decoding method with a decoding apparatus, the image decoding method comprising:
(a) receiving an encoded bit stream of a current picture to be decoded;
(b) recovering a selected motion vector from the current picture to a first decoding reference picture by entropy decoding the encoded bit stream of the current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of a current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector;
(e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream; and
(f) recovering the current block based on the prediction block and the residual block,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients.
2. The image decoding method according to claim 1, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
3. An image encoding method with an encoding apparatus, the image encoding method comprising:
(a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block;
(b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into a bit stream of a current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a residual block relating to the current block based on the prediction block; and
(e) encoding the residual block into the bit stream,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients, performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream.
4. The image encoding method according to claim 3, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
5. A non-transitory computer-readable recording medium storing a bit stream which is generated by an image encoding method with an encoding apparatus, the image encoding method comprising:
(a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block;
(b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into the bit stream of a current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a residual block relating to the current block based on the prediction block; and
(e) encoding the residual block into the bit stream,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients, performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream.
6. The non-transitory computer-readable recording medium according to claim 5, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
1. An image decoding method with a decoding apparatus, the image decoding method comprising:
(a) receiving an encoded bit stream of a current picture to be decoded;
(b) recovering a selected motion vector from the current picture to a first decoding reference picture by entropy decoding the encoded bit stream of the current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of a current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector;
(e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream; and
(f) recovering the current block based on the prediction block and the residual block,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is generated by performing inverse-quantization for entropy decoded transform coefficients.
2. The image decoding method according to claim 1, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
3. An image encoding method with an encoding apparatus, the image encoding method comprising:
(a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block;
(b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into a bit stream of a current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a residual block relating to the current block based on the prediction block; and
(e) encoding the residual block into the bit stream,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is generated by performing quantization for transform coefficients relating to the residual block.
4. The image encoding method according to claim 3, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
5. A non-transitory computer-readable recording medium storing a bit stream which is generated by an image encoding method with an encoding apparatus, the image encoding method comprising:
(a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block;
(b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into the bit stream of a current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a residual block relating to the current block based on the prediction block; and
(e) encoding the residual block into the bit stream,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is generated by performing quantization for transform coefficients relating to the residual block.
6. The non-transitory computer-readable recording medium according to claim 5, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
Claims 1-2 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6-7 of U.S. Patent No. 8,526,499 B2 in view of Kadono et al. (US 2004/0233988). Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claims are obvious over the conflicting patent claims.
The difference between instant claim 1 and the conflicting patent claims is the addition of the following limitations: (d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector; (e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream; and wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients. See the table below.
Kadono teaches (d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector (1005 and 1006 of fig. 13, [0106]); (e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream (1000 of fig. 13, variable length coding unit for decoding the bitstream); and wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients (1001, 1002, and 1003 of fig. 13).
Taking the teachings of the Patent and Kadono together as a whole, it would have been obvious to one of ordinary skill in the art at the time of the invention to incorporate the teachings of Kadono into the decoding method of the Patent in order to reduce the processing time for the prediction of the motion vector.
The instant claim 2 is covered by the patent claim 7.
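For context on the temporal-distance scaling limitation common to all three rejections, a minimal sketch follows. The ratio form (second temporal distance over first) and the helper name are assumptions for illustration only; the claims themselves recite multiplying the selected motion vector by the second temporal distance, and here a sign difference between the two temporal distances models the case where the prediction directions differ.

```python
# Illustrative sketch (an assumption, not the claimed method's exact form):
# scaling a motion vector from one reference picture to another by the
# ratio of signed temporal distances from the current picture.

def scale_motion_vector(mv, td_first, td_second):
    """Scale mv (pointing to the first reference picture) so it points to
    the second reference picture.

    td_first / td_second are signed temporal distances from the current
    picture; opposite signs (opposite prediction directions) flip the
    scaled vector's direction.
    """
    factor = td_second / td_first
    return (round(mv[0] * factor), round(mv[1] * factor))

# Same prediction direction: the vector stretches with the larger distance.
mv_same = scale_motion_vector((8, -4), td_first=2, td_second=4)
# Opposite prediction direction: the scaled vector is flipped.
mv_opp = scale_motion_vector((8, -4), td_first=2, td_second=-2)
```

A real codec (e.g., the temporal direct mode of H.264/AVC) performs this scaling in fixed-point integer arithmetic rather than floating point, but the direction-flip behavior is the same.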
Application 19/407,693 (instant claims 1-2, reproduced first below)
Patent US 8526499 B2 (patent claims 6-7, reproduced second below)
1. An image decoding method with a decoding apparatus, the image decoding method comprising:
(a) receiving an encoded bit stream of a current picture to be decoded;
(b) recovering a selected motion vector from the current picture to a first decoding reference picture by entropy decoding the encoded bit stream of the current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of a current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector;
(e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream; and
(f) recovering the current block based on the prediction block and the residual block,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients.
2. The image decoding method according to claim 1, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
6. A bi-prediction decoding method for decoding bi-prediction coded data by using a plurality of reference pictures, the method comprising the steps of:
(a) determining whether or not a current block to be decoded is a bi-prediction coding mode by analyzing the coded data;
(b) recovering a decoding target motion vector by decoding the coded data in a case where it is determined that the current block is the bi-prediction coding mode;
(c) calculating a non-decoding target motion vector corresponding to a second reference picture based on the recovered decoding target motion vector, a temporal distance between a current picture to which the current block belongs and a first decoding reference picture corresponding to the decoding target motion vector and a temporal distance between the current picture and a second decoding reference picture; and
(d) recovering the current block based on a generated prediction block by generating the prediction block for the current block based on the recovered decoding target motion vector and the calculated non-decoding target motion vector, wherein the decoding target motion vector, which is recovered by decoding the coded data, and the non-decoding target motion vector, which is calculated based on the decoding target motion vector, are related to the identical current block.
7. The bi-prediction decoding method according to claim 6, wherein in the step (c), the non-coding target motion vector is calculated by multiplying relative temporal distances between the current picture, and the first reference picture and the second reference picture by the decoding target motion vector.
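For context, the temporal-distance scaling recited in element (c) of the instant claims can be illustrated with a short sketch. This is a generic illustration only; the function and type names, and the use of signed picture-order distances, are assumptions for exposition and are not taken from the claims or the cited references:

```python
from dataclasses import dataclass

@dataclass
class MotionVector:
    x: int
    y: int

def scale_motion_vector(mv, td_first, td_second):
    """Scale a recovered motion vector by the ratio of temporal distances.

    td_first:  signed temporal distance between the current picture and the
               first decoding reference picture (e.g., a picture-order-count
               difference).
    td_second: signed temporal distance between the current picture and the
               second decoding reference picture.

    When the two reference pictures lie on opposite sides of the current
    picture (opposite prediction directions), the distance ratio is negative
    and the derived vector's sign flips, consistent with the claims'
    dependence on whether the prediction directions are identical or
    different.
    """
    if td_first == 0:
        raise ValueError("temporal distance to the first reference must be nonzero")
    return MotionVector(
        x=round(mv.x * td_second / td_first),
        y=round(mv.y * td_second / td_first),
    )

# Same-direction reference at twice the distance: the vector doubles.
assert scale_motion_vector(MotionVector(8, -4), td_first=2, td_second=4) == MotionVector(16, -8)
# Opposite-direction reference: the sign flips.
assert scale_motion_vector(MotionVector(8, -4), td_first=2, td_second=-2) == MotionVector(-8, 4)
```

The sketch corresponds to the multiplication of the decoded vector by relative temporal distances recited in patent claim 7 above.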
Claims 1-2 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6-7 of U.S. Patent No. 10178383 B2 in view of Kadono et al. (US 2004/0233988). Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claims are obvious over the conflicting patent claims.
The difference between instant claim 1 and the conflicting patent claims is the addition of the limitations (d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector; (e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream; and wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients. See the table below.
Kadono teaches (d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector (1005 and 1006 of fig. 13, [0106]); (e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream (1000 of fig. 13, variable length coding unit for decoding the bitstream); and wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients (1001, 1002 and 1003 of fig. 13).
Taking the teachings of the Patent and Kadono together as a whole, it would have been obvious to one of ordinary skill in the art at the time the invention was made to incorporate the teachings of Kadono into the decoding apparatus of the Patent in order to reduce the processing time for the prediction of the motion vector.
Instant claim 2 is covered by patent claim 7.
Application 19/407,693
Patent US 10178383 B2
1. An image decoding method with a decoding apparatus, the image decoding method comprising:
(a) receiving an encoded bit stream of a current picture to be decoded;
(b) recovering a selected motion vector from the current picture to a first decoding reference picture by entropy decoding the encoded bit stream of the current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of a current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector;
(e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream; and
(f) recovering the current block based on the prediction block and the residual block,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients.
2. The image decoding method according to claim 1, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
6. A bi-prediction decoding method for decoding bi-prediction coded data by using a plurality of reference pictures, the method comprising the steps of:
(a) determining whether or not a current block to be decoded is a bi-prediction coding mode by analyzing the coded data;
(b) recovering a decoding target motion vector by decoding the coded data in a case where it is determined that the current block is the bi-prediction coding mode;
(c) calculating a non-decoding target motion vector corresponding to a second reference picture based on the recovered decoding target motion vector, a temporal distance between a current picture to which the current block belongs and a first decoding reference picture corresponding to the decoding target motion vector and a temporal distance between the current picture and a second decoding reference picture; and
(d) recovering the current block based on a generated prediction block by generating the prediction block for the current block based on the recovered decoding target motion vector and the calculated non-decoding target motion vector,
wherein the decoding target motion vector, which is recovered by decoding the coded data, and the non-decoding target motion vector, which is calculated based on the decoding target motion vector, are related to the identical current block.
7. The bi-prediction decoding method according to claim 6, wherein in the step (c), the non-coding target motion vector is calculated by multiplying relative temporal distances between the current picture, and the first reference picture and the second reference picture by the decoding target motion vector.
Claims 1-2 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2 of U.S. Patent No. 11438575 B2 in view of Kadono et al. (US 2004/0233988). Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claims are obvious over the conflicting patent claims.
The difference between instant claim 1 and the conflicting patent claims is the addition of the limitation wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients. See the table below.
Kadono teaches wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients (1000, 1001, and 1002 of fig. 13).
Taking the teachings of the Patent and Kadono together as a whole, it would have been obvious to one of ordinary skill in the art at the time the invention was made to incorporate the teachings of Kadono into the decoding apparatus of the Patent in order to reduce the processing time for the prediction of the motion vector.
Instant claim 2 is covered by patent claim 2.
Claims 3-6 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2 of U.S. Patent No. 11438575 B2 in view of Kadono et al. (US 2004/0233988). Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claims are obvious over the conflicting patent claims.
The difference between the instant claims and the conflicting patent claims is the addition of the limitations: an image encoding method with an encoding apparatus comprising (d) generating a residual block relating to the current block based on the prediction block and (e) encoding the residual block into the bit stream, wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients, performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream, in instant claims 3 and 5; and a non-transitory computer-readable recording medium storing a bit stream which is generated by an image encoding method with an encoding apparatus, in instant claim 5. See the table below.
Kadono teaches an image encoding method with an encoding apparatus (fig. 3) comprising (d) generating a residual block relating to the current block based on the prediction block (13 of fig. 3, [0048]) and (e) encoding the residual block into the bit stream (19 of fig. 3, stream), wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients, performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream (13, 14, 15, and 16 of fig. 3); and a non-transitory computer-readable recording medium storing a bit stream which is generated by an image encoding method with an encoding apparatus ([0111] and [0112]).
Taking the teachings of the Patent and Kadono together as a whole, it would have been obvious to one of ordinary skill in the art at the time the invention was made to incorporate the teachings of Kadono into the decoding apparatus of the Patent in order to reduce the processing time for the prediction of the motion vector.
Instant claims 4 and 6 are covered by patent claim 2.
Application 19/407,693
Patent US 11438575 B2
1. An image decoding method with a decoding apparatus, the image decoding method comprising:
(a) receiving an encoded bit stream of a current picture to be decoded;
(b) recovering a selected motion vector from the current picture to a first decoding reference picture by entropy decoding the encoded bit stream of the current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of a current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector;
(e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream; and
(f) recovering the current block based on the prediction block and the residual block,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients.
2. The image decoding method according to claim 1, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
3. An image encoding method with an encoding apparatus, the image encoding method comprising:
(a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block;
(b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into a bit stream of a current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a residual block relating to the current block based on the prediction block; and
(e) encoding the residual block into the bit stream,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients, performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream.
4. The image encoding method according to claim 3, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
5. A non-transitory computer-readable recording medium storing a bit stream which is generated by an image encoding method with an encoding apparatus, the image encoding method comprising:
(a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block;
(b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into the bit stream of a current picture, the first decoding reference picture being different from the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a residual block relating to the current block based on the prediction block; and
(e) encoding the residual block into the bit stream,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector, and
wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients, performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream.
6. The non-transitory computer-readable recording medium according to claim 5, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
1. An image decoding method with a decoding apparatus, the image decoding method comprising:
(a) receiving an encoded bit stream of a current picture to be decoded;
(b) recovering a selected motion vector from the current picture to a first decoding reference picture by entropy decoding the encoded bit stream of the current picture, the first decoding reference picture being different from the current picture, the selected motion vector being obtained using a motion vector of a neighboring block adjacent to a current block, the neighboring block being included in the current picture;
(c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture;
(d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector;
(e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream; and
(f) recovering the current block based on the prediction block and the residual block,
wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector.
2. The image decoding method according to claim 1, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
3. An image decoding apparatus comprising: one or more processors configured to receive an encoded bit stream of a current picture to be decoded, recover a selected motion vector from the current picture to a first decoding reference picture based on the entropy decoded bit stream of the current picture, the first decoding reference picture being different from the current picture, the selected motion vector being obtained using a motion vector of a neighboring block adjacent to a current block, the neighboring block being included in the current picture, calculate a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture, generate a prediction block relating to the current block in the current picture, based on the calculated motion vector, generate a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream, and recover the current block based on the prediction block and the residual block, wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector.
4. The image decoding apparatus according to claim 3, wherein scaling the selected motion vector comprises multiplying the selected motion vector by the second temporal distance.
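For context, the residual-coding pipeline recited in the claims above (transformation and quantization on the encoding side; inverse quantization and inverse transformation on the decoding side) can be illustrated with a simplified round trip. The order-4 Walsh-Hadamard transform and flat quantization step below are stand-ins chosen for brevity; they are assumptions for exposition and do not reproduce the transform of the claims or of Kadono, and entropy coding is omitted:

```python
def transform4(block):
    """Order-4 Walsh-Hadamard transform: a simple orthogonal block
    transform, used here as a stand-in for a codec's DCT-style transform."""
    a, b, c, d = block
    return [a + b + c + d, a + b - c - d, a - b - c + d, a - b + c - d]

def inverse_transform4(coeffs):
    # The transform matrix above is symmetric and orthogonal, so it is
    # its own inverse up to a factor of 4.
    return [x / 4 for x in transform4(coeffs)]

def encode_residual(block, qstep):
    """Encoding-side order per the claims: transform the residual block,
    then quantize the transform coefficients."""
    return [round(c / qstep) for c in transform4(block)]

def decode_residual(levels, qstep):
    """Decoding-side order per the claims: inverse-quantize the decoded
    coefficient levels, then inverse-transform to recover the residual."""
    return inverse_transform4([lvl * qstep for lvl in levels])

# Round trip: with a step size that evenly divides the coefficients,
# the residual block is recovered exactly.
residual = [10, 12, 9, 11]
assert decode_residual(encode_residual(residual, qstep=2), qstep=2) == residual
```

With a coarser step size the round trip is lossy, which is the usual trade-off between rate and distortion in transform coding.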
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of application for patent in the United States.
Claims 5-6 are rejected under pre-AIA 35 U.S.C. 102(b) as being anticipated by Stevenson et al. (US 6081552 A1).
Claim 5 is directed to a non-transitory computer-readable recording medium (CRM) storing a bit stream generated by an image encoding method. The claim does not recite that the CRM contains executable instructions that, when executed, implement the image encoding method. The bit stream is a product produced by the image encoding method. Therefore, the claim is not limited to the recited steps, only to the structure implied by the steps. (See MPEP 2113 - Product-by-Process Claims.) Hence, the recited image encoding method steps are given patentable weight only as to structures in the bit stream that are implied by the steps.
To be given patentable weight, the CRM and the bit stream (i.e., descriptive material) must be in a functional relationship. A functional relationship can be found where the descriptive material performs some function with respect to the CRM with which it is associated. See MPEP §2111.05(I)(A). When a claimed "computer-readable medium merely serves as a support for information or data, no functional relationship exists". MPEP §2111.05(III).
The CRM of claim 5 merely serves as a support for the stored bit stream, and no functional relationship exists between the stored bit stream and the CRM.
Therefore, the bit stream structure, whose scope is implied by the method steps, is nonfunctional descriptive material and is given no patentable weight. MPEP §2111.05(III).
Thus, the claim scope is merely a storage medium storing data, and the claim is anticipated by Stevenson et al. (US 6081552 A1), which discloses a memory device for storing a bit stream (112 of fig. 1; Col. 2, lines 64-65, "the resulting encoded video bitstream is then stored to memory device 112 via memory interface 110").
Dependent claim 6 is rejected for the same reasons as independent claim 5.
Allowable Subject Matter
Claims 1-6 would be allowable if the nonstatutory double patenting rejections and the 35 U.S.C. 102 rejection based on nonfunctional descriptive material were overcome.
Kadono (US 20040233988 A1) discloses (a) receiving an encoded bit stream of a current picture to be decoded (1000 of fig. 13); (d) generating a prediction block relating to the current block in the current picture, based on the calculated motion vector (1005 and 1006 of fig. 13); (e) generating a residual block relating to the current block through a residual data decoding process based on the entropy decoded bit stream (1000 of fig. 13); (f) recovering the current block based on the prediction block and the residual block (1003 of fig. 13); and wherein the residual block is generated by decoding the encoded bit stream to obtain entropy decoded transform coefficients, performing inverse-quantization for the entropy decoded transform coefficients and performing inverse-transformation for the entropy decoded quantized transform coefficients (1001 and 1002 of fig. 13).
Kadono (US 20040233988 A1) does not disclose (b) recovering a selected motion vector from the current picture to a first decoding reference picture by entropy decoding the encoded bit stream of the current picture, the first decoding reference picture being different from the current picture; (c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of a current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture; wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector in claim 1.
Kadono (US 20040233988 A1) discloses (a) generating a prediction block relating to a current block in a current picture by performing inter prediction on the current block (11 and 12 of fig. 3); (d) generating a residual block relating to the current block based on the prediction block (13 of fig. 3); and (e) encoding the residual block into the bit stream (19 of fig. 3), wherein the residual block is encoded by performing transformation for residual coefficients relating to the residual block to obtain transform coefficients, performing quantization for the transform coefficients and encoding the quantized transform coefficients into the bit stream (14 and 15 of fig. 3).
Kadono (US 20040233988 A1) does not teach (b) identifying a selected motion vector from the current picture to a first decoding reference picture, the selected motion vector being encoded into the bit stream of a current picture, the first decoding reference picture being different from the current picture; (c) calculating a calculated motion vector from the current picture to a second decoding reference picture by scaling the selected motion vector based on a first temporal distance between the current picture and the first decoding reference picture and a second temporal distance between the current picture and the second decoding reference picture, the calculated motion vector being used for inter prediction of the current block belonging to the current picture, the second decoding reference picture being different from the first decoding reference picture; wherein the calculating the calculated motion vector is performed based on whether a prediction direction for the selected motion vector is identical to or different from a prediction direction for the calculated motion vector in claims 3 and 5.
Dependent claims 2, 4, and 6 are allowable for the same reasons as independent claims 1, 3, and 5.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Winger (US 7,020,200 B1) discloses a system and method for direct motion vector prediction in bi-predictive video frames and fields.
Thoreau et al. (US 20090016439 A1) discloses a method wherein the field or frame mode is selected according to the following steps: determining a motion vector associated with a co-located macroblock of a macroblock to be coded, found in the next reference picture; and, for the selection in field or frame mode, scaling the motion vector according to the temporal distances between the reference pictures corresponding to this motion vector and the current picture field or frame.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUNG T VO whose telephone number is (571)272-7340. The examiner can normally be reached on Monday-Friday 6:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached on 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
TUNG T. VO
Primary Examiner
Art Unit 2425
/TUNG T VO/Primary Examiner, Art Unit 2425