Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the amendment filed 01/09/2026, in which claims 1, 3-4, and 6 are pending.
Response to Arguments
Applicant's arguments, see pages 6-11, filed 01/19/2026, with respect to the rejections of the claims have been fully considered but are moot in view of the new grounds of rejection, which rely on the teachings of Song et al. (US 2015/0288969 A1).
Double Patenting
4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
5. Claims 1, 3-4, and 6 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-4, and 6 of copending Application No. 18/964,088. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are anticipated by the conflicting claims, as shown in the table below. The difference between the instant examined claims and the conflicting claims is that each conflicting claim is narrower in scope and falls within the scope of the corresponding examined claim.
Instant Application: 18/964,083
Co-pending Application: 18/964,088
1. A method of decoding an image with a decoding apparatus, comprising: obtaining syntax elements for the image from a bitstream; generating a prediction block and a residual block by decoding the syntax elements; reconstructing the image based on the prediction block and the residual block; and performing a post image processing on the reconstructed image based on post image processing information included in the bitstream, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, and wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
1. A method of decoding an image with a decoding apparatus, comprising: obtaining syntax elements for the image from a bitstream; generating a prediction block and a residual block by decoding the syntax elements; reconstructing the image based on the prediction block and the residual block; and performing a post image processing on the reconstructed image based on post image processing information included in the bitstream, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image
wherein the syntax elements are decoded using a context based coding,
and wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, and wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
3. The method of claim 1, wherein the resizing is either increasing or decreasing the width or height of the reconstructed image.
3. The method of claim 1, wherein the resizing is either increasing or decreasing the width or height of the reconstructed image.
4. A method of encoding an image with an encoding apparatus, comprising: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; and encoding post image processing information into the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
and wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
4. A method of encoding an image with an encoding apparatus, comprising: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; and encoding post image processing information into the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the syntax elements are encoded using a context based coding, and wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
6. A method of transmitting a bitstream comprises: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; encoding post image processing information into the bitstream, and transmitting the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
and wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
6. A method of transmitting a bitstream comprises: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; and encoding post image processing information into the bitstream; and transmitting the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image wherein the syntax elements are encoded using a context based coding, and wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
6. Claims 1, 3-4, and 6 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-4, and 6 of copending Application No. 18/964,096. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are anticipated by the conflicting claims, as shown in the table below. The difference between the instant examined claims and the conflicting claims is that each conflicting claim is narrower in scope and falls within the scope of the corresponding examined claim.
Instant Application: 18/964,083
Co-pending Application: 18/964,096
1. A method of decoding an image with a decoding apparatus, comprising: obtaining syntax elements for the image from a bitstream; generating a prediction block and a residual block by decoding the syntax elements; reconstructing the image based on the prediction block and the residual block; and performing a post image processing on the reconstructed image based on post image processing information included in the bitstream, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
1. A method of decoding an image with a decoding apparatus, comprising: obtaining syntax elements for the image from a bitstream; generating a prediction block and a residual block by decoding the syntax elements; reconstructing the image based on the prediction block and the residual block; and performing a post image processing on the reconstructed image based on post image processing information included in the bitstream, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the prediction block is added to the residual block corresponding to each other, wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
3. The method of claim 1, wherein the resizing is either increasing or decreasing the width or height of the reconstructed image.
3. The method of claim 1, wherein the resizing is either increasing or decreasing the width or height of the reconstructed image.
4. A method of encoding an image with an encoding apparatus, comprising: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; and encoding post image processing information into the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
4. A method of encoding an image with an encoding apparatus, comprising: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, and encoding post image processing information into the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the prediction block is added to the residual block corresponding to each other,
wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
6. A method of transmitting a bitstream comprises: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; encoding post image processing information into the bitstream, and transmitting the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
6. A method of transmitting a bitstream comprises: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; and encoding post image processing information into the bitstream, and transmitting the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the prediction block is added to the residual block corresponding to each other, and wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
7. Claims 1, 3-4, and 6 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-4, and 6 of copending Application No. 18/964,100. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are anticipated by the conflicting claims, as shown in the table below. The difference between the instant examined claims and the conflicting claims is that each conflicting claim is narrower in scope and falls within the scope of the corresponding examined claim.
Instant Application: 18/964,083
Co-pending Application: 18/964,100
1. A method of decoding an image with a decoding apparatus, comprising: obtaining syntax elements for the image from a bitstream; generating a prediction block and a residual block by decoding the syntax elements; reconstructing the image based on the prediction block and the residual block; and performing a post image processing on the reconstructed image based on post image processing information included in the bitstream, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
1. A method of decoding an image with a decoding apparatus, comprising: obtaining syntax elements for the image from a bitstream; generating a prediction block and a residual block by decoding the syntax elements; reconstructing the image based on the prediction block and the residual block; and performing a post image processing on the reconstructed image based on post image processing information included in the bitstream, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the syntax elements comprise at least one syntax element for a prediction mode for generating the prediction block, and wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
3. The method of claim 1, wherein the resizing is either increasing or decreasing the width or height of the reconstructed image.
3. The method of claim 1, wherein the resizing is either increasing or decreasing the width or height of the reconstructed image.
4. A method of encoding an image with an encoding apparatus, comprising: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; and encoding post image processing information into the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
4. A method of encoding an image with an encoding apparatus, comprising: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; and encoding post image processing information into the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the syntax elements comprise at least one syntax element for a prediction mode for generating the prediction block, wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
6. A method of transmitting a bitstream comprises: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; encoding post image processing information into the bitstream; and transmitting the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
and wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
6. A method of transmitting a bitstream comprises: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; encoding post image processing information into the bitstream; and transmitting the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the syntax elements comprise at least one syntax element for a prediction mode for generating the prediction block, wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
8. Claims 1, 3-4, and 6 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-4, and 6 of copending Application No. 18/964,091. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are anticipated by the conflicting claims, as shown in the table below. The difference between the instant examined claims and the conflicting claims is that each conflicting claim is narrower in scope and falls within the scope of the corresponding examined claim.
Instant Application: 18/964,083
Co-pending Application: 18/964,091
1. A method of decoding an image with a decoding apparatus, comprising: obtaining syntax elements for the image from a bitstream; generating a prediction block and a residual block by decoding the syntax elements; reconstructing the image based on the prediction block and the residual block; and performing a post image processing on the reconstructed image based on post image processing information included in the bitstream, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
A method of decoding an image with a decoding apparatus, comprising: obtaining syntax elements for the image from a bitstream; generating a prediction block and a residual block by decoding the syntax elements; reconstructing the image based on the prediction block and the residual block; and performing a post image processing on the reconstructed image based on post image processing information included in the bitstream, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the prediction block is added to the residual block corresponding to each other, wherein the syntax elements are decoded using a context based coding, wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
3. The method of claim 1, wherein the resizing is either increasing or decreasing the width or height of the reconstructed image.
3. The method of claim 1, wherein the resizing is either increasing or decreasing the width or height of the reconstructed image.
4. A method of encoding an image with an encoding apparatus, comprising: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; and encoding post image processing information into the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
and wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
4. A method of encoding an image with an encoding apparatus, comprising: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; and encoding post image processing information into the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the prediction block is added to the residual block corresponding to each other, wherein the syntax elements are encoded using a context based coding, wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
6. A method of transmitting a bitstream comprises: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; encoding post image processing information into the bitstream, and transmitting the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image,
and wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
6. A method of transmitting a bitstream comprises: generating a prediction block and a residual block for the image; obtaining syntax elements for the prediction block and the residual block; encoding the syntax elements into a bitstream; encoding post image processing information into the bitstream; and transmitting the bitstream, wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block, wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the prediction block is added to the residual block corresponding to each other, wherein the syntax elements are encoded using a context based coding, wherein the post image processing comprises padding at least one region to the reconstructed image or resizing the reconstructed image, and wherein information on a scale factor for the resizing is included in the post image processing information in response to the resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size.
Claim Rejections - 35 USC § 103
9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
10. Claims 1, 3-4, 6 are rejected under 35 U.S.C. 103 as being unpatentable over Hannuksela et al. (US 2017/0085917 A1) in view of Alshina et al. (US 2016/0337651 A1) and Song et al. (US 2015/0288969 A1).
Regarding claim 1, Hannuksela discloses a method of decoding an image with a decoding apparatus (FIG. 5 illustrates a block diagram of a video decoder), comprising: obtaining syntax elements for the image from a bitstream (para[0365] teaches context-based Adaptive Binary Arithmetic Coding (CABAC), a type of entropy coder, is a lossless compression tool to code syntax elements (SEs); SEs are the information that describe how a video has been encoded and how it should be decoded; para[0388] teaches the decoder may decode from the bitstream one or more syntax elements); generating a prediction block and a residual block by decoding the syntax elements (para[0159] & Fig. 5 teaches a video decoder with prediction error decoding and pixel prediction; Para[0365] teaches SEs are typically defined for all the prediction methods (e.g. CU/PU/TU partition, prediction type, intra prediction mode, motion vectors, etc.) and prediction error (residual) coding information (e.g. residual skip/split, transform skip/split, coefficient_last_x, coefficient_last_y, significant_coefficient, etc.)); reconstructing the image based on the prediction block and the residual block (para[0159] & Fig. 5 teaches P′n: Predicted representation of an image block; D′n: Reconstructed prediction error signal; I′n: Preliminary reconstructed image; R′n: Final reconstructed image; Para[0031] teaches prediction of the samples of the border region, reconstruction of the samples of the border region, and obtaining a prediction block for intra prediction based on the one or more sample values); and performing a post image processing on the reconstructed image based on post image processing information included in the bitstream (para[0266], [0376] & Fig. 8 teaches extending the reference picture to be larger (in width and/or height) compared to the coded picture; para[0318] teaches when both scale factors are less than 1, a pre-defined downsampling process may be inferred, and when both scale factors are greater than 1, a pre-defined upsampling process may be inferred; Para[0330]-[0331] teaches the reconstructed/decoded base-layer picture may be upsampled prior to its insertion into the reference picture lists for an enhancement-layer picture; para[0362] & Fig. 10 teaches a scaled/upsampled base layer 1010; Para[0401] & Fig. 13 teaches upsampling at least a part of the 360-degree panoramic source picture; Para[0421] teaches the decoder decodes from the bitstream whether sample locations outside a picture boundary and/or parameters associated to locations outside a picture boundary), wherein the post image processing comprises padding at least one region to the reconstructed image (para[0329] teaches in the resampling process of SHVC, the source picture for inter-layer prediction may be cropped, upsampled and/or padded to obtain an ILR picture; the relative position of the upsampled source picture for inter-layer prediction to the enhancement layer picture is indicated through so-called reference layer location offsets).
Hannuksela does not explicitly disclose wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the post image processing comprises resizing the reconstructed image, wherein information on a scale factor for the resizing is obtained from the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, and wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size. However, Alshina discloses wherein the post image processing comprises resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to resizing being determined to be performed (Fig. 2A teaches scalable video decoding apparatus 200 & Para[0122]-[0128] teaches the scale ratio determiner 220 determines a scale ratio according to encoding information obtained by the encoding information obtainer 210; the scale ratio indicates a ratio of the reference area to the expanded reference area & Para[0144]-[0148] & Fig. 2B teaches the scale ratio determiner performing operations 22-24), the scale factor directly indicating a scale ratio for the resizing (Para[0226] teaches the scalable video decoding apparatus 200 may determine a horizontal scale ratio and a vertical scale ratio by using the height and width of the reference area and the height and width of the expanded reference area. The horizontal scale ratio and the vertical scale ratio may be determined according to Equation 5 and Equation 6.
SpatialScaleFactorHorY=((RefLayerRegionWidthInSamplesY<<16)+(ScaledRefRegionWidthInSamplesY>>1))/ScaledRefRegionWidthInSamplesY [Equation 5]
SpatialScaleFactorVerY=((RefLayerRegionHeightInSamplesY<<16)+(ScaledRefRegionHeightInSamplesY>>1))/ScaledRefRegionHeightInSamplesY [Equation 6]
[0227] In Equation 5 and Equation 6, SpatialScaleFactorHorY and SpatialScaleFactorVerY indicate a horizontal scale ratio and a vertical scale ratio; [0228], [0230], [0232]). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Hannuksela's resampling process, in which the source picture for inter-layer prediction is upsampled and/or padded to obtain an ILR picture that enables region-of-interest (ROI) scalability, with Alshina's scale ratio determiner, which is configured to determine a scale ratio indicating a difference between a size of a reference area and a size of an expanded reference area according to the size of the reference area determined from the reference layer size information and the reference layer offset information and the size of the expanded reference area determined from the current layer size information and the current layer offset information, in order to provide a system which improves the resolution of pictures of the reference layer and the current layer. The apparatus reduces the size of the minimum coding unit in a reference area offset, so that the reference area can be determined in an efficient manner.
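For illustration only, and not as part of any cited disclosure, the fixed-point arithmetic common to Equations 5 and 6 can be sketched as follows; the function name and the sample dimensions in the comments are hypothetical:

```python
def spatial_scale_factor(ref_size: int, scaled_size: int) -> int:
    """Q16 fixed-point ratio of reference size to expanded (scaled) size,
    rounded to nearest, mirroring the form of Equations 5 and 6:
    ((ref << 16) + (scaled >> 1)) / scaled."""
    return ((ref_size << 16) + (scaled_size >> 1)) // scaled_size

# A reference area 960 samples wide expanded to 1920 samples yields a
# scale ratio of 0.5 expressed in Q16 fixed point.
```

Dividing the result by 65536 recovers the fractional ratio; the `>> 1` term adds half the denominator so that the integer division rounds to nearest rather than truncating.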
Hannuksela in view of Alshina does not explicitly disclose wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size. However, Song discloses wherein the post image processing is performed after performing in-loop filtering on the reconstructed image (Para[0135] & FIG. 16 teaches boundaries of blocks to be deblocking-filtered & Para[0278] teaches the target macroblock reconstructed by the adder 4640 is deblocking-filtered by the filter 4650, accumulated in units of pictures, and then outputted as a reconstructed video); wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other (Para[0069]-[0072] & Figs. 2-3 teaches a macroblock having N×N pixels (N: an integer greater than 16) will be referred to as an extended macroblock (EMB); for example, the extended macroblock may include square pixel blocks of sizes such as 64×64 and 32×32; macroblocks described below may include extended macroblocks and general macroblocks of 16×16 pixel blocks; when a video compression is performed by using extended macroblocks having N×N pixels (N: an integer greater than 16), if an input video is not a multiple of 16 pixels, the video compression may be performed after the input video is padded to be a multiple of 16 pixels); determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other (Para[0074] & Figs. 4-5 teaches that if a macroblock having N×N pixels (N: an integer greater than or equal to 16) is used to encode a high-resolution video, an extended macroblock may be divided into pixel blocks prior to encoding, and each of the pixel blocks may be divided into subblocks prior to encoding; in addition, as illustrated in FIG. 5, if the length of one side of the extended macroblock or the pixel block is larger than 16 pixels, the division into pixel blocks having rectangular shapes such as 32×64, 64×32, 16×32 or 32×16 is omitted, and the extended macroblock is divided into square pixel blocks and then an encoding may be performed in units of 16×16 pixel blocks; FIGS. 4 and 5 illustrate each subblock with a minimum block size of 4×4 with respect to an extended macroblock; for example, as illustrated in FIG. 4, if an extended macroblock is a 64×64 pixel block, subblocks of a 64×64 pixel block, 64×32 pixel block, 32×64 pixel block and a 32×32 pixel block may belong to a macroblock layer 0, and subblocks of a 32×32 pixel block, a 32×16 pixel block, 16×32 pixel block and 16×16 pixel block may belong to a macroblock layer 1; in addition, as illustrated in FIG. 5, with respect to subblocks larger than a 16×16 pixel block, the division into rectangular subblocks such as a 64×32 pixel block, 32×64 pixel block, a 32×16 pixel block and a 16×32 pixel block may be omitted; in this case, subblocks of a 64×64 pixel block and a 32×32 pixel block belong to the macroblock layer 0, and a 32×32 pixel block and a 16×16 pixel block as subblocks belong to the macroblock layer 1); and performing the padding for the each partitioning unit based on the same size (Para[0257] teaches if the partition type values illustrated in FIG. 18 are used to divide an extended macroblock into 16×16 pixel blocks (that is, extended_mb_flag=0), the partition type is encoded/decoded by using the above-described method; in this case, the value of the lowermost node may be the partition type value of the 16×16 pixel block in the extended macroblock, and the maximum value of the values of the lower nodes may be used as the representative value of the upper node; for example, as illustrated in FIG. 40, if an extended macroblock of 32×32 pixel blocks belongs to an image padded to 16 and is divided into 16×16 pixel blocks, since the representative value of the uppermost node is 0, the uppermost node is encoded by binary bits ‘000’ representing a difference value ‘3’ between the representative value ‘0’ and the maximum value ‘3’ of the partition type; as another example, as illustrated in FIG. 41, if an extended macroblock of 32×32 pixel blocks belongs to an image padded to 16, the 16×16 pixel blocks are divided into 8×8 or less pixel blocks and then encoded, and one 16×16 pixel block is encoded to a 16×16 pixel block, since the representative value of the uppermost node is 3, a binary bit ‘1’ representing a difference value between the representative value ‘3’ of the uppermost node and the maximum value ‘3’ of the partition type is encoded; Para[0271] teaches in addition, if an extended macroblock being a 32×32 pixel block is an intra mode, a block belonging to an image in the extended macroblock padded to a multiple of 16 pixels is determined and then a partition type of the block is entropy-decoded; in the case of FIG. 3, a block belonging to a padded image in an extended macroblock being a 32×32 pixel block corresponds only to the first 16×16 pixel block, and a partition type of the first 16×16 pixel block is entropy-decoded).
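As a minimal sketch, assuming only what Song's para[0069]-[0072] states (picture dimensions are padded up to a multiple of the 16-pixel unit before compression); the function name and example sizes are illustrative, not taken from the reference:

```python
def pad_dimensions(width: int, height: int, unit: int = 16) -> tuple:
    """Round picture dimensions up to the next multiple of the
    partitioning unit so every partitioning unit has the same size."""
    pad_right = (-width) % unit    # columns appended at the right edge
    pad_bottom = (-height) % unit  # rows appended at the bottom edge
    return (width + pad_right, height + pad_bottom)

# A 1920x1080 input: the width is already a multiple of 16, while the
# height is padded from 1080 to 1088 before extended-macroblock coding.
```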
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Hannuksela in view of Alshina, in which the resampling process upsamples and/or pads the source picture to obtain an ILR picture enabling region-of-interest (ROI) scalability and a scale ratio determiner determines a scale ratio indicating a difference between a size of a reference area and a size of an expanded reference area, with Song's method of improving video compression efficiency and video reconstruction efficiency by extending a macroblock to various sizes and dividing an extended macroblock into subblocks of various sizes and shapes, in order to provide a system in which encoding and decoding the image according to the size of the subblocks improves compression efficiency and reconstruction efficiency.
Regarding claim 3, Alshina discloses the method of claim 1, wherein the resizing is either increasing or decreasing the width or height of the reconstructed image (Para[0122]-[0128] & Figs. 2A-2B teaches the scale ratio includes a horizontal scale ratio indicating a ratio of a width of the reference area to a width of the expanded reference area, and a vertical scale ratio indicating a ratio of a height of the reference area to a height of the expanded reference area & Para[0161]-[0162], [0226] & Fig. 3). Motivation to combine as indicated in claim 1.
Regarding claim 4, Hannuksela discloses a method of encoding an image with an encoding apparatus (Para[0159] & Fig. 4), comprising: generating a prediction block and a residual block for the image (Para[0159] & Fig. 4 teaches In: Image to be encoded; P′n: Predicted representation of an image block; Dn: Prediction error signal; D′n: Reconstructed prediction error signal); obtaining syntax elements for the prediction block and the residual block (para[0365] teaches SEs are typically defined for all the prediction methods (e.g. CU/PU/TU partition, prediction type, intra prediction mode, motion vectors, etc.) and prediction error (residual) coding information (e.g. residual skip/split, transform skip/split, coefficient_last_x, coefficient_last_y, significant_coefficient, etc.)); encoding the syntax elements into a bitstream (Para[0220]-[0221] teaches a syntax structure may be defined as zero or more syntax elements present together in the bitstream; para[0365] teaches context-based Adaptive Binary Arithmetic Coding (CABAC), a type of entropy coder, is a lossless compression tool to code syntax elements (SEs); SEs are the information that describe how a video has been encoded and how it should be decoded); and encoding post image processing information into the bitstream (para[0266], [0376] & Fig. 8 teaches extending the reference picture to be larger (in width and/or height) compared to the coded picture; para[0318] teaches when both scale factors are less than 1, a pre-defined downsampling process may be inferred, and when both scale factors are greater than 1, a pre-defined upsampling process may be inferred; Para[0330]-[0331] teaches the reconstructed/decoded base-layer picture may be upsampled prior to its insertion into the reference picture lists for an enhancement-layer picture; para[0362] & Fig. 10 teaches a scaled/upsampled base layer 1010; Para[0401] & Fig. 13 teaches upsampling at least a part of the 360-degree panoramic source picture; Para[0421] teaches the decoder decodes from the bitstream whether sample locations outside a picture boundary and/or parameters associated to locations outside a picture boundary), wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block (para[0159] & Fig. 4 teaches R′n: Final reconstructed image; Para[0422] & Fig. 14 teaches a method comprising coding or decoding samples of a border region of a 360-degree panoramic picture, said coding or decoding utilizing one or more sample values of an opposite side border region and/or one or more variable values associated with one or more blocks of the opposite side border region in the prediction and/or reconstruction of the samples of the border region), and wherein the post image processing comprises padding at least one region to the reconstructed image (para[0329] teaches in the resampling process of SHVC, the source picture for inter-layer prediction may be cropped, upsampled and/or padded to obtain an ILR picture; the relative position of the upsampled source picture for inter-layer prediction to the enhancement layer picture is indicated through so-called reference layer location offsets).
Hannuksela does not explicitly disclose wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the post image processing comprises resizing the reconstructed image, wherein information on a scale factor for the resizing is obtained from the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, and wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size. However, Alshina discloses wherein the post image processing comprises resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to resizing being determined to be performed (Fig. 2A teaches scalable video decoding apparatus 200 & Para[0122]-[0128] teaches the scale ratio determiner 220 determines a scale ratio according to encoding information obtained by the encoding information obtainer 210; the scale ratio indicates a ratio of the reference area to the expanded reference area & Para[0144]-[0148] & Fig. 2B teaches the scale ratio determiner performing operations 22-24), the scale factor directly indicating a scale ratio for the resizing (Para[0226] teaches the scalable video decoding apparatus 200 may determine a horizontal scale ratio and a vertical scale ratio by using the height and width of the reference area and the height and width of the expanded reference area. The horizontal scale ratio and the vertical scale ratio may be determined according to Equation 5 and Equation 6.
SpatialScaleFactorHorY=((RefLayerRegionWidthInSamplesY<<16)+(ScaledRefRegionWidthInSamplesY>>1))/ScaledRefRegionWidthInSamplesY [Equation 5]
SpatialScaleFactorVerY=((RefLayerRegionHeightInSamplesY<<16)+(ScaledRefRegionHeightInSamplesY>>1))/ScaledRefRegionHeightInSamplesY [Equation 6]
[0227] In Equation 5 and Equation 6, SpatialScaleFactorHorY and SpatialScaleFactorVerY indicate a horizontal scale ratio and a vertical scale ratio; [0228], [0230], [0232]). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Hannuksela's resampling process, in which the source picture for inter-layer prediction is upsampled and/or padded to obtain an ILR picture that enables region-of-interest (ROI) scalability, with Alshina's scale ratio determiner, which is configured to determine a scale ratio indicating a difference between a size of a reference area and a size of an expanded reference area according to the size of the reference area determined from the reference layer size information and the reference layer offset information and the size of the expanded reference area determined from the current layer size information and the current layer offset information, in order to provide a system which improves the resolution of pictures of the reference layer and the current layer. The apparatus reduces the size of the minimum coding unit in a reference area offset, so that the reference area can be determined in an efficient manner.
Hannuksela in view of Alshina does not explicitly disclose wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size. However Song discloses wherein the post image processing is performed after performing in-loop filtering on the reconstructed image (Para[0135] & FIG. 16 teaches boundaries of blocks to be deblocking-filtered & Para[0278] the target macroblock reconstructed by the adder 4640 is deblocking-filtered by the filter 4650, accumulated in units of pictures, and then outputted as a reconstructed video); wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; (Para[0069]-[0072]& Figs. 2-3 teaches a macroblock having N×N pixels (N: an integer greater than 16)will be referred to as an extended macroblock (EMB). For example, the extended macroblock may include square pixel blocks of sizes such as 64×64and 32×32. It should be noted that macroblocks described below may include extended macroblocks and general macroblocks of 16×16 pixel blocks. When a video compression is performed by using extended macroblocks having N×N pixels (N: an integer greater than 16), if an input video is not a multiple of 16pixels, the video compression may be performed after the input video is padded to be a multiple of 16 pixels) determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other (Para[0074] & Figs. 
4-5 teaches that is, if a macroblock having N×N pixels (N: an integer greater than or equal to 16) is used to encode a high-resolution video, an extended macroblock may be divided into pixel blocks prior to encoding, and each of the pixel blocks may be divided into subblocks prior to encoding. In addition, as illustrated in FIG. 5, if the length of one side of the extended macroblock or the pixel block is larger than 16 pixels, the division into pixel blocks having rectangular shapes such as 32×64, 64×32, 16×32 or 32×16 is omitted, and the extended macroblock is divided into square pixel blocks and then an encoding may be performed in units of 16×16 pixel blocks. FIGS. 4 and 5 illustrate each subblock with a minimum block size of 4×4 with respect to an extended macroblock. For example, as illustrated in FIG. 4, if an extended macroblock is a 64×64 pixel block, subblocks of a 64×64 pixel block, a 64×32 pixel block, a 32×64 pixel block and a 32×32 pixel block may belong to a macroblock layer 0, and subblocks of a 32×32 pixel block, a 32×16 pixel block, a 16×32 pixel block and a 16×16 pixel block may belong to a macroblock layer 1. In addition, as illustrated in FIG. 5, with respect to subblocks larger than a 16×16 pixel block, the division into rectangular subblocks such as a 64×32 pixel block, a 32×64 pixel block, a 32×16 pixel block and a 16×32 pixel block may be omitted. In this case, subblocks of a 64×64 pixel block and a 32×32 pixel block belong to the macroblock layer 0, and a 32×32 pixel block and a 16×16 pixel block as subblocks belong to the macroblock layer 1) and performing the padding for the each partitioning unit based on the same size (Para[0257] teaches if the partition type values illustrated in FIG. 18 are used to divide an extended macroblock into 16×16 pixel blocks (that is, extended_mb_flag=0), the partition type is encoded/decoded by using the above-described method. 
In this case, the value of the lowermost node may be the partition type value of the 16×16 pixel block in the extended macroblock, and the maximum value of the values of the lower nodes may be used as the representative value of the upper node. For example, as illustrated in FIG. 40, if an extended macroblock of 32×32 pixel blocks belongs to an image padded to a multiple of 16 and is divided into 16×16 pixel blocks, since the representative value of the uppermost node is 0, the uppermost node is encoded by binary bits ‘000’ representing a difference value ‘3’ between the representative value ‘0’ and the maximum value ‘3’ of the partition type. As another example, as illustrated in FIG. 41, if an extended macroblock of 32×32 pixel blocks belongs to an image padded to a multiple of 16; the 16×16 pixel blocks are divided into 8×8 or less pixel blocks and then encoded; and one 16×16 pixel block is encoded to a 16×16 pixel block, since the representative value of the uppermost node is 3, a binary bit ‘1’ representing a difference value between the representative value ‘3’ of the uppermost node and the maximum value ‘3’ of the partition type is encoded. Para[0271] teaches that, in addition, if an extended macroblock being a 32×32 pixel block is an intra mode, a block belonging to an image in the extended macroblock padded to a multiple of 16 pixels is determined and then a partition type of the block is entropy-decoded. In the case of FIG. 3, a block belonging to a padded image in an extended macroblock being a 32×32 pixel block corresponds only to the first 16×16 pixel block, and a partition type of the first 16×16 pixel block is entropy-decoded). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Hannuksela in view of Alshina, in which the source picture for inter-layer prediction is upsampled and/or padded in a resampling process to obtain an ILR picture that enables region-of-interest (ROI) scalability and a scale ratio determiner is configured to determine a scale ratio indicating a difference between a size of a reference area and a size of an expanded reference area, with the method of Song, which improves video compression efficiency and video reconstruction efficiency by extending a macroblock to various sizes and dividing the extended macroblock into subblocks of various sizes and shapes, in order to provide a system in which the image is encoded and decoded according to the size of the subblocks so as to improve compression efficiency and reconstruction efficiency.
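For illustration only (not part of the cited references or the claims), the padding operation described in Song, para [0072] — padding an input picture so that its dimensions become a multiple of 16 pixels before compression — can be sketched as follows; the function name and signature are hypothetical:

```python
# Illustrative sketch of padding picture dimensions up to the next
# multiple of 16 pixels, as described in Song para [0072].

def padded_size(width: int, height: int, unit: int = 16) -> tuple:
    """Round width and height up to the nearest multiple of `unit`."""
    pad_w = (unit - width % unit) % unit  # samples to add on the right
    pad_h = (unit - height % unit) % unit  # samples to add on the bottom
    return (width + pad_w, height + pad_h)

# Example: a 1920x1080 picture; 1080 is not a multiple of 16,
# so the height is padded to 1088 before compression.
print(padded_size(1920, 1080))  # (1920, 1088)
```

Only the padded rows and columns lie outside the original image, which is why Song's decoder (para [0271]) must first determine which blocks of an extended macroblock belong to the padded image.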
Regarding claim 6, Hannuksela discloses a method of transmitting a bitstream, comprising (Para[0173] teaches bitstream structures; para[0280] and para[0379] teach that the encoder indicates the method in the bitstream): generating a prediction block and a residual block for the image (Para[0159] & Fig. 4 teaches where In: image to be encoded; P′n: predicted representation of an image block; Dn: prediction error signal; D′n: reconstructed prediction error signal); obtaining syntax elements for the prediction block and the residual block (para[0221] teaches a syntax element may be defined as an element of data represented in the bitstream; para[0365] teaches SEs are typically defined for all the prediction methods (e.g. CU/PU/TU partition, prediction type, intra prediction mode, motion vectors, etc.) and prediction error (residual) coding information (e.g. residual skip/split, transform skip/split, coefficient_last_x, coefficient_last_y, significant_coefficient, etc.)); encoding the syntax elements into a bitstream (Para[0220]-[0221] teaches a syntax structure may be defined as zero or more syntax elements present together in the bitstream; para[0365] teaches context-based adaptive binary arithmetic coding (CABAC), a type of entropy coder, is a lossless compression tool to code syntax elements (SEs); SEs are the information that describe how a video has been encoded and how it should be decoded); encoding post image processing information into the bitstream (para[0266], [0376] & Fig. 8 teach extending the reference picture to be larger (in width and/or height) compared to the coded picture; para[0318] teaches when both scale factors are less than 1, a pre-defined downsampling process may be inferred, and when both scale factors are greater than 1, a pre-defined upsampling process may be inferred. 
Para[0330]-[0331] teaches a reconstructed/decoded base-layer picture may be upsampled prior to its insertion into the reference picture lists for an enhancement-layer picture. para[0362] & Fig. 10 teaches scaled/upsampled base layer 1010. Para[0401] & Fig. 13 teaches upsampling at least a part of the 360-degree panoramic source picture); and transmitting the bitstream (para[0201] teaches signaling included by the encoder in the bitstream), wherein the post image processing information is used for performing a post image processing on a reconstructed image, the reconstructed image is obtained based on the prediction block and the residual block (para[0159] & Fig. 4 teaches R′n: final reconstructed image; Para[0422]-[0429] & Fig. 14 teaches a method comprising coding or decoding samples of a border region of a 360-degree panoramic picture, said coding or decoding utilizing one or more sample values of an opposite side border region and/or one or more variable values associated with one or more blocks of the opposite side border region in the prediction and/or reconstruction of the samples of the border region), wherein the post image processing comprises padding at least one region to the reconstructed image (para[0329] teaches in the resampling process of SHVC, the source picture for inter-layer prediction may be cropped, upsampled and/or padded to obtain an ILR picture; the relative position of the upsampled source picture for inter-layer prediction to the enhancement layer picture is indicated through so-called reference layer location offsets).
Hannuksela does not explicitly disclose wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the post image processing comprises resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to resizing being determined to be performed, the scale factor directly indicating a scale ratio for the resizing, and wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size. However, Alshina discloses wherein the post image processing comprises resizing the reconstructed image, and wherein information on a scale factor for the resizing is obtained from the post image processing information in response to resizing being determined to be performed (Fig. 2A teaches a scalable video decoding apparatus 200; Para[0122]-[0128] teaches the scale ratio determiner 220 determines a scale ratio according to encoding information obtained by the encoding information obtainer 210; the scale ratio indicates a ratio of the reference area to the expanded reference area; Para[0144]-[0148] & Fig. 2B teaches the scale ratio determiner performing operations 22-24), the scale factor directly indicating a scale ratio for the resizing (Para[0226] teaches the scalable video decoding apparatus 200 may determine a horizontal scale ratio and a vertical scale ratio by using the height and width of the reference area and the height and width of the expanded reference area. The horizontal scale ratio and the vertical scale ratio may be determined according to Equation 5 and Equation 6.
SpatialScaleFactorHorY=((RefLayerRegionWidthInSamplesY<<16)+(ScaledRefRegionWidthInSamplesY>>1))/ScaledRefRegionWidthInSamplesY [Equation 5]
SpatialScaleFactorVerY=((RefLayerRegionHeightInSamplesY<<16)+(ScaledRefRegionHeightInSamplesY>>1))/ScaledRefRegionHeightInSamplesY [Equation 6]
[0227] In Equation 5 and Equation 6, SpatialScaleFactorHorY and SpatialScaleFactorVerY indicate a horizontal scale ratio and a vertical scale ratio, [0228], [0230], [0232]). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Hannuksela, in which the source picture for inter-layer prediction is upsampled and/or padded in a resampling process to obtain an ILR picture that enables region-of-interest (ROI) scalability, with the scale ratio determiner of Alshina, configured to determine a scale ratio indicating a difference between a size of a reference area and a size of an expanded reference area, where the size of the reference area is determined from the reference layer size information and the reference layer offset information and the size of the expanded reference area is determined from the current layer size information and the current layer offset information, in order to provide a system which improves the resolution of pictures of the reference layer and the current layer. The apparatus reduces the size of the minimum coding unit in a reference area offset, so that the reference area can be determined in an efficient manner.
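For illustration only (not part of the cited references or the claims), Equation 5 and Equation 6 of Alshina compute the scale ratio in 16.16 fixed-point arithmetic, adding half of the denominator before the integer division for rounding; the function name below is hypothetical:

```python
# Illustrative sketch of the fixed-point computation in Alshina's
# Equation 5 and Equation 6: the scale ratio is expressed in 16.16
# fixed point, with (scaled_samples >> 1) added as a rounding offset
# before the integer division.

def spatial_scale_factor(ref_samples: int, scaled_samples: int) -> int:
    """16.16 fixed-point ratio of reference-area size to expanded-area size."""
    return ((ref_samples << 16) + (scaled_samples >> 1)) // scaled_samples

# Example: a 960-sample-wide reference area expanded to 1920 samples
# yields a horizontal scale factor of 0.5 in 16.16 fixed point.
factor = spatial_scale_factor(960, 1920)
print(factor, factor / (1 << 16))  # 32768 0.5
```

The same formula applies horizontally (widths, Equation 5) and vertically (heights, Equation 6); a factor below 1 << 16 indicates the reference area is smaller than the expanded reference area.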
Hannuksela in view of Alshina does not explicitly disclose wherein the post image processing is performed after performing in-loop filtering on the reconstructed image, wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other; determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other and performing the padding for the each partitioning unit based on the same size. However, Song discloses wherein the post image processing is performed after performing in-loop filtering on the reconstructed image (Para[0135] & FIG. 16 teaches boundaries of blocks to be deblocking-filtered; Para[0278] teaches the target macroblock reconstructed by the adder 4640 is deblocking-filtered by the filter 4650, accumulated in units of pictures, and then outputted as a reconstructed video); wherein the padding is performed by: determining whether partitioning units included in the reconstructed image have a same size with each other (Para[0069]-[0072] & Figs. 2-3 teaches a macroblock having N×N pixels (N: an integer greater than 16) will be referred to as an extended macroblock (EMB); for example, the extended macroblock may include square pixel blocks of sizes such as 64×64 and 32×32; it should be noted that macroblocks described below may include extended macroblocks and general macroblocks of 16×16 pixel blocks; when a video compression is performed by using extended macroblocks having N×N pixels (N: an integer greater than 16), if an input video is not a multiple of 16 pixels, the video compression may be performed after the input video is padded to be a multiple of 16 pixels); determining the same size based on the determination that the partitioning units included in the reconstructed image have the same size with each other (Para[0074] & Figs. 
4-5 teaches that is, if a macroblock having N×N pixels (N: an integer greater than or equal to 16) is used to encode a high-resolution video, an extended macroblock may be divided into pixel blocks prior to encoding, and each of the pixel blocks may be divided into subblocks prior to encoding. In addition, as illustrated in FIG. 5, if the length of one side of the extended macroblock or the pixel block is larger than 16 pixels, the division into pixel blocks having rectangular shapes such as 32×64, 64×32, 16×32 or 32×16 is omitted, and the extended macroblock is divided into square pixel blocks and then an encoding may be performed in units of 16×16 pixel blocks. FIGS. 4 and 5 illustrate each subblock with a minimum block size of 4×4 with respect to an extended macroblock. For example, as illustrated in FIG. 4, if an extended macroblock is a 64×64 pixel block, subblocks of a 64×64 pixel block, a 64×32 pixel block, a 32×64 pixel block and a 32×32 pixel block may belong to a macroblock layer 0, and subblocks of a 32×32 pixel block, a 32×16 pixel block, a 16×32 pixel block and a 16×16 pixel block may belong to a macroblock layer 1. In addition, as illustrated in FIG. 5, with respect to subblocks larger than a 16×16 pixel block, the division into rectangular subblocks such as a 64×32 pixel block, a 32×64 pixel block, a 32×16 pixel block and a 16×32 pixel block may be omitted. In this case, subblocks of a 64×64 pixel block and a 32×32 pixel block belong to the macroblock layer 0, and a 32×32 pixel block and a 16×16 pixel block as subblocks belong to the macroblock layer 1) and performing the padding for the each partitioning unit based on the same size (Para[0257] teaches if the partition type values illustrated in FIG. 18 are used to divide an extended macroblock into 16×16 pixel blocks (that is, extended_mb_flag=0), the partition type is encoded/decoded by using the above-described method. 
In this case, the value of the lowermost node may be the partition type value of the 16×16 pixel block in the extended macroblock, and the maximum value of the values of the lower nodes may be used as the representative value of the upper node. For example, as illustrated in FIG. 40, if an extended macroblock of 32×32 pixel blocks belongs to an image padded to a multiple of 16 and is divided into 16×16 pixel blocks, since the representative value of the uppermost node is 0, the uppermost node is encoded by binary bits ‘000’ representing a difference value ‘3’ between the representative value ‘0’ and the maximum value ‘3’ of the partition type. As another example, as illustrated in FIG. 41, if an extended macroblock of 32×32 pixel blocks belongs to an image padded to a multiple of 16; the 16×16 pixel blocks are divided into 8×8 or less pixel blocks and then encoded; and one 16×16 pixel block is encoded to a 16×16 pixel block, since the representative value of the uppermost node is 3, a binary bit ‘1’ representing a difference value between the representative value ‘3’ of the uppermost node and the maximum value ‘3’ of the partition type is encoded. Para[0271] teaches that, in addition, if an extended macroblock being a 32×32 pixel block is an intra mode, a block belonging to an image in the extended macroblock padded to a multiple of 16 pixels is determined and then a partition type of the block is entropy-decoded. In the case of FIG. 3, a block belonging to a padded image in an extended macroblock being a 32×32 pixel block corresponds only to the first 16×16 pixel block, and a partition type of the first 16×16 pixel block is entropy-decoded). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Hannuksela in view of Alshina, in which the source picture for inter-layer prediction is upsampled and/or padded in a resampling process to obtain an ILR picture that enables region-of-interest (ROI) scalability and a scale ratio determiner is configured to determine a scale ratio indicating a difference between a size of a reference area and a size of an expanded reference area, with the method of Song, which improves video compression efficiency and video reconstruction efficiency by extending a macroblock to various sizes and dividing the extended macroblock into subblocks of various sizes and shapes, in order to provide a system in which the image is encoded and decoded according to the size of the subblocks so as to improve compression efficiency and reconstruction efficiency.
Conclusion
11. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROWINA J CATTUNGAL whose telephone number is (571)270-5922. The examiner can normally be reached Monday-Thursday 7:30am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROWINA J CATTUNGAL/Primary Examiner, Art Unit 2425