DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Claims 1-20 are pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/10/2025 was filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 5 and 18 are objected to because of the following informalities:
In claim 5, limitation (ii) does not end with punctuation. It is recommended to add a comma (,) at the end of the limitation. A similar issue exists in claim 18.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,200,192 B2. Although the claims at issue are not identical, they are not patentably distinct from each other, as explained below.
Claim 1 of the instant application is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of Patent 12,200,192 B2. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the following table, which sets forth the basis of the double patenting rejection of claim 1 over the patent.
19016699 (Instant Application), Claim 1 | 12,200,192 B2 (Patent), Claim 1

[1]
Both: A method performed by at least one processor of a video decoder, the method comprising:

[2]
Both: receiving a coded video bitstream including a current picture, a first reference picture, and a second reference picture, the current picture including a current block divided into a plurality of sub-blocks;

[3]
Both: determining that the current picture is predicted using a bi-prediction or compound prediction mode based on the first reference picture and the second reference picture;

[4]
Both: obtaining a plurality of predefined weighting patterns, each weighting pattern being signaled as an index value;

[5]
Instant Application: selecting a weighting pattern based on a predetermined condition;
Patent: selecting a weighting pattern based on a predetermined condition, wherein the predetermined condition indicates that the weighting pattern is selected from at least one of: (i) a first weighting pattern selected from a plurality of weighting patterns based on a selection index included in the bitstream, (ii) a second weighting pattern selected from a plurality of weighting patterns that minimizes a cost measurement, and (iii) a third weighting pattern selected based on a weighting pattern of a neighboring sub-block that neighbors the at least one sub-block;

[6]
Instant Application: deriving a first weight to be applied to a first sub-block in the first reference picture and a second weight to be applied to a second sub-block in the second reference picture based on the index value corresponding to the selected weighting pattern;
Patent: deriving, based on the index value corresponding to the selected weighting pattern, a first weight to be applied to a first sub-block in the first reference picture and a second weight to be applied to a second sub-block in the second reference picture;

[7]
Both: assigning the first weight to the first sub-block and the second weight to the second sub-block based on the selected weighting pattern; and

[8]
Instant Application: decoding the current block by a weighted bi-prediction based at least on the first sub-block weighted by the first weight and the second sub-block weighted by the second weight.
Patent: decoding the current block by a weighted bi-prediction based at least on the first sub-block weighted by the first weight and the second sub-block weighted by the second weight,

[9]
Patent only: wherein the bitstream includes a first flag that indicates whether the selected weighting pattern applies equal weighting to the first sub-block and the second sub-block, and

[10]
Patent only: wherein based on a determination that the first flag indicates an unequal weighting, a second flag indicates selection of one of the first weighting pattern and the second weighting pattern.
The subject matter claimed in the instant application is fully disclosed in the patent and is covered by the patent since the patent and the instant application are claiming common subject matter, as follows:
The equivalencies in claim limitations of the instant application and the patent are highlighted in bold italics text. It is to be noted that all the limitations of the instant application are directly recited in the patent. The instant application claim 1 is a broader version of the patent claim 1 since the patent claim has additional limitations therein. Therefore, the instant application claim 1 as a whole is not patentably distinct from the patent claim 1.
Claims 2-13 of the instant application are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over the combination of claims 1-13 of Patent 12,200,192 B2. Although the claims at issue are not identical, they are not patentably distinct from each other.
Claim 14 of the instant application is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 14 of Patent 12,200,192 B2. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the following table, which sets forth the basis of the double patenting rejection of claim 14 over the patent.
19016699 (Instant Application), Claim 14 | 12,200,192 B2 (Patent), Claim 14

[1]
Both: A video decoder comprising:

[2]
Both: at least one memory configured to store computer program code; and

[3]
Both: at least one processor configured to access the computer program code and operate as instructed by the computer program code, the computer program code including:

[4]
Both: receiving code configured to cause the at least one processor to receive a coded video bitstream including a current picture, a first reference picture, and a second reference picture, the current picture including a current block divided into a plurality of sub-blocks,

[5]
Both: determining code configured to cause the at least one processor to determine that the current picture is predicted using a bi-prediction or compound prediction mode based on the first reference picture and the second reference picture,

[6]
Both: obtaining code configured to cause the at least one processor to obtain a plurality of predefined weighting patterns, each weighting pattern being signaled as an index value,

[7]
Instant Application: selecting code configured to cause the at least one processor to select a weighting pattern based on a predetermined condition,
Patent: selecting code configured to cause the at least one processor to select a weighting pattern based on a predetermined condition, wherein the predetermined condition indicates that the weighting pattern is selected from at least one of: (i) a first weighting pattern selected from a plurality of weighting patterns based on a selection index included in the bitstream, (ii) a second weighting pattern selected from a plurality of weighting patterns that minimizes a cost measurement, and (iii) a third weighting pattern selected based on a weighting pattern of a neighboring sub-block that neighbors the at least one sub-block,

[8]
Instant Application: deriving code configured to cause the at least one processor to derive a first weight to be applied to a first sub-block in the first reference picture and a second weight to be applied to a second sub-block in the second reference picture based on the index value corresponding to the selected weighting pattern,
Patent: deriving code configured to cause the at least one processor to derive, based on the index value corresponding to the selected weighting pattern, a first weight to be applied to a first sub-block in the first reference picture and a second weight to be applied to a second sub-block in the second reference picture,

[9]
Both: assigning code configured to cause the at least one processor to assign the first weight to the first sub-block and the second weight to the second sub-block based on the selected weighting pattern, and

[10]
Instant Application: decoding code configured to cause the at least one processor to decode the current block by a weighted bi-prediction based at least on the first sub-block weighted by the first weight and the second sub-block weighted by the second weight.
Patent: decoding code configured to cause the at least one processor to decode the current block by a weighted bi-prediction based at least on the first sub-block weighted by the first weight and the second sub-block weighted by the second weight,

[11]
Patent only: wherein the bitstream includes a first flag that indicates whether the selected weighting pattern applies equal weighting to the first sub-block and the second sub-block, and

[12]
Patent only: wherein based on a determination that the first flag indicates an unequal weighting, a second flag indicates selection of one of the first weighting pattern and the second weighting pattern.
The subject matter claimed in the instant application is fully disclosed in the patent and is covered by the patent since the patent and the instant application are claiming common subject matter, as follows:
The equivalencies in claim limitations of the instant application and the patent are highlighted in bold italics text. It is to be noted that all the limitations of the instant application are directly recited in the patent. The instant application claim 14 is a broader version of the patent claim 14 since the patent claim has additional limitations therein. Therefore, the instant application claim 14 as a whole is not patentably distinct from the patent claim 14.
Claims 15-19 of the instant application are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over the combination of claims 14-19 of Patent 12,200,192 B2. Although the claims at issue are not identical, they are not patentably distinct from each other.
Claim 20 of the instant application is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 20 of Patent 12,200,192 B2. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the following table, which sets forth the basis of the double patenting rejection of claim 20 over the patent.
19016699 (Instant Application), Claim 20 | 12,200,192 B2 (Patent), Claim 20

[1]
Both: A non-transitory computer readable medium having instructions stored therein, which when executed by a processor in a video decoder cause the processor to execute a method comprising:

[2]
Both: receiving a coded video bitstream including a current picture, a first reference picture, and a second reference picture, the current picture including a current block divided into a plurality of sub-blocks;

[3]
Both: determining that the current picture is predicted using a bi-prediction or compound prediction mode based on the first reference picture and the second reference picture;

[4]
Both: obtaining a plurality of predefined weighting patterns, each weighting pattern being signaled as an index value;

[5]
Instant Application: selecting a weighting pattern based on a predetermined condition;
Patent: selecting a weighting pattern based on a predetermined condition, wherein the predetermined condition indicates that the weighting pattern is selected from at least one of: (i) a first weighting pattern selected from a plurality of weighting patterns based on a selection index included in the bitstream, (ii) a second weighting pattern selected from a plurality of weighting patterns that minimizes a cost measurement, and (iii) a third weighting pattern selected based on a weighting pattern of a neighboring sub-block that neighbors the at least one sub-block;

[6]
Instant Application: deriving a first weight to be applied to a first sub-block in the first reference picture and a second weight to be applied to a second sub-block in the second reference picture based on the index value corresponding to the selected weighting pattern;
Patent: deriving, based on the index value corresponding to the selected weighting pattern, a first weight to be applied to a first sub-block in the first reference picture and a second weight to be applied to a second sub-block in the second reference picture;

[7]
Both: assigning the first weight to the first sub-block and the second weight to the second sub-block based on the selected weighting pattern; and

[8]
Instant Application: decoding the current block by a weighted bi-prediction based at least on the first sub-block weighted by the first weight and the second sub-block weighted by the second weight.
Patent: decoding the current block by a weighted bi-prediction based at least on the first sub-block weighted by the first weight and the second sub-block weighted by the second weight,

[9]
Patent only: wherein the bitstream includes a first flag that indicates whether the selected weighting pattern applies equal weighting to the first sub-block and the second sub-block, and

[10]
Patent only: wherein based on a determination that the first flag indicates an unequal weighting, a second flag indicates selection of one of the first weighting pattern and the second weighting pattern.
The subject matter claimed in the instant application is fully disclosed in the patent and is covered by the patent since the patent and the instant application are claiming common subject matter, as follows:
The equivalencies in claim limitations of the instant application and the patent are highlighted in bold italics text. It is to be noted that all the limitations of the instant application are directly recited in the patent. The instant application claim 20 is a broader version of the patent claim 20 since the patent claim has additional limitations therein. Therefore, the instant application claim 20 as a whole is not patentably distinct from the patent claim 20.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US PGPub 2021/0195200 A1) in view of Chen et al. (US PGPub 2022/0312001 A1), hereinafter Chen_2.
Regarding claim 1, Chen et al. teach a method performed by at least one processor of a video decoder (Fig. 7; [0103], L7-10), the method comprising:
receiving a coded video bitstream including a current picture, a first reference picture, and a second reference picture ([0085]; It teaches a bi-prediction technique, with two reference pictures, such as a first reference picture and a second reference picture), the current picture including a current block divided into a plurality of sub-blocks (Figs. 8A-B; Fig. 12, reference numeral S1210; [0207]; It teaches that a current block is partitioned into two sub-blocks);
determining that the current picture is predicted using a bi-prediction or compound prediction mode based on the first reference picture and the second reference picture ([0085]; [0111]; It teaches that when Mv1 and Mv2 are from different reference picture lists (e.g., one from L0 and the other from L1), Mv1 and Mv2 are simply combined to form the bi-prediction motion vector. In [0089], it teaches that when the processing block is to be coded in inter mode or bi-prediction mode, the video encoder may use an inter prediction or bi-prediction technique);
obtaining a plurality of predefined weighting patterns, each weighting pattern being signaled as an index value (Table 4-5; [0110]; [0023]; It teaches a weighting index to represent a weight value or weighting factor, wherein in [0189], it teaches a range of weight factor values with minimum and maximum weight factor values. Table 4-5 shows a plurality of weighting values corresponding to weighting indices);
selecting a weighting pattern based on a predetermined condition ([0190]-[0191]; Eqn. 24 shows the weighting factor selection based on conditions recited in Eqns. 25-26);
deriving a first weight to be applied to a first sub-block in the first reference picture and a second weight to be applied to a second sub-block in the second reference picture based on the index value corresponding to the selected weighting pattern ([0117]-[0118]; It teaches a blending process with two weights W0 and W1 associated with two partitions P0 and P1 respectively acquired from the weighting indices of the GEO according to Eqn. 2);
assigning the first weight to the first sub-block and the second weight to the second sub-block based on the selected weighting pattern ([0117]-[0118]; The first weight W0 is assigned to P0 and the second weight W1 is assigned to P1); and
decoding the current block by a weighted bi-prediction based at least on the first sub-block weighted by the first weight and the second sub-block weighted by the second weight ([0117]-[0118]; Eqn. 2 shows the weighted bi-prediction value PB by the blending process of the two prediction blocks P0 and P1 with their corresponding weighting factor values W0 and W1 respectively).
Although Chen et al. teach a plurality of weighting factors, as described in [0110], and weights associated with each prediction block, they do not explicitly teach, as claimed, selecting weights based on a condition and assigning a first selected weight to the first sub-block and a second selected weight to the second sub-block.
However, Chen_2, in the same field of endeavor (Abstract), teach a decoding method where it selects weights (Abstract; it teaches that the current block is predicted using a weighted sum of the reference blocks wherein the weights may be selected from among a plurality of candidate weights) based on a condition ([0051]; It teaches the conditions of selecting weights, where it states that video encoder may either switch adaptively between several predefined assignment manners or assign each weight to a unique leaf node dynamically on a PU-by-PU basis based on the usage of weight values from previously coded blocks) and assigning first selected weight to the first sub-block and second selected weight to second sub-block ([0004]; It teaches that the current block is predicted as a weighted sum of a first reference block in the first reference picture and a second reference block in the second reference picture, wherein the first reference block is weighted by the first weight and the second block is weighted by the second weight).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chen et al.'s weighting index calculation for geometric partition mode blocks with Chen_2's selection of weighting factors for each prediction block of the current block, because the prediction methods described therein improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
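For context only, the blending operation underlying the mapped limitation (e.g., Eqn. 2 of Chen et al.) has the general form of a normalized weighted bi-prediction; the exact integer arithmetic, precision, and rounding used by the reference are as set forth in the reference itself, and the following is merely a generic sketch:

```latex
P_B[x, y] = w_0 \cdot P_0[x, y] + w_1 \cdot P_1[x, y], \qquad w_0 + w_1 = 1
```

Here $P_0$ and $P_1$ denote the two prediction (sub-)blocks and $w_0$, $w_1$ the weights derived from the signaled weighting index.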
Regarding claim 2, Chen et al. and Chen_2 teach the method according to claim 1, wherein the predetermined condition specifies a selection index included in the coded video bitstream, and wherein the weighting pattern is selected from a plurality of the weighting patterns based on the selection index (Chen et al.; [0016]; Table 4-5; [0110]; [0023]; It teaches a weighting index to represent a weight value or weighting factor, wherein in [0189], it teaches a range of weight factor values with minimum and maximum weight factor values. Table 4-5 shows a plurality of weighting values corresponding to weighting indices. Chen_2 also teach the same, as described in [0041], Eqn. 3, which states that frequently-used weight values may be arranged in a set (referred to hereafter as WL1), so each weight value can be indicated by an index value).
Regarding claim 3, Chen et al. and Chen_2 teach the method of claim 1, wherein the predetermined condition specifies a minimum cost measurement for selecting the weighting pattern from a plurality of weighting patterns, and wherein the cost measurement is calculated based on a template associated with the at least one sub-block, a template associated with the first sub-block, and a template associated with the second sub-block (Chen_2; [0128]-[0134], [0142]; It teaches that the weight index is used to calculate a motion estimation cost to determine which weight index gives the best estimation).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chen et al.'s weighting index calculation for geometric partition mode blocks with Chen_2's cost-based selection of weighting factors, because the prediction methods described therein improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
Regarding claim 4, Chen et al. and Chen_2 teach the method of claim 1, wherein the predetermined condition indicates a weighting pattern of a neighboring sub-block that neighbors the at least one sub-block (Chen_2; [0008]; it teaches that weights are identified using the corresponding codeword, wherein the codewords to weights may be adapted based on weights used in previously-coded blocks, which are neighboring blocks).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chen et al.'s weighting index calculation for geometric partition mode blocks with Chen_2's use of weighting factors of neighboring blocks, because the prediction methods described therein improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
Regarding claim 5, Chen et al. and Chen_2 teach the method of claim 1, wherein the predetermined condition indicates a plurality of weighting modes including:
(i) a first weighting mode in which the weighting pattern is selected from a plurality of weighting patterns based on a selection index included in the bitstream (Chen et al.; [0016]; Table 4-5; [0110]; [0023]; It teaches a weighting index to represent a weight value or weighting factor, wherein in [0189], it teaches a range of weight factor values with minimum and maximum weight factor values. Table 4-5 shows a plurality of weighting values corresponding to weighting indices. Chen_2 also teach the same, as described in [0041], Eqn. 3, which states that frequently-used weight values may be arranged in a set (referred to hereafter as WL1), so each weight value can be indicated by an index value),
(ii) a second weighting mode in which the weighting pattern is selected from a plurality of weighting patterns that minimizes a cost measurement that is calculated based on a template associated with the at least one sub-block, a template associated with the first sub-block, and a template associated with the second sub-block (Chen_2; [0128]-[0134], [0142]; It teaches that the weight index is used to calculate a motion estimation cost to determine which weight index gives the best estimation), and
(iii) a third weighting mode in which the weighting pattern is selected based on a weighting pattern of a neighboring sub-block that neighbors the at least one sub-block (Chen_2; [0008]; it teaches that weights are identified using the corresponding codeword, wherein the codewords to weights may be adapted based on weights used in previously-coded blocks, which are neighboring blocks).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chen et al.'s weighting index calculation for geometric partition mode blocks with Chen_2's cost-based selection of weighting factors, because the prediction methods described therein improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
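Purely as an illustrative sketch of the cost-based mode recited in limitation (ii) of claim 5 (all function names and sample values below are hypothetical and are not drawn from either reference), the selection can be modeled as choosing the candidate weight pair whose blended reference templates minimize a sum-of-absolute-differences cost against the template of the current sub-block:

```python
# Hypothetical sketch: select a weighting pattern (w0, w1) that minimizes a
# template-matching cost between the current sub-block's template and the
# weighted blend of the two reference sub-block templates.

def select_weighting_pattern(patterns, template_cur, template_ref0, template_ref1):
    """Return the (w0, w1) pair minimizing the sum of absolute differences
    between template_cur and the blend w0*template_ref0 + w1*template_ref1."""
    best_pattern, best_cost = None, float("inf")
    for w0, w1 in patterns:
        cost = sum(
            abs(c - (w0 * r0 + w1 * r1))
            for c, r0, r1 in zip(template_cur, template_ref0, template_ref1)
        )
        if cost < best_cost:
            best_pattern, best_cost = (w0, w1), cost
    return best_pattern

# Hypothetical candidate patterns: one equal and two unequal weightings.
patterns = [(0.5, 0.5), (0.75, 0.25), (0.25, 0.75)]
cur  = [10, 20, 30]   # template samples neighboring the current sub-block
ref0 = [12, 22, 32]   # co-located template in the first reference picture
ref1 = [4, 14, 24]    # co-located template in the second reference picture
print(select_weighting_pattern(patterns, cur, ref0, ref1))  # → (0.75, 0.25)
```

In this toy example the unequal pattern (0.75, 0.25) blends the reference templates to exactly reproduce the current template, so it yields the minimum cost; a real codec would perform the equivalent comparison on integer sample arrays with normative weight tables.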
Regarding claim 6, Chen et al. and Chen_2 teach the method of claim 5, wherein the selection of the one of the plurality of weighting modes is based on an indicator included in the bitstream (Chen_2; [0007]; it teaches that the set of weights is coded in the bitstream, allowing different weight sets to be adapted for use in different slices, pictures, or sequences).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chen et al.'s weighting index calculation for geometric partition mode blocks with Chen_2's bitstream signaling of weight sets, because the prediction methods described therein improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
Regarding claim 13, Chen et al. and Chen_2 teach the method of claim 1, further comprising:
determining a first motion vector that points from at least one sub-block in the current block to a first sub-block of a first block in the first reference picture (Chen et al.; [0085]; it teaches that a block in the current picture can be coded by a first motion vector that points to a first reference block in the first reference picture); and
determining a second motion vector that points from the at least one sub-block in the current block to a second sub-block of a second block in the second reference picture (Chen et al.; [0085]; It teaches that a block in the current picture can be coded by a second motion vector that points to a second reference block in the second reference picture).
Regarding claim 14, Chen et al. teach a video decoder (Fig. 7; [0103], L7-10) comprising:
at least one memory configured to store computer program code ([0231]; Fig. 13, reference numeral 1345); and
at least one processor configured to access the computer program code and operate as instructed by the computer program code ([0231]; Fig. 13, reference numeral 1341), the computer program code including:
receiving code configured to cause the at least one processor to receive a coded video bitstream including a current picture, a first reference picture, and a second reference picture ([0085]; It teaches a bi-prediction technique, with two reference pictures, such as a first reference picture and a second reference picture), the current picture including a current block divided into a plurality of sub-blocks (Figs. 8A-B; Fig. 12, reference numeral S1210; [0207]; It teaches that a current block is partitioned into two sub-blocks),
determining code configured to cause the at least one processor to determine that the current picture is predicted using a bi-prediction or compound prediction mode based on the first reference picture and the second reference picture ([0085]; [0111]; It teaches that when Mv1 and Mv2 are from different reference picture lists (e.g., one from L0 and the other from L1), Mv1 and Mv2 are simply combined to form the bi-prediction motion vector. In [0089], it teaches that when the processing block is to be coded in inter mode or bi-prediction mode, the video encoder may use an inter prediction or bi-prediction technique),
obtaining code configured to cause the at least one processor to obtain a plurality of predefined weighting patterns, each weighting pattern being signaled as an index value (Table 4-5; [0110]; [0023]; It teaches a weighting index representing a weight value or weighting factor, wherein [0189] teaches a range of weight factor values with minimum and maximum weight factor values. Table 4-5 shows a plurality of weighting values corresponding to weighting indices),
selecting code configured to cause the at least one processor to select a weighting pattern based on a predetermined condition ([0190]-[0191]; Eqn. 24 shows the weighting factor selection based on conditions recited in Eqns. 25-26),
deriving code configured to cause the at least one processor to derive a first weight to be applied to a first sub-block in the first reference picture and a second weight to be applied to a second sub-block in the second reference picture based on the index value corresponding to the selected weighting pattern ([0117]-[0118]; It teaches a blending process with two weights W0 and W1 associated with two partitions P0 and P1 respectively acquired from the weighting indices of the GEO according to Eqn. 2),
assigning code configured to cause the at least one processor to assign the first weight to the first sub-block and the second weight to the second sub-block based on the selected weighting pattern ([0117]-[0118]; The first weight W0 is assigned to P0 and the second weight W1 is assigned to P1), and
decoding code configured to cause the at least one processor to decode the current block by a weighted bi-prediction based at least on the first sub-block weighted by the first weight and the second sub-block weighted by the second weight ([0117]-[0118]; Eqn. 2 shows the weighted bi-prediction value PB by the blending process of the two prediction blocks P0 and P1 with their corresponding weighting factor values W0 and W1 respectively).
Although Chen et al. teach a plurality of weighting factors as described in [0110], and weights associated with each prediction block, they do not explicitly teach, as claimed, selecting weights based on a condition and assigning a first selected weight to the first sub-block and a second selected weight to the second sub-block.
However, Chen_2, in the same field of endeavor (Abstract), teaches a decoding method that selects weights (Abstract; it teaches that the current block is predicted using a weighted sum of the reference blocks, wherein the weights may be selected from among a plurality of candidate weights) based on a condition ([0051]; It teaches the conditions for selecting weights, stating that a video encoder may either switch adaptively between several predefined assignment manners or assign each weight to a unique leaf node dynamically on a PU-by-PU basis based on the usage of weight values from previously coded blocks) and assigns a first selected weight to the first sub-block and a second selected weight to the second sub-block ([0004]; It teaches that the current block is predicted as a weighted sum of a first reference block in the first reference picture and a second reference block in the second reference picture, wherein the first reference block is weighted by the first weight and the second reference block is weighted by the second weight).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Chen et al.'s weighting index calculation for geometric partition mode blocks to include Chen_2's selection of weighting factors for each prediction block of the current block, because such prediction methods improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
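For illustration only (not part of the examination record), the blending process of Chen et al.'s Eqn. 2 cited above — a weighted bi-prediction PB formed from prediction blocks P0 and P1 with weights W0 and W1 — can be sketched as follows. The function name, the use of NumPy, and the assumption that the integer weights sum to 2^shift (e.g., 8, as is common in geometric-partition blending) are assumptions for this sketch, not details taken from either reference:

```python
import numpy as np

def blend_bi_prediction(p0, p1, w0, shift=3):
    """Blend two prediction blocks with complementary integer weights.

    Assumes the weights sum to 2**shift (e.g., 8), so the second
    weight is derived as w1 = 2**shift - w0, and the result is
    normalized with a rounding offset before the right shift.
    """
    w1 = (1 << shift) - w0
    offset = 1 << (shift - 1)  # rounding offset for the integer division
    return (w0 * p0.astype(np.int32) + w1 * p1.astype(np.int32) + offset) >> shift

# Flat 4x4 example blocks: every sample blends to (6*100 + 2*60 + 4) >> 3 = 90
p0 = np.full((4, 4), 100, dtype=np.uint8)
p1 = np.full((4, 4), 60, dtype=np.uint8)
pb = blend_bi_prediction(p0, p1, w0=6)  # weights 6/8 and 2/8
```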
Regarding claim 15, Chen et al. and Chen_2 teach the video decoder according to claim 14, wherein the predetermined condition specifies a selection index included in the coded video bitstream, and wherein the weighting pattern is selected from a plurality of the weighting patterns based on the selection index (Chen et al.; [0016]; Table 4-5; [0110]; [0023]; It teaches a weighting index representing a weight value or weighting factor, wherein [0189] teaches a range of weight factor values with minimum and maximum weight factor values. Table 4-5 shows a plurality of weighting values corresponding to weighting indices. Chen_2 also teaches the same as described in [0041], Eqn. 3, where it states that frequently-used weight values may be arranged in a set (referred to hereafter as WL1), so each weight value can be indicated by an index value).
Regarding claim 16, Chen et al. and Chen_2 teach the video decoder of claim 14, wherein the predetermined condition specifies a minimum cost measurement for selecting the weighting pattern from a plurality of weighting patterns, and wherein the cost measurement is calculated based on a template associated with the at least one sub-block, a template associated with the first sub-block, and a template associated with the second sub-block (Chen_2; [0128]-[0134], [0142]; It teaches that the weight index is used to calculate a motion estimation cost to determine which weight index gives the best estimation).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Chen et al.'s weighting index calculation for geometric partition mode blocks to include Chen_2's cost-based selection of weighting factors, because such prediction methods improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
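For illustration only (not part of the examination record), a template-based cost measurement of the kind described in Chen_2's [0128]-[0134] — evaluating each candidate weight by blending the reference templates and keeping the weight whose blend best matches the current block's template — might look like the sketch below. The use of SAD as the cost, the function name, and the candidate weight list are assumptions for this sketch, not the references' actual procedure:

```python
import numpy as np

def select_weight_by_template_cost(cur_tpl, tpl0, tpl1, candidate_w0, shift=3):
    """Return the index of the candidate weight whose blended reference
    templates give the lowest SAD cost against the current template.

    Assumes complementary integer weights summing to 2**shift.
    """
    best_idx, best_cost = 0, float("inf")
    for idx, w0 in enumerate(candidate_w0):
        w1 = (1 << shift) - w0
        offset = 1 << (shift - 1)
        # Blend the two reference templates with this candidate weight pair
        pred = (w0 * tpl0.astype(np.int32) + w1 * tpl1.astype(np.int32) + offset) >> shift
        # SAD cost against the current block's template
        cost = int(np.abs(cur_tpl.astype(np.int32) - pred).sum())
        if cost < best_cost:
            best_idx, best_cost = idx, cost
    return best_idx

cur = np.full((2, 8), 90, dtype=np.uint8)   # current block's template
t0 = np.full((2, 8), 100, dtype=np.uint8)   # template of the first reference sub-block
t1 = np.full((2, 8), 60, dtype=np.uint8)    # template of the second reference sub-block
best = select_weight_by_template_cost(cur, t0, t1, candidate_w0=[2, 4, 6])
# best == 2: w0=6 blends to exactly 90, the zero-cost candidate
```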
Regarding claim 17, Chen et al. and Chen_2 teach the video decoder of claim 14, wherein the predetermined condition indicates a weighting pattern of a neighboring sub-block that neighbors the at least one sub-block (Chen_2; [0008]; it teaches that weights are identified using the corresponding codeword, wherein the codewords to weights may be adapted based on weights used in previously-coded blocks, which are neighboring blocks).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Chen et al.'s weighting index calculation for geometric partition mode blocks to include Chen_2's usage of weighting factors of neighboring blocks, because such prediction methods improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
Regarding claim 18, Chen et al. and Chen_2 teach the video decoder of claim 14, wherein the predetermined condition indicates a plurality of weighting modes including:
(i) a first weighting mode in which the weighting pattern is selected from a plurality of weighting patterns based on a selection index included in the bitstream (Chen et al.; [0016]; Table 4-5; [0110]; [0023]; It teaches a weighting index representing a weight value or weighting factor, wherein [0189] teaches a range of weight factor values with minimum and maximum weight factor values. Table 4-5 shows a plurality of weighting values corresponding to weighting indices. Chen_2 also teaches the same as described in [0041], Eqn. 3, where it states that frequently-used weight values may be arranged in a set (referred to hereafter as WL1), so each weight value can be indicated by an index value),
(ii) a second weighting mode in which the weighting pattern is selected from a plurality of weighting patterns that minimizes a cost measurement that is calculated based on a template associated with the at least one sub-block, a template associated with the first sub-block, and a template associated with the second sub-block (Chen_2; [0128]-[0134], [0142]; It teaches that the weight index is used to calculate a motion estimation cost to determine which weight index gives the best estimation), and
(iii) a third weighting mode in which the weighting pattern is selected based on a weighting pattern of a neighboring sub-block that neighbors the at least one sub-block (Chen_2; [0008]; it teaches that weights are identified using the corresponding codeword, wherein the codewords to weights may be adapted based on weights used in previously-coded blocks, which are neighboring blocks).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Chen et al.'s weighting index calculation for geometric partition mode blocks to include Chen_2's cost-based selection of weighting factors, because such prediction methods improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
Regarding claim 19, Chen et al. and Chen_2 teach the video decoder of claim 18, wherein the selection of the one of the plurality of weighting modes is based on an indicator included in the bitstream (Chen_2; [0007]; it teaches that the set of weights is coded in the bitstream, allowing different weight sets to be adapted for use in different slices, pictures, or sequences).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Chen et al.'s weighting index calculation for geometric partition mode blocks to include Chen_2's cost-based selection of weighting factors, because such prediction methods improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
Regarding claim 20, Chen et al. teach a non-transitory computer readable medium having instructions stored therein ([0024]), which when executed by a processor in a video decoder (Fig. 7; [0103], L7-10) cause the processor to execute a method comprising:
receiving a coded video bitstream including a current picture, a first reference picture, and a second reference picture ([0085]; It teaches a bi-prediction technique, with two reference pictures, such as a first reference picture and a second reference picture), the current picture including a current block divided into a plurality of sub-blocks (Figs. 8A-B; Fig. 12, reference numeral S1210; [0207]; It teaches that a current block is partitioned into two sub-blocks);
determining that the current picture is predicted using a bi-prediction or compound prediction mode based on the first reference picture and the second reference picture ([0085]; [0111]; It teaches that when Mv1 and Mv2 are from different reference picture lists (e.g., one from L0 and the other from L1), Mv1 and Mv2 are simply combined to form the bi-prediction motion vector. In [0089], it teaches that when the processing block is to be coded in inter mode or bi-prediction mode, the video encoder may use an inter prediction or bi-prediction technique);
obtaining a plurality of predefined weighting patterns, each weighting pattern being signaled as an index value (Table 4-5; [0110]; [0023]; It teaches a weighting index representing a weight value or weighting factor, wherein [0189] teaches a range of weight factor values with minimum and maximum weight factor values. Table 4-5 shows a plurality of weighting values corresponding to weighting indices);
selecting a weighting pattern based on a predetermined condition ([0190]-[0191]; Eqn. 24 shows the weighting factor selection based on conditions recited in Eqns. 25-26);
deriving a first weight to be applied to a first sub-block in the first reference picture and a second weight to be applied to a second sub-block in the second reference picture based on the index value corresponding to the selected weighting pattern ([0117]-[0118]; It teaches a blending process with two weights W0 and W1 associated with two partitions P0 and P1 respectively acquired from the weighting indices of the GEO according to Eqn. 2);
assigning the first weight to the first sub-block and the second weight to the second sub-block based on the selected weighting pattern ([0117]-[0118]; The first weight W0 is assigned to P0 and the second weight W1 is assigned to P1); and
decoding the current block by a weighted bi-prediction based at least on the first sub-block weighted by the first weight and the second sub-block weighted by the second weight ([0117]-[0118]; Eqn. 2 shows the weighted bi-prediction value PB by the blending process of the two prediction blocks P0 and P1 with their corresponding weighting factor values W0 and W1 respectively).
Although Chen et al. teach a plurality of weighting factors as described in [0110], and weights associated with each prediction block, they do not explicitly teach, as claimed, selecting weights based on a condition and assigning a first selected weight to the first sub-block and a second selected weight to the second sub-block.
However, Chen_2, in the same field of endeavor (Abstract), teaches a decoding method that selects weights (Abstract; it teaches that the current block is predicted using a weighted sum of the reference blocks, wherein the weights may be selected from among a plurality of candidate weights) based on a condition ([0051]; It teaches the conditions for selecting weights, stating that a video encoder may either switch adaptively between several predefined assignment manners or assign each weight to a unique leaf node dynamically on a PU-by-PU basis based on the usage of weight values from previously coded blocks) and assigns a first selected weight to the first sub-block and a second selected weight to the second sub-block ([0004]; It teaches that the current block is predicted as a weighted sum of a first reference block in the first reference picture and a second reference block in the second reference picture, wherein the first reference block is weighted by the first weight and the second reference block is weighted by the second weight).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Chen et al.'s weighting index calculation for geometric partition mode blocks to include Chen_2's selection of weighting factors for each prediction block of the current block, because such prediction methods improve the operation of video encoders and decoders by decreasing, in at least some implementations, the number of bits required to encode and decode video (Chen_2; [0010]).
Allowable Subject Matter
Claims 7-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim 7 recites two flags, of which the first flag indicates whether the selected weighting pattern applies equal weighting to both sub-blocks and, if the first flag indicates unequal weighting, the second flag indicates selection of one of the first weighting mode and the second weighting mode. Neither of the two references of Chen et al. and Chen_2 teaches this limitation. Although the reference of Li et al. (US PGPub 2020/0336749 A1) teaches a flag to indicate equal weighting for the two blocks, it does not teach a second flag, based on the value of the first flag, indicating selection of one of the first weighting mode and the second weighting mode. Claims 8-12 are directly or indirectly dependent upon the objected claim 7. Therefore, any of the claims 7-12 would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
1. “METHOD AND APPARATUS FOR VIDEO CODING” – Li et al., US PGPub 2020/0336749 A1.
2. “METHOD AND APPARATUS FOR VIDEO CODING” – Li et al., US PGPub 2022/0210456 A1.
3. “WEIGHTED PREDICTION FOR VIDEO CODING” – Han et al., US PGPub 2020/0213586 A1.
4. “Generalized Bi-prediction Method for Future Video Coding” – Chen et al., 978-1-5090-5966-9/16/$31.00 ©2016 IEEE.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAINUL HASAN whose telephone number is (571)272-0422. The examiner can normally be reached on MON-FRI: 10AM-6PM, Alternate FRIDAYS, EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JAY PATEL can be reached on (571)272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Mainul Hasan/
Primary Examiner, Art Unit 2485