DETAILED ACTION
This Office Action is in response to the Amendment filed on 10/21/2025 and is submitted as a Non-Final for the reasons presented below.
In the filed response, claims 1, 4, 6, 8, 14, 17, and 19 have been amended, where claims 1, 8, and 14 are independent claims. Further, claims 3, 10, and 16 have been canceled with claim 18 being previously canceled, and new claims 21-22 have been added.
Accordingly, Claims 1-2, 4-9, 11-15, 17, and 19-22 have been examined and are pending.
Response to Arguments
1. Applicant’s arguments, see pgs. 8-10, filed 10/21/2025, with respect to the prior art rejections of claims 1-17, 19, and 20 under 35 U.S.C. 102 have been fully considered and are persuasive. Therefore, the rejections have been withdrawn. However, upon further consideration of the prior art, a new ground of rejection is made in view of Pu. For this reason, this office action is filed as a Non-Final. Please see the examiner’s responses below for details.
2. Based on Applicant’s arguments, the examiner agrees that Pu does not reasonably address the limitation (of now canceled claim 3) “determining the bit depth associated with the weighted prediction is further based on a variable indicating a desired bit depth for the weighted prediction” (emphasis added), which has been incorporated into claims 1, 8, and 14, since a bit depth “based on a variable indicating a desired bit depth for the weighted prediction” does not appear to be the same as “the bit depth associated with the input video”. Thus, Pu’s ‘bitDepth’ (e.g. ¶0046) cannot reasonably represent both of the aforementioned features as claimed. For this reason, the rejection has been withdrawn. However, after carefully considering Pu’s teachings in light of Applicant’s response, the examiner respectfully submits that the claimed variable “indicating a desired bit depth for the weighted prediction” can be reasonably construed to mean any variable that is capable of representing a desired bit depth for weighted prediction. For example, ¶0108-¶0110 of Pu describe the variable “shift1”, which appears to be used for converting the bit depth of the weighted prediction sample to a desired bit depth, based on shift1 being set equal to Max(2, 14-bitDepth). Although the term ‘bitDepth’ is associated with the input video as recited earlier in the claim, shift1 is a variable that, in the examiner’s view, can be set and used for indicating the bit depth for weighted prediction as now required, given the broadest reasonable interpretation (BRI) of the limitation. In other words, shift1 appears to modify the bitDepth used such that it is not the same as the bit depth of the input video, but instead can be seen as a desired bit depth for weighted prediction based on its set value (¶0108).
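Purely as an illustrative aid (not part of the record), the bit-depth arithmetic described above can be sketched as follows. The shift1 formula follows the cited paragraphs of Pu; the "working bit depth" line reflects the examiner's reading of how shift1 modifies the input bit depth and is an assumption, not a verbatim reproduction of Pu:

```python
# Illustrative sketch of the shift1 arithmetic discussed above.
# shift1 follows Pu's stated formula; the working-bit-depth line is the
# examiner's interpretation of how shift1 indicates a desired bit depth.

def shift1_for(bit_depth):
    # Per the cited paragraphs: shift1 = Max(2, 14 - bitDepth)
    return max(2, 14 - bit_depth)

for bit_depth in (8, 10, 12, 14):
    s1 = shift1_for(bit_depth)
    # Intermediate samples are understood to be held at roughly
    # bitDepth + shift1 bits, i.e. a working bit depth distinct from
    # the bit depth of the input video.
    print(bit_depth, s1, bit_depth + s1)
```

For an 8-bit input, for example, shift1 = 6, so the working precision differs from the input bit depth; for 12-bit or higher input, shift1 bottoms out at 2.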
For these reasons, which are further elaborated on in the office action below, the examiner respectfully submits that prior art Pu reasonably teaches and/or suggests, either alone or in combination, all of the disclosed features of instant claims 1, 8, and 14, given their BRI.
3. Additional prior art deemed relevant includes Wang et al. WO 2021/047632 A1 and Chen et al. US 9,521,434 (PTO 892), hereinafter referred to as Wang and Chen, respectively. For example, in Wang (e.g. abstract), when the BCW (bi-directional weighting) tool is used, a right shift operation for the final prediction sample converts a bit depth of said sample, wherein the right shift operation is pbSample >> (shift1+3), pbSample represents the final prediction sample, and shift1 is set equal to Max(2, 14-bitDepth). As in Pu, ‘shift1’ can be construed as the claimed “variable” given the BRI of “based on a variable indicating a desired bit depth for the weighted prediction”. As to Chen, Chen discusses an internal bit depth increase (IBDI) in video coding (col. 2, lines 1-5). In particular, Chen shows that the internal bit depth generally refers to a bit depth used for calculations internal to the video encoder/decoder, which includes weighted prediction (e.g. col. 3, lines 36-48 and col. 14, lines 20-34). Also please note figs. 2-4. In other words, Chen appears to distinguish between using an “original” bit depth (the bit depth at which the video was received) and an “increased/decreased” bit depth (e.g. col. 4, lines 35-67), which in turn can be construed as being analogous to the bit depth of the input video (e.g. bitDepth) and a variable indicating a desired bit depth for the weighted prediction (e.g. IBDI), respectively. Although both Wang and Chen are deemed relevant to the claims as presented given their BRI, this office action relies solely on Pu’s teachings as indicated below.
4. Applicant’s response to the nonstatutory double patenting rejection in the office action dated 07/23/2025 (see pg. 5) is acknowledged. As such, the rejection of instant claims 1-2, 4-9, 11-15, 17, and 19-22 is maintained over claims 1, 3, and 13 of copending Application No. 18/372,134 (the “reference application”) in view of Pu. The rejection has been updated to reflect the recent amendments of both applications. Please see below for details.
5. The examiner is available to discuss the matters of this office action to help move the Instant Application forward. To help with advancing prosecution, it is recommended that the claimed “variable” be further defined. Please refer to the conclusion to this office action regarding scheduling interviews.
6. In light of the foregoing, Claims 1-2, 4-9, 11-15, 17, and 19-22 have been examined and are pending.
Double Patenting
7. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-2, 4-9, 11-15, 17, and 19-22 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, and 13 of copending Application No. 18/372,134, in view of Pu et al. US 2015/0098503 A1, hereinafter referred to as 134 and Pu, respectively. With Pu’s teachings, high precision explicit weighted prediction can be realized, which may improve coding efficiency for high bit depth input video (e.g. ¶0056). This is a provisional nonstatutory double patenting rejection. Details of the claim mapping between the claim sets are provided in Table 1 below for reference.
Table 1
Note: Items in BOLD/UNDERLINE in the Instant Application and Co-pending Application entries, respectively, indicate differences in the claim limitations.
Instant Application: 18/339,360. Co-pending Application (134): 18/372,134.

Instant claim 1: A computer-implemented method for encoding or decoding a video comprising: determining a bit depth associated with an input video; determining a bit depth associated with a weighted prediction of the input video based on the bit depth associated with the input video; determining a weighting factor and an offset value of the weighted prediction based on the bit depth associated with the weighted prediction; and processing the input video based on the weighting factor and the offset value of the weighted prediction, wherein the determining the bit depth associated with the weighted prediction is further based on a variable indicating a desired bit depth for the weighted prediction. [See Pu below for support]
Note – the weighting factor is discussed in claim 13 of 134.
134 claim 1: A computer-implemented method for encoding or decoding an input video comprising: determining a bit depth of the input video; determining for the input video, a bit depth of offset values for weighted prediction based on the bit depth of the input video; determining weighted prediction values for pictures of the input video based on an application of the offset values to prediction values for the pictures of the input video; and processing the input video based on the weighted prediction values and the offset values for weighted prediction.
134 claim 13: The encoder of claim 9, wherein the application of the weighted offset values for weighted prediction to the prediction values for the pictures of the input video is further based on weighting factors associated with the pictures.

Instant claim 2: The computer-implemented method of claim 1, wherein the bit depth associated with the weighted prediction is the same as the bit depth associated with the input video.
Note – since claim 3 of 134 recites “offset values for weighted prediction”, claim 3 is deemed relevant.
134 claim 3: The computer-implemented method of claim 1, wherein the bit depth of the input video is 12-bit or greater, and the bit depth of the offset values for weighted prediction is the same as the bit depth of the input video.

Instant claim 4: The computer-implemented method of claim 1, wherein the determining the weighting factor and the offset value of the weighted prediction is further based on a left shift by a number of bits based on the variable indicating the desired bit depth and the bit depth associated with the input video.
134: Not in 134.

Instant claim 5: The computer-implemented method of claim 1, further comprising: determining a pixel value of a picture in the input video based on the weighting factor and the offset value of the weighted prediction and a reference pixel value of a reference picture in the input video.
Note – although claim 13 of 134 is deemed relevant, Pu below is brought in for more explicit support.
134 claim 13: The encoder of claim 9, wherein the application of the weighted offset values for weighted prediction to the prediction values for the pictures of the input video is further based on weighting factors associated with the pictures.

Instant claim 6: The computer-implemented method of claim 5, wherein the pixel value of the picture in the input video is clipped to a minimum pixel value or a maximum pixel value.
Note – please see Pu below for corresponding support.
134: Not in 134.

Instant claim 7: The computer-implemented method of claim 1, wherein the processing the input video includes encoding the input video or decoding the input video.
134: See claim 1 of 134.

Instant claim 8: Similar to claim 1 above.
134: See claim 1 of 134.

Instant claim 9: Similar to claim 2 above.
134: See claim 3 of 134.

Instant claim 11: The computing system of claim 8, wherein the instructions, when executed by the at least one processor, further cause the computing system to perform: determining a pixel value of a picture in the input video based on the weighting factor and the offset value of the weighted prediction and a reference pixel value of a reference picture in the input video.
134 claim 13: The encoder of claim 9, wherein the application of the weighted offset values for weighted prediction to the prediction values for the pictures of the input video is further based on weighting factors associated with the pictures.
Note – although a reference picture is not disclosed, weighted prediction is applied to reference pictures in reference lists.

Instant claim 12: Please see Pu below for corresponding support.
134: Not in 134.

Instant claim 13: Please see Pu below for corresponding support.
134: Not in 134.

Instant claim 14: Similar to claim 1 above.
134: See claim 1 of 134.

Instant claim 15: Similar to claim 2 above.
134: See claim 3 of 134.

Instant claim 17: Similar to claim 4 above.
134: Not in 134.

Instant claim 19: Similar to claims 5 and 6 above.
134: Not in 134.

Instant claim 20: Similar to claim 5 above.
134: Not in 134.

Instant claim 21: The computer-implemented method of claim 1, wherein the processing the input video comprises: scaling a reference pixel of a reference picture by the weighting factor; applying the offset value to the reference pixel of the reference picture; and clipping a pixel value determined from the scaling and the applying to a minimum pixel value or a maximum pixel value.
Note – the foregoing limitation is similar to claims 5-6 above. As to “clipping a pixel value determined from the scaling and the applying to a minimum pixel value or a maximum pixel value”, see Pu below for support.
134: See claim 13 of 134. Also see Pu.

Instant claim 22: A transmission method for a bitstream, comprising: generating the bitstream; and transmitting the bitstream, wherein the bitstream is generated according to the method of claim 1.
Note – regarding “transmitting the bitstream”, see Pu below for support.
134: See claims 1 and 13 of 134.
Obviousness rationale:
As to claim 1, 134 does not address “wherein the determining the bit depth associated with the weighted prediction is further based on a variable indicating a desired bit depth for the weighted prediction”; however, Pu from the same or similar field of endeavor is found to teach and/or suggest this feature. ¶0108-¶0110 of Pu, for example, describe the variable “shift1”, which appears to be used for converting the bit depth of the weighted prediction sample to a desired bit depth, based on shift1 being set equal to Max(2, 14-bitDepth). Although the term ‘bitDepth’ is associated with the input video as recited earlier in the claim, shift1 is a variable that, in the examiner’s view, can be set and used for indicating the bit depth for weighted prediction as required, given the BRI of the amended features. In other words, shift1 appears to modify the bitDepth used such that it is not the same as the bit depth of the input video but instead can be seen as a desired bit depth for weighted prediction. Given Pu’s teachings above, it would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the video coding techniques in 134 with the teachings of Pu for high precision explicit weighted prediction, which may improve coding efficiency for high bit depth input video (e.g. ¶0056).
As to claim 4, 134 does not address the recited features; however, Pu from the same or similar field of endeavor is found to teach and/or suggest these. Pu discusses computing weighting factors and offsets with respect to a left shift (1<<(bitDepth-8)) if the bit depth associated with the weighted prediction parameters is 8 bits (see, e.g., ¶0085-¶0103), or without the left shift if the bit depth of the weighted prediction parameters is determined to be equal to “bitDepth” (the bit depth of the input video). The motivation for bringing in Pu’s teachings is the same as that presented for claim 1.
As to claim 5, while claim 13 of 134 does appear to suggest the recited features, Pu from the same or similar field of endeavor is brought in to more explicitly teach and/or suggest claim 5. See ¶0052-¶0055 with respect to equations 8-253, 8-254, and 8-255, with pixel values of a picture in the input video “predSamples[x][y]”, weighting factors “w0” and “w1”, offsets “o0” and “o1”, and reference pixel values of a reference picture “predSamplesL0[x][y]” and “predSamplesL1[x][y]”. The motivation for bringing in Pu’s teachings is the same as that presented for claim 1.
As to claims 6, 19, and 21, 134 does not address the limitation associated with “clipping”. However, Pu from the same or similar field of endeavor is found to teach and/or suggest the aforementioned features. Please see, e.g., ¶0052-¶0056 with respect to equations 8-253, 8-254, and 8-255, along with ¶0150-¶0151 and ¶0155-¶0156, for both weighting factors and offset values employed in weighted prediction for video coding. The motivation for bringing in Pu’s teachings is the same as that presented for claim 1.
As to claim 12, 134 does not address the recited features. As such, Pu from the same or similar field of endeavor is brought in to teach and/or suggest these features. See, e.g., ¶0107 and Table 2 in Pu. If the bit depth associated with luma and chroma is > 10, the syntax element “use_high_precision_weighted_prediction_flag” can be signaled to indicate whether to use high precision weighted prediction. The motivation for bringing in Pu’s teachings is the same as that presented for claim 1.
As to claim 13, 134 does not address the recited features. As such, Pu from the same or similar field of endeavor is brought in to teach and/or suggest these features. Please refer to luma_offset_l0[i], luma_offset_l1[i], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] (see, e.g., ¶0091-¶0097 and ¶0150 in Pu for support). The motivation for bringing in Pu’s teachings is the same as that presented for claim 1.
As to claim 22, 134 does not address the recited features in relation to “transmitting the bitstream”. As such, Pu from the same or similar field of endeavor is brought in to teach and/or suggest these features. Please refer to fig. 1 with respect to a video encoder for generating a bitstream and transmitting said bitstream to a decoder of a destination device. The motivation for bringing in Pu’s teachings is the same as that presented for claim 1.
Claim Rejections - 35 USC § 102
8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4-9, 11-15, 17, and 19-22 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Pu et al. US 2015/0098503 A1, hereinafter referred to as Pu, where Pu discloses high precision explicit weighted prediction for coding video (e.g. abstract). Please see below for details.
Regarding claim 1, (Currently Amended) Pu teaches and/or suggests “A computer-implemented method for encoding or decoding a video comprising: determining a bit depth associated with an input video [Pu teaches a bit depth of the input video (e.g. ¶0019). See, e.g., bitDepth in ¶0046]; determining a bit depth associated with a weighted prediction of the input video based on the bit depth associated with the input video [¶0019 further describes matching the bit depth of explicit weighted prediction parameters to the bit depth of the input video in relation to the disclosed techniques for HEVC range extension] and an extended precision flag [See syntax element “use_high_precision_weighted_prediction_flag” (e.g. ¶0104-¶0106)], wherein the extended precision flag indicates whether a weighting factor and/or an offset value of the weighted prediction is the same as or different from the bit depth associated with the input video [Depending on the state of the “use_high_precision_weighted_prediction_flag” (0 or 1), the bit depth of the weighted prediction parameters can be set to a ‘high bit depth’ (e.g. > 10 bits as shown in ¶0106-¶0107) or matched to the bit depth of the input video (¶0019), i.e. the same]; determining the weighting factor and the offset value of the weighted prediction based on the bit depth associated with the weighted prediction [Weighting factors and offsets are calculated via a left shift (1 << (bitDepth – 8)) when the bit depth of the weighted prediction parameters is determined to be 8-bit (e.g. ¶0085-¶0103). When the bit depth of the weighted prediction parameters is determined to be equal to the bit depth of the input video bitDepth, the left shift is not performed (¶0103). Also please refer to figs. 4-5]; and processing the input video based on the weighting factor and the offset value of the weighted prediction [Please refer to figs. 4-5], wherein the determining the bit depth associated with the weighted prediction is further based on a variable indicating a desired bit depth for the weighted prediction” [Given the BRI of the foregoing limitation, variable “shift1” (e.g. ¶0108-¶0110) can be set equal to Max(2, 14-bitDepth), which appears to modify the input bit depth “bitDepth” to “bitDepth + shift1 + 1 (scaling from interpolation) + 1 (sign bit from interpolation)”. In other words, “shift1” is a variable understood to set a desired bit depth for performing weighted prediction]
Regarding claim 2, (Original) Pu teaches/suggests all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim. Pu further teaches and/or suggests “wherein the bit depth associated with the weighted prediction is the same as the bit depth associated with the input video.” [Please refer to ¶0019 of Pu with respect to matching the bit depth of explicit weighted prediction parameters to the bit depth of the input video, i.e. the same]
Regarding claim 4, (Original) Pu teaches/suggests all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim. Pu further teaches and/or suggests “wherein the determining the weighting factor and the offset value of the weighted prediction is further based on a left shift by a number of bits based on the variable indicating the desired bit depth and the bit depth associated with the input video.” [Pu discusses computing the weighting factors and offsets with respect to a left shift (1<<(bitDepth-8)) if the bit depth associated with the weighted prediction parameters is 8 bits (see, e.g., ¶0085-¶0103), or without the left shift if the bit depth of the weighted prediction parameters is determined to be equal to “bitDepth” (the bit depth of the input video)]
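Purely as an illustrative aid (not part of the record, and not Pu's literal code), the left-shift scaling of weighted prediction parameters described above can be sketched as follows; the function and parameter names are the examiner's for illustration only:

```python
# Illustrative sketch (hypothetical names): scaling a weighted prediction
# parameter signaled at 8-bit precision up to the working bit depth, as
# described in the cited paragraphs.

def scale_wp_param(value, bit_depth, params_are_8bit):
    if params_are_8bit:
        # Parameters at 8-bit precision are left-shifted by (bitDepth - 8)
        # to match the bit depth of the input video.
        return value << (bit_depth - 8)
    # High-precision mode: parameters already at bitDepth; no shift.
    return value

print(scale_wp_param(16, 10, True))   # 16 << 2 = 64
print(scale_wp_param(64, 10, False))  # unchanged: 64
```

The point of the sketch is only that the same signaled value lands at different precisions depending on whether the left shift is applied.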
Regarding claim 5, (Original) Pu teaches/suggests all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim. Pu further teaches and/or suggests “further comprising: determining a pixel value of a picture in the input video based on the weighting factor and the offset value of the weighted prediction and a reference pixel value of a reference picture in the input video.” [See ¶0052-¶0055 with respect to equations 8-253, 8-254, and 8-255, with pixel values of a picture in the input video “predSamples[x][y]”, weighting factors “w0”, “w1”, offsets “o0” and “o1”, and reference pixel value of a reference picture “predSamplesL0[x][y]” and “predSamplesL1[x][y]”]
Regarding claim 6, (Original) Pu teaches/suggests all the limitations of claim 5, and is analyzed as previously discussed with respect to that claim. Pu further teaches and/or suggests “wherein the pixel value of the picture in the input video is clipped to a minimum pixel value or a maximum pixel value.” [See Clip3 operator in equations 8-253, 8-254, and 8-255 which clips the pixel values to a minimum pixel value of 0 or a maximum pixel value of ((1<<bitDepth)-1)]
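Purely as an illustrative aid (not part of the record), a clipped weighted bi-prediction sample in the general HEVC-style form of the cited equations can be sketched as follows; the exact rounding terms here are an assumption modeled on that general form, not a verbatim reproduction of Pu's equations 8-253 thru 8-255:

```python
# Illustrative sketch of clipped explicit weighted bi-prediction in the
# general HEVC-style form of the cited equations (rounding terms assumed).

def clip3(lo, hi, x):
    # Clip3 bounds x to [lo, hi], as used in the cited equations.
    return max(lo, min(hi, x))

def weighted_bipred_sample(p0, p1, w0, w1, o0, o1, log2wd, bit_depth):
    # Combine two reference samples with weights and offsets, then clip
    # the result to the valid pixel range [0, (1 << bitDepth) - 1].
    val = (p0 * w0 + p1 * w1 + ((o0 + o1 + 1) << log2wd)) >> (log2wd + 1)
    return clip3(0, (1 << bit_depth) - 1, val)
```

The clip step is what the claim-6 limitation maps to: whatever the scaling and offsets produce, the final pixel value is bounded to the minimum (0) and maximum ((1<<bitDepth)-1) representable values.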
Regarding claim 7, (Original) Pu teaches/suggests all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim. Pu further teaches and/or suggests “wherein the processing the input video includes encoding the input video or decoding the input video.” [See fig. 4 and ¶0149-¶0153 along with fig. 5 and ¶0154-¶0158]
Regarding claim 8, claim 8 is rejected under the same art and evidentiary limitations as determined for the method of Claim 1. As to the claimed hardware and software of the computing system, please see encoder 20 and decoder 30 of Pu in for e.g. figs. 2 and 3, respectively.
Regarding claim 9, claim 9 is rejected under the same art and evidentiary limitations as determined for the method of Claim 2.
Regarding claim 11, claim 11 is rejected under the same art and evidentiary limitations as determined for the method of Claim 5.
Regarding claim 12, (Original) Pu teaches/suggests all the limitations of claim 8, and is analyzed as previously discussed with respect to that claim. Pu further teaches and/or suggests “wherein the determining the bit depth associated with the weighted prediction includes determining a bit depth of weighted prediction values for luma based on a bit depth luma of the input video and determining a bit depth of weighted prediction values for chroma based on a bit depth chroma of the input video.” [See for e.g. ¶0107 and Table 2 in Pu. If the bit depth associated with luma and chroma is > 10, the syntax “use_high_precision_weighted_prediction_flag” can be signaled to indicate whether to use high precision weighted prediction]
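Purely as an illustrative aid (not part of the record), the signaling condition described above can be sketched as follows; the function name is the examiner's, and reading the condition as requiring both luma and chroma bit depths to exceed 10 is an assumption based on the cited Table 2:

```python
# Illustrative sketch (hypothetical name): per the cited Table 2, the
# use_high_precision_weighted_prediction_flag may be signaled only when
# the luma/chroma bit depths exceed 10 (one reading of the condition).

def may_signal_high_precision_flag(bit_depth_luma, bit_depth_chroma):
    return bit_depth_luma > 10 and bit_depth_chroma > 10
```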
Regarding claim 13, (Original) Pu teaches/suggests all the limitations of claim 8, and is analyzed as previously discussed with respect to that claim. Pu further teaches and/or suggests “wherein the determining the weighting factor and the offset value of the weighted prediction includes determining additive offset values for luma that are applied to luma prediction values for a reference picture and determining offset deltas for chroma that are applied to chroma prediction values for the reference picture.” [Please refer to luma_offsetl0[i], luma_offset_l1[i], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] (see for e.g. ¶0091-¶0097 and ¶0150 in Pu for support)]
Regarding claim 14, claim 14 is rejected under the same art and evidentiary limitations as determined for the method of Claim 1. As to the claimed hardware and software of the computing system, please see encoder 20 and decoder 30 of Pu in for e.g. figs. 2 and 3, respectively.
Regarding claim 15, claim 15 is rejected under the same art and evidentiary limitations as determined for the method of Claim 2.
Regarding claim 17, claim 17 is rejected under the same art and evidentiary limitations as determined for the method of Claim 4.
Regarding claim 19, (Currently Amended) Pu teaches/suggests all the limitations of claim 14, and is analyzed as previously discussed with respect to that claim. Pu further teaches and/or suggests “wherein the processing the input video comprises: scaling a first reference pixel of a first reference picture by a first weighting factor [Weighting factor “w0” (i.e. a 1st weighting factor) scales a first reference pixel value of a first reference picture corresponding to a first reference picture list 0, i.e. “predSamplesL0[x][y]”. See, e.g., ¶0052-¶0056 and eq. 8-253 thru 8-255]; scaling a second reference pixel of a second reference picture by a second weighting factor [Weighting factor “w1” (i.e. a 2nd weighting factor) scales a second reference pixel value of a second reference picture corresponding to a second reference picture list 1, i.e. “predSamplesL1[x][y]”. See citations above]; applying a first offset value to the first reference pixel of the first reference picture [See citations above with respect to a first offset value “o0” that can be applied to said first reference pixel value of a first reference picture corresponding to a first reference picture list 0. Also please see ¶0150-¶0151 and ¶0155-¶0156 regarding weighting factors and offset values]; applying a second offset value to the second reference pixel of the second reference picture [See citations above with respect to a second offset value “o1” that can be applied to said second reference pixel value of a second reference picture corresponding to a second reference picture list 1. Also please see ¶0150-¶0151 and ¶0155-¶0156 regarding weighting factors and offset values]; and clipping a pixel value determined from the scaling the first reference pixel [Please refer to the Clip3 operator in equation 8-253 for clipping a pixel value from scaling (“w0”) the first reference pixel], the scaling the second reference pixel [Please refer to the Clip3 operator in equation 8-254 for clipping a pixel value from scaling (“w1”) the second reference pixel], the applying the first offset value [Eq. 8-253 further shows applying first offset value “o0”], and the applying the second offset value to a minimum pixel value or a maximum pixel value. [Eq. 8-254 further shows applying second offset value “o1”. The clipping operation clips the pixel values to a minimum pixel value of 0 or a maximum pixel value of ((1<<bitDepth)-1)]
Regarding claim 20, claim 20 is rejected under the same art and evidentiary limitations as determined for the method of Claim 5.
Regarding claim 21, (New) Pu teaches/suggests all the limitations of claim 1, and is analyzed as previously discussed with respect to that claim. Pu further teaches and/or suggests “wherein the processing the input video comprises: scaling a reference pixel of a reference picture by the weighting factor [Weighting factors “w0” and “w1” for scaling a reference pixel value of a reference picture corresponding to a reference picture list 0 and a reference pictures list 1, i.e. “predSamplesL0[x][y] and “predSamplesL1[x][y]”, respectively (e.g. ¶0052-¶0056 and equations 8-253 thru 8-255]; applying the offset value to the reference pixel of the reference picture [Please refer to citations above regarding offset values. Similar support can also be found in ¶0150-¶0151 and ¶0155-¶0156 for both weighting factors and offset values]; and clipping a pixel value determined from the scaling and the applying to a minimum pixel value or a maximum pixel value.” [The Clip3 operator in equations 8-253 thru 8-255 clips the pixel values to a minimum pixel value of 0 or a maximum pixel value of ((1<<bitDepth)-1)]
Regarding claim 22, (New) Pu further teaches and/or suggests “A transmission method for a bitstream, comprising: generating the bitstream; and transmitting the bitstream [See fig. 1 with respect to a video encoder for generating a bitstream and transmitting said bitstream to a decoder of a destination device], wherein the bitstream is generated according to the method of claim 1.” [See citations of Pu in claim 1 above]
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see PTO 892 for additional references, for example, Wang and Chen noted in the examiner’s response #3 above.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD A HANSELL JR. whose telephone number is (571)270-0615. The examiner can normally be reached Mon - Fri 10 am- 7 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICHARD A HANSELL JR./Primary Examiner, Art Unit 2486