DETAILED ACTION
1. This Office Action is in response to Application 18966783 filed on 07/09/2025. Claims 2-25 are pending.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
3. The information disclosure statements (IDS) submitted on 07/09/2025 and 11/19/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Priority
4. Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 17441040, filed on 09/20/2022.
Priority No.    Filing Date    Country
1903844.7       2019-03-20     GB
1904014.6       2019-03-23     GB
1904492.4       2019-03-29     GB
Double Patenting
5. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
6. Claims 2-16 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-14 of US Patent 12160601, as indicated below.
For claim 2 and its dependent claims 3-16, although the conflicting claims are not identical, both are directed to a method of encoding an input video into a plurality of encoded streams. As indicated in the table below, each claimed limitation of claim 2 and its dependent claims 3-16 of the current application is anticipated by the corresponding limitations of claims 1-14 of the reference patent.
US 12160601
Current Application
Claim 1:
A method of encoding an input video into a plurality of encoded streams, wherein the encoded streams may be combined to reconstruct the input video, the method comprising:
receiving an input video;
downsampling the input video to create a downsampled video;
instructing an encoding of the downsampled video using a base encoder to create a base encoded stream;
instructing a decoding of the base encoded stream using a base decoder to generate a reconstructed video;
comparing the reconstructed video to the downsampled video to create a first set of residuals;
decoding the first set of residuals to generate a decoded first set of residuals;
correcting the reconstructed video using the decoded first set of residuals to generate a corrected reconstructed video;
upsampling the corrected reconstructed video to generate an up-sampled reconstructed video;
comparing the up-sampled reconstructed video to the input video to create a second set of residuals;
encoding the first set of residuals to create a first level encoded stream and encoding the second set of residuals to create a second level encoded stream,
the encoding including: applying a respective transform to the first set of residuals and the second set of residuals to create a respective first set of coefficients and a second set of coefficients;
applying a respective quantization operation to the first set of coefficients and the second set of coefficients to create a first set of quantized coefficients and a second set of quantized coefficients;
and applying a respective encoding operation to the first set of quantized coefficients and the second set of quantized coefficients, wherein applying the quantization operation comprises:
respectively adapting the quantization according to the first set of coefficients and the second set of coefficients to be quantized, including varying a step-width used for different ones of the first set of coefficients and the second set of coefficients, wherein a first set of parameters and a second set of parameters derived from the adapting is signalled to a decoder to enable dequantization of the first set of quantized coefficients and the second set of quantized coefficients.
Claim 1’s limitation:
decoding the first set of residuals to generate a decoded first set of residuals;
correcting the reconstructed video using the decoded first set of residuals to generate a corrected reconstructed video;
upsampling the corrected reconstructed video to generate an up-sampled reconstructed video;
comparing the up-sampled reconstructed video to the input video to create a second set of residuals;
encoding the first set of residuals to create a first level encoded stream and encoding the second set of residuals to create a second level encoded stream,
the encoding including: applying a respective transform to the first set of residuals and the second set of residuals to create a respective first set of coefficients and a second set of coefficients; applying a respective quantization operation to the first set of coefficients and the second set of coefficients to create a first set of quantized coefficients and a second set of quantized coefficients; and applying a respective encoding operation to the first set of quantized coefficients and the second set of quantized coefficients, wherein applying the quantization operation comprises: respectively adapting the quantization according to the first set of coefficients and the second set of coefficients to be quantized, including varying a step-width used for different ones of the first set of coefficients and the second set of coefficients, wherein a first set of parameters and a second set of parameters derived from the adapting is signalled to a decoder to enable dequantization of the first set of quantized coefficients and the second set of quantized coefficients
Claim 2:
wherein one or more of the first set of parameters and the second set of parameters are signalled using a quantization matrix.
Claim 3:
transmitting a quantization matrix mode parameter indicating how values within the quantization matrix are to be applied to one or more of the first set of coefficients and the second set of coefficients.
Claim 4:
wherein the quantization matrix mode parameter indicates one of the following modes: a first mode wherein the decoder is to use a set of values within the quantization matrix for both the first level encoded stream and the second level encoded stream; a second mode wherein the decoder is to use a set of values within the quantization matrix for the first level encoded stream; a third mode wherein the decoder is to use a set of values within the quantization matrix for the second level encoded stream; and a fourth mode wherein two quantization matrices are signalled for each of the first level encoded stream and the second level encoded stream.
Claim 5:
wherein the first and second set of parameters comprise signalling to indicate that a default set of one or more quantization matrices are to be used at the decoder.
Claim 6:
combining at least the first level encoded stream and the second level encoded stream into a combined encoded stream; and transmitting the combined encoded stream to the decoder for use in reconstructing the input video together with a received base encoded stream.
Claim 7:
wherein the combined encoded stream comprises the base encoded stream.
Claim 8:
wherein applying the quantization operation comprises quantizing coefficients using a linear quantizer, wherein the linear quantizer uses a dead zone of variable size.
Claim 9:
wherein the quantization operation further comprises using a quantization offset.
Claim 10:
wherein the quantization offset is selectively signalled to the decoder.
Claim 11:
comprising adapting the distribution used in the quantization step.
Claim 12:
wherein adapting the quantization is predetermined and/or selectively applied based on analysis of any one or more of: the input video, a downsampled video, a reconstructed video, and an upsampled video.
Claim 13:
wherein adapting the quantization is selectively applied based on a predetermined set of rules and/or determinatively applied based on an analysis or feedback of decoding performance
Claim 14:
wherein encoding residuals comprises applying the encoding to blocks of residuals that are associated with a frame of the input video, wherein each block is encoded without using image data from another block in the frame such that each block is encodable in parallel, wherein each element location in the block has a respective quantization parameter for varying the step-width.
Claim 2
A method of encoding an input video into a plurality of encoded streams, wherein the encoded streams may be combined to reconstruct the input video, the method comprising:
receiving an input video;
downsampling the input video to create a downsampled video;
instructing an encoding of the downsampled video using a base encoder to create a base encoded stream;
instructing a decoding of the base encoded stream using a base decoder to generate a reconstructed video;
comparing the reconstructed video to the downsampled video to create a first set of residuals;
and, encoding the first set of residuals to create a first level encoded stream, including:
applying a transform to the first set of residuals to create a first set of coefficients;
applying a quantization operation to the first set of coefficients to create a first set of quantized coefficients;
and applying an encoding operation to the first set of quantized coefficients, wherein applying the quantization operation comprises:
adapting the quantization based on the first set of coefficients to be quantized, including varying a step-width used for different ones of the first set of coefficients, wherein a first set of parameters derived from the adapting is signalled to a decoder to enable dequantization of the first set of quantized coefficients.
Claim 3
decoding the first set of residuals to generate a decoded first set of residuals; correcting the reconstructed video using the decoded first set of residuals to generate a corrected reconstructed video; upsampling the corrected reconstructed video to generate an up-sampled reconstructed video; comparing the up-sampled reconstructed video to the input video to create a second set of residuals; and encoding the second set of residuals to create a second level encoded stream, including: applying a transform to the second set of residuals to create a second set of coefficients; applying a quantization operation to the second set of coefficients to create a second set of quantized coefficients; and applying an encoding operation to the second set of quantized coefficients, wherein applying the quantization operation comprises: adapting the quantization based on the second set of coefficients to be quantized, including varying a step-width used for different ones of the second set of coefficients, wherein a second set of parameters derived from the adapting is signalled to a decoder to enable dequantization of the quantized coefficients.
Claim 4
wherein one or more of the first set of parameters and the second set of parameters are signalled using a quantization matrix.
Claim 5
transmitting a quantization matrix mode parameter indicating how values within the quantization matrix are to be applied to one or more of the first set of coefficients and the second set of coefficients.
Claim 6
wherein the quantization matrix mode parameter indicates one of the following modes: a first mode wherein the decoder is to use a set of values within the quantization matrix for both the first level encoded stream and the second level encoded stream; a second mode wherein the decoder is to use a set of values within the quantization matrix for the first level encoded stream; a third mode wherein the decoder is to use a set of values within the quantization matrix for the second level encoded stream; and a fourth mode wherein two quantization matrices are signalled for each of the first level encoded stream and the second level encoded stream.
Claim 7
wherein the first and second set of parameters comprise signalling to indicate that a default set of one or more quantization matrices are to be used at the decoder.
Claim 8:
combining at least the first level encoded stream and the second level encoded stream into a combined encoded stream; and transmitting the combined encoded stream to the decoder for use in reconstructing the input video together with a received base encoded stream.
Claim 9:
wherein the combined encoded stream comprises the base encoded stream.
Claim 10:
wherein applying the quantization operation comprises quantizing coefficients using a linear quantizer, wherein the linear quantizer uses a dead zone of variable size.
Claim 11:
wherein the quantization operation further comprises using a quantization offset.
Claim 12:
wherein the quantization offset is selectively signalled to the decoder.
Claim 13:
comprising adapting the distribution used in the quantization step.
Claim 14:
wherein adapting the quantization is predetermined and/or selectively applied based on analysis of any one or more of: the input video, a downsampled video, a reconstructed video, and an upsampled video.
Claim 15:
wherein adapting the quantization is selectively applied based on a predetermined set of rules and/or determinatively applied based on an analysis or feedback of decoding performance.
Claim 16:
wherein encoding residuals comprises applying the encoding to blocks of residuals that are associated with a frame of the input video, wherein each block is encoded without using image data from another block in the frame such that each block is encodable in parallel, wherein each element location in the block has a respective quantization parameter for varying the step-width.
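For reference, the two-level encoding pipeline recited in claim 2 and its dependent claims can be sketched in simplified form below. This is a hypothetical Python illustration only: the nearest-neighbour resampling, the coarse stand-in "base codec", and the step-width values are assumptions for illustration, not the claimed or patented implementation, and the transform and level-1 correction steps are omitted.

```python
def downsample(frame):
    # Halve resolution by keeping every other sample in each dimension.
    return [row[::2] for row in frame[::2]]

def upsample(frame):
    # Nearest-neighbour upsampling back to the original resolution.
    out = []
    for row in frame:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def base_codec(frame):
    # Stand-in lossy base encode + decode: coarse quantization to step 8.
    return [[(v // 8) * 8 for v in row] for row in frame]

def residuals(target, reference):
    # Element-wise difference between two frames of equal size.
    return [[t - r for t, r in zip(tr, rr)] for tr, rr in zip(target, reference)]

def quantize(coeffs, step_widths):
    # Varying step-width per coefficient position; the step-widths stand in
    # for the "set of parameters" that would be signalled to the decoder.
    return [[v // step_widths[i % len(step_widths)] for i, v in enumerate(row)]
            for row in coeffs]

def encode(frame):
    down = downsample(frame)               # downsampled video
    recon = base_codec(down)               # base encode + base decode
    first = residuals(down, recon)         # first set of residuals
    up = upsample(recon)                   # (level-1 correction omitted)
    second = residuals(frame, up)          # second set of residuals
    steps = [1, 2]                         # illustrative signalled step-widths
    return quantize(first, steps), quantize(second, steps), steps
```

The sketch shows only the data flow common to the conflicting claims: one residual stream at the downsampled resolution and one at the input resolution, each quantized with location-dependent step-widths.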
7. Claim 17 and its dependent claims 18-22 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 15-19 of US Patent 12160601, as indicated below.
For claim 17 and its dependent claims 18-22, although the conflicting claims are not identical, both are directed to a method of decoding an encoded stream into a reconstructed output video. As indicated in the table below, each claimed limitation of claim 17 and its dependent claims 18-22 of the current application is anticipated by the corresponding limitations of claims 15-19 of the reference patent.
US 12160601
Current Application
Claim 15:
A method of generating a reconstructed output video, the method comprising:
receiving a first base encoded stream;
instructing a decoding operation on the first base encoded stream using a base decoder to generate a first output video;
receiving a first level encoded stream and a second level encoded stream;
decoding the first level encoded stream and the second level encoded stream to obtain a first set of residuals and a second set of residuals;
and, combining the first set of residuals with the first output video to generate a reconstructed video, and combining the second set of residuals with an upsampled version of the reconstructed video to generate a reconstruction of an original resolution input video,
wherein decoding the first level encoded stream and the second level encoded stream comprises:
decoding a first set of quantized coefficients from the first level encoded stream and a second set of quantized coefficients from the second level encoded stream;
obtaining a first set of parameters and a second set of parameters respectively indicating how to dequantize the first set of quantized coefficients and the second set of quantized coefficients;
and respectively dequantizing the first set of quantized coefficients and the second set of quantized coefficients using the first set of parameters and the second set of parameters, wherein different ones of the first set of quantized coefficients and the second set of quantized coefficients are dequantized using respective dequantization parameters, and wherein obtaining the first set of parameters comprises:
obtaining a quantization mode parameter that is signalled with the first level encoded stream;
and responsive to a first value of the quantization mode parameter, using a default quantization matrix as the first set of parameters
Claim 15’s limitation:
and wherein obtaining the first set of parameters comprises: obtaining a quantization mode parameter that is signalled with the first level encoded stream; and responsive to a first value of the quantization mode parameter, using a default quantization matrix as the first set of parameters
Claim 16:
responsive to other values of the quantization mode parameter, obtaining a quantization matrix that is signalled with the first level encoded stream and using quantization matrix as the first set of parameters.
Claim 17:
prior to dequantizing the first set of quantized coefficients, applying an entropy decoding operation to the first level encoded stream; and after dequantizing the first set of quantized coefficients, applying an inverse transform operation to generate the first set of residuals.
Claim 15’s limitation:
receiving a first level encoded stream and a second level encoded stream; decoding the first level encoded stream and the second level encoded stream to obtain a first set of residuals and a second set of residuals; and, combining the first set of residuals with the first output video to generate a reconstructed video, and combining the second set of residuals with an upsampled version of the reconstructed video to generate a reconstruction of an original resolution input video….obtaining a first set of parameters and a second set of parameters respectively indicating how to dequantize the first set of quantized coefficients and the second set of quantized coefficients; and respectively dequantizing the first set of quantized coefficients and the second set of quantized coefficients using the first set of parameters and the second set of parameters, wherein different ones of the first set of quantized coefficients and the second set of quantized coefficients are dequantized using respective dequantization parameters
Claim 18:
obtaining a quantization matrix that is signalled with one or more of the first and second level encoded streams, and dequantizing comprises, for a plurality of quantized coefficient elements within a block of quantized coefficients for a frame of video, a block corresponding to a n by n grid of picture elements, a frame comprising multiple blocks that cover the spatial area associated with the frame: obtaining a quantization parameter from the quantization matrix based on a location of a given quantized coefficient element; and using the quantization parameter to dequantize the given quantized coefficient element.
Claim 19:
wherein dequantizing comprises using a linear dequantization operation and applying a non-centered de-quantization offset
Claim 17
A method of decoding an encoded stream into a reconstructed output video, the method comprising:
receiving a first base encoded stream;
instructing a decoding operation on the first base encoded stream using a base decoder to generate a first output video;
receiving a first level encoded stream;
decoding the first level encoded stream to obtain a first set of residuals;
and, combining the first set of residuals with the first output video to generate a reconstructed video,
wherein decoding the first level encoded stream comprises:
decoding a first set of quantized coefficients from the first level encoded stream;
obtaining a first set of parameters indicating how to dequantize the first set of quantized coefficients;
and dequantizing the first set of quantized coefficients using the first set of parameters, wherein different ones of the first set of quantized coefficients are dequantized using respective dequantization parameters
Claim 18
wherein obtaining the first set of parameters comprises: obtaining a quantization mode parameter that is signalled with the first level encoded stream; responsive to a first value of the quantization mode parameter, using a default quantization matrix as the first set of parameters; and responsive to other values of the quantization mode parameter, obtaining a quantization matrix that is signalled with the first level encoded stream and using quantization matrix as the first set of parameters
Claim 19
prior to dequantizing the first set of quantized coefficients, applying an entropy decoding operation to the first level encoded stream; and after dequantizing the first set of quantized coefficients, applying an inverse transform operation to generate the first set of residuals.
Claim 20
receiving a second level encoded stream; decoding the second level encoded stream to obtain a second set of residuals; and combining the second set of residuals with an upsampled version of the reconstructed video to generate a reconstruction of an original resolution input video, wherein decoding the second level encoded stream comprises: decoding a second set of quantized coefficients from the second level encoded stream; obtaining a second set of parameters indicating how to dequantize the second set of quantized coefficients; and dequantizing the second set of quantized coefficients using the second set of parameters, wherein different ones of the second set of quantized coefficients are dequantized using respective dequantization parameters.
Claim 21
obtaining a quantization matrix that is signalled with one or more of the first and second level encoded streams, and dequantizing comprises, for a plurality of quantized coefficient elements within a block of quantized coefficients for a frame of video, a block corresponding to a n by n grid of picture elements, a frame comprising multiple blocks that cover the spatial area associated with the frame: obtaining a quantization parameter from the quantization matrix based on a location of a given quantized coefficient element; and using the quantization parameter to dequantize the given quantized coefficient element.
Claim 22:
wherein dequantizing comprises using a linear dequantization operation and applying a non-centred de-quantization offset
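For reference, the location-based dequantization recited in claims 18 and 21, in which a quantization parameter is obtained from a signalled quantization matrix based on the location of each quantized coefficient element within an n-by-n block, can be sketched as follows. This is a hypothetical Python sketch with assumed matrix values, not the patented implementation.

```python
def dequantize_block(block, quant_matrix):
    # Each element location (r, c) in the n-by-n block of quantized
    # coefficients has its own quantization parameter, read from the
    # quantization matrix signalled with the encoded stream.
    n = len(block)
    return [[block[r][c] * quant_matrix[r][c] for c in range(n)]
            for r in range(n)]
```

For example, with an assumed 2-by-2 matrix [[1, 2], [2, 4]], the quantized block [[3, 3], [3, 3]] dequantizes to [[3, 6], [6, 12]]: coefficients at different locations are dequantized with different parameters, as the claims recite.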
8. Claim 23 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of US Patent 12160601 for reasons similar to those given for claim 2.
9. Claim 24 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 15 of US Patent 12160601 for reasons similar to those given for claim 17.
10. Claim 25 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of US Patent 12160601 for reasons similar to those given for claim 2.
Claim Rejections - 35 USC § 112(d)
11. The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
12. Claim 23 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 23 depends on independent claim 1 but fails to further limit the subject matter of the claim upon which it depends. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
13. Claim 24 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 24 depends on independent claim 17 but fails to further limit the subject matter of the claim upon which it depends. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
14. Claim 25 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 25 depends on independent claim 1 but fails to further limit the subject matter of the claim upon which it depends. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 112 (b)
15. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
16. Claim 23 depends on claim 1, which is a cancelled method claim. Therefore, it is not clear what claim 23 is directed to, and claim 23 is indefinite.
17. Claim 24 depends on claim 17, which is a method claim; however, claim 24 as a whole is directed to a decoder. Therefore, it is not clear whether claim 24 claims a decoder or a method.
18. Claim 25 depends on claim 1, which is a cancelled method claim. Therefore, it is not clear what claim 25 is directed to, and claim 25 is indefinite.
21. Claim 17 and its dependent claims 18-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant regards as the invention.
For claim 17, it recites the limitation “an encoded stream” in “A method of decoding an encoded stream into a reconstructed output video”. However, two encoded streams are received, namely “receiving a first base encoded stream” and “receiving a first level encoded stream”; the exact meaning of “an encoded stream” is unclear because it is unclear which encoded stream it refers to. Thus, the scope of claim 17 and its dependent claims 18-22 is unclear.
Claim Rejections - 35 USC § 103
22. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
23. Claims 2, 8-9, 13-15, 17, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over TANAKA et al. (EP 2822275) in view of ROSSATO et al. (CA 2873499).
Regarding claim 2, TANAKA discloses a method of encoding an input video into a plurality of encoded streams (fig. 50), wherein the encoded streams may be combined to reconstruct the input video (as shown in fig. 50, 723 multiplexing unit), the method comprising:
receiving an input video (fig. 50);
downsampling the input video to create a downsampled video (fig. 50, based layer; downsampling is shown in fig. 17; paragraph 0040, … Fig. 17 is a diagram illustrating an example of downsampling);
instructing an encoding of the downsampled video using a base encoder to create a base encoded stream (fig. 50, 721);
instructing a decoding of the base encoded stream using a base decoder to generate a reconstructed video (fig. 51, component 732 is a base decoder);
comparing the reconstructed video to the downsampled video to create a first set of residuals (fig. 16, component 162 creates a first set of residuals; paragraph 0139, … difference matrix generation unit 162 generates a difference matrix (residual matrix) that is a difference between the prediction matrix supplied from the prediction unit 161 (the prediction matrix generation unit 172) and the scaling list input to the matrix processing unit 150);
and, process the first set of residuals to create a first level encoded stream (fig. 16, component 164), including: applying a transform to the first set of residuals to create a first set of coefficients (fig. 16, component 181; paragraph 0140, … The prediction matrix size transformation unit 181 transforms (hereinafter also referred to as converts) the size of the prediction matrix supplied from the prediction matrix generation unit 172);
applying a quantization operation to the first set of coefficients to create a first set of quantized coefficients (fig. 16, component 183; paragraph 0145, … The quantization unit 183 quantizes the difference matrix supplied from the computation unit 182. The quantization unit 183 supplies the quantized difference matrix to the difference matrix size transformation unit 163);
and applying an encoding operation to the first set of quantized coefficients (fig. 16, component 164; paragraph 0152, … The entropy encoding unit 164 encodes the difference matrix (quantized data) supplied from the difference matrix size transformation unit 163), wherein applying the quantization operation comprises:
adapting the quantization based on the first set of coefficients to be quantized, including varying a step-width used for different ones of the first set of coefficients (paragraph 0002, … allow quantization of image data with quantization step sizes that differ from one component of orthogonal transform coefficient to another. The quantization step size for each component of orthogonal transform coefficient may be set based on a reference step value and a quantization matrix), wherein a first set of parameters derived from the adapting is signalled to a decoder to enable dequantization of the first set of quantized coefficients (paragraph 0128, … the quantization unit 130 switches the quantization step size in accordance with the rate control signal supplied from the rate control unit 18 to change the bit rate of the quantized data to be output; in which the rate control signal is a first set of parameters derived from the adapting and is signalled to a decoder; paragraph 0011, … a dequantization unit configured to dequantize quantized data obtained by decoding encoded data, using an up-converted quantization matrix in which a coefficient located at the beginning of the up-converted quantization matrix set by the upconversion unit has been replaced with the replacement coefficient).
It is noted that TANAKA does not explicitly disclose encoding the first set of residuals to create a first level encoded stream.
ROSSATO teaches encoding the first set of residuals to create a first level encoded stream (as shown in fig. 5, the residual signal from 510 is encoded in 520).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to incorporate encoding the first set of residuals to create a first level encoded stream as a modification to the method, for the benefit of achieving high compression ratios (see page 1).
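For orientation only, the encoding pipeline recited in claim 2 can be sketched as follows. This is purely an illustrative sketch, not TANAKA's or ROSSATO's actual implementation: the 2x decimation, the coarse-quantization stand-in for the base encoder/decoder round-trip, the identity "transform," and all names are hypothetical assumptions introduced for illustration.

```python
def encode_level_one(frame, step_widths):
    """Illustrative sketch of the claimed encoding pipeline (hypothetical names)."""
    # Downsample the input frame by taking every other sample in each dimension.
    down = [row[::2] for row in frame[::2]]
    # Stand-in for the base encode/decode round-trip: a lossy coarse quantization.
    base_recon = [[(v // 8) * 8 for v in row] for row in down]
    # First set of residuals: downsampled frame minus the base reconstruction.
    residuals = [[d - b for d, b in zip(dr, br)]
                 for dr, br in zip(down, base_recon)]
    # "Transform" is identity in this sketch; a real codec would apply e.g. a DCT.
    coeffs = residuals
    # Adaptive quantization: a different step-width per coefficient position,
    # mirroring the claimed varying step-width. The step-widths themselves are
    # the parameters signalled to the decoder to enable dequantization.
    quantized = [[round(c / step_widths[i][j]) for j, c in enumerate(row)]
                 for i, row in enumerate(coeffs)]
    return quantized, step_widths

# Hypothetical usage: an 8x8 ramp frame, with a finer step for the top-left position.
frame = [[r * 8 + c for c in range(8)] for r in range(8)]
steps = [[2.0] * 4 for _ in range(4)]
steps[0][0] = 1.0
q, signalled = encode_level_one(frame, steps)
```

The per-position step-width table here simply models the cited teaching (paragraph 0002) that step sizes may differ from one coefficient to another.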
Regarding claim 17, TANAKA discloses a method of decoding an encoded stream into a reconstructed output video (fig. 51), the method comprising:
receiving a first base encoded stream (as shown in fig. 51, encoded base layer image stream);
instructing a decoding operation on the first base encoded stream using a base decoder to generate a first output video (fig. 51, component 732 is a base decoder);
receiving a first level encoded stream (fig. 24);
decoding the first level encoded stream to obtain a first set of residuals (fig. 24, component 562; paragraph 0226, … if the residual data is a remaining portion of a 135-degree symmetric matrix from which the data (matrix elements) of the overlapping symmetric part has been removed, the inverse overlap determination unit 553 restores the data of the symmetric part … supplies the difference matrix restored in the way described above to the scaling list restoration unit 534 (the difference matrix size transformation unit 562));
and processing the first set of residuals with the first output video to generate a reconstructed video (as shown in fig. 24, components 564 and 535; paragraph 0238, … the output unit 535 supplies the scaling list for the current region supplied from the scaling list restoration unit 534 (the computation unit 564) to the dequantization unit 440),
wherein decoding the first level encoded stream comprises:
decoding a first set of quantized coefficients from the first level encoded stream (fig. 24, component 563; paragraph 0011, …a dequantization unit configured to dequantize quantized data obtained by decoding encoded data, using an up-converted quantization matrix in which a coefficient located at the beginning of the up-converted quantization matrix set by the upconversion unit has been replaced with the replacement coefficient);
obtaining a first set of parameters indicating how to dequantize the first set of quantized coefficients (paragraph 0128, … the quantization unit 130 switches the quantization step size in accordance with the rate control signal supplied from the rate control unit 18 to change the bit rate of the quantized data to be output; in which the rate control signal is a first set of parameters derived from the adapting and is signalled to a decoder; paragraph 0011, … a dequantization unit configured to dequantize quantized data obtained by decoding encoded data, using an up-converted quantization matrix in which a coefficient located at the beginning of the up-converted quantization matrix set by the upconversion unit has been replaced with the replacement coefficient);
and dequantizing the first set of quantized coefficients (fig. 24, component 563, dequantization unit) using the first set of parameters, wherein different ones of the first set of quantized coefficients are dequantized using respective dequantization parameters (paragraph 0128, … the quantization unit 130 switches the quantization step size in accordance with the rate control signal supplied from the rate control unit 18 to change the bit rate of the quantized data to be output).
It is noted that TANAKA does not explicitly disclose combining the first set of residuals with the first output video to generate a reconstructed video.
ROSSATO teaches combining the first set of residuals with the first output video to generate a reconstructed video (as shown in fig. 5, the residual signal from 545 is combined to generate a reconstructed video in 550).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to incorporate combining the first set of residuals with the first output video to generate a reconstructed video as a modification to the method, for the benefit of achieving high compression ratios (see page 1).
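For orientation only, the decoder-side steps of claim 17 can be sketched as follows. This is an illustrative sketch under stated assumptions, not TANAKA's or ROSSATO's actual implementation: the per-position step-width table, the identity inverse "transform," and all names are hypothetical.

```python
def decode_level_one(quantized, step_widths, base_output):
    """Illustrative decoder-side sketch (hypothetical names and layout)."""
    # Dequantize each coefficient with its own signalled step-width, mirroring
    # "different ones ... dequantized using respective dequantization parameters".
    residuals = [[int(q * step_widths[i][j]) for j, q in enumerate(row)]
                 for i, row in enumerate(quantized)]
    # Inverse "transform" is identity in this sketch.
    # Combine the residuals with the base decoder's output (per ROSSATO) to
    # generate the reconstructed video.
    return [[b + r for b, r in zip(br, rr)]
            for br, rr in zip(base_output, residuals)]

# Hypothetical usage: quantized indices, signalled step-widths, and a flat base output.
quantized = [[0, 1, 2, 3] for _ in range(3)]
steps = [[2.0] * 4 for _ in range(3)]
base = [[16 * i] * 4 for i in range(3)]
recon = decode_level_one(quantized, steps, base)
```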
Regarding claim 24, the combination of TANAKA and ROSSATO discloses the limitations recited in claim 17 as discussed above. In addition, TANAKA also discloses a decoder for decoding an encoded stream into a reconstructed output video (fig. 51, 732).
Regarding claim 8, the combination of TANAKA and ROSSATO discloses the limitations recited in claim 2 as discussed above. In addition, TANAKA also discloses combining at least the first level encoded stream (fig. 50, encoded base-layer image stream) and the second level encoded stream (fig. 50, encoded non-base-layer image stream) into a combined encoded stream (fig. 50, encoded layered image stream after 723); and transmitting the combined encoded stream to the decoder (fig. 51) for use in reconstructing the input video together with a received base encoded stream (fig. 51, encoded base-layer image stream).
Regarding claim 9, the combination of TANAKA and ROSSATO discloses the limitations recited in claim 8 as discussed above. In addition, TANAKA also discloses that the combined encoded stream comprises the base encoded stream (as shown in fig. 50).
Regarding claim 13, the combination of TANAKA and ROSSATO discloses the limitations recited in claim 2 as discussed above. In addition, TANAKA also discloses that adapting the distribution used in the quantization step (paragraph 0128, … the quantization unit 130 switches the quantization step size in accordance with the rate control signal supplied from the rate control unit 18 to change the bit rate of the quantized data to be output).
Regarding claim 14, the combination of TANAKA and ROSSATO discloses the limitations recited in claim 2 as discussed above. In addition, TANAKA also discloses that adapting the quantization is predetermined and/or selectively applied based on analysis of any one or more of: the input video, a downsampled video, a reconstructed video, and an upsampled video (paragraph 0128, … the quantization unit 130 switches the quantization step size in accordance with the rate control signal supplied from the rate control unit 18 to change the bit rate of the quantized data to be output; which is based on the input video).
Regarding claim 15, the combination of TANAKA and ROSSATO discloses the limitations recited in claim 2 as discussed above. In addition, TANAKA also discloses that adapting the quantization is selectively applied based on a predetermined set of rules and/or determinatively applied based on an analysis or feedback of decoding performance (paragraph 0128, … the quantization unit 130 switches the quantization step size in accordance with the rate control signal supplied from the rate control unit 18 to change the bit rate of the quantized data to be output; which is based on a predetermined set of rules).
24. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over TANAKA et al. (EP 2822275) in view of ROSSATO et al. (CA 2873499), and further in view of LEI et al. (JP 2001298366).
Regarding claim 10, the combination of TANAKA and ROSSATO discloses the limitations recited in claim 2 as discussed above.
It is noted that TANAKA does not explicitly disclose quantizing coefficients using a linear quantizer, wherein the linear quantizer uses a dead zone of variable size.
LEI teaches quantizing coefficients using a linear quantizer (page 4, … performing linear quantization), wherein the linear quantizer uses a dead zone of variable size (as shown on page 7, a dead zone and an output index at the center of the step, etc. can be used for video encoding).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to incorporate quantizing coefficients using a linear quantizer, wherein the linear quantizer uses a dead zone of variable size, as a modification to the method, for the benefit that the weighting factors can be stored in the quantization table (page 7).
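For orientation only, a dead-zone linear quantizer of the kind recited in claim 10 can be sketched as follows. This is a generic illustration of the technique, not LEI's actual implementation; the function name, the specific rounding convention, and the choice of dead-zone width are all hypothetical.

```python
def dead_zone_quantize(coeff, step, dead_zone):
    """Linear (uniform) quantizer with a variable-size dead zone (illustrative).

    Coefficients whose magnitude falls inside the dead zone map to index 0;
    outside the dead zone, quantization is uniform with the given step-width.
    Varying `dead_zone` varies the size of the zero bin independently of `step`.
    """
    if abs(coeff) < dead_zone:
        return 0
    sign = 1 if coeff >= 0 else -1
    # Uniform bins of width `step` starting at the edge of the dead zone.
    return sign * int((abs(coeff) - dead_zone) // step + 1)
```

Enlarging the dead zone drives more small coefficients to zero, which typically improves entropy-coding efficiency at some cost in fidelity.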
25. Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over TANAKA et al. (EP 2822275) in view of ROSSATO et al. (CA 2873499), and further in view of Ugur et al. (US 20070025441).
Regarding claim 11, the combination of TANAKA and ROSSATO discloses the limitations recited in claim 2 as discussed above.
It is noted that TANAKA does not explicitly disclose using a quantization offset.
Ugur teaches using a quantization offset (paragraph 0186, … the quantization parameter QP.sub.IDR may be obtainable from a minimum value, a maximum value, or one or more last quantization parameters QP.sub.IDR offset by one or more predefined offset values).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to incorporate using a quantization offset as a modification to the method, for the benefit of QP calculation (paragraph 0185).
Regarding claim 12, the combination of TANAKA, ROSSATO and Ugur discloses the limitations recited in claim 11 as discussed above. In addition, Ugur also discloses that the quantization offset is selectively signalled to the decoder (paragraph 0186, … the quantization parameter QP.sub.IDR may be obtainable from a minimum value, a maximum value, or one or more last quantization parameters QP.sub.IDR offset by one or more predefined offset values; which means it is selectively signalled to the decoder).
The reason for the combination is the same as in the rejection of claim 10.
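For orientation only, the quantization-offset teaching cited for claims 11 and 12 can be sketched as follows. This is an illustrative sketch, not Ugur's actual implementation; the function name, the clamping range, and the fallback behavior are hypothetical assumptions, modeling only the cited idea that a QP may be obtained from one or more last QP values offset by a predefined value.

```python
def derive_qp(last_qps, offset, qp_min=0, qp_max=51):
    """Illustrative QP derivation using a predefined offset (hypothetical names).

    Starts from the most recent quantization parameter (or the minimum value
    if none is available), applies a predefined offset, and clamps the result
    to a valid range.
    """
    base = last_qps[-1] if last_qps else qp_min
    return max(qp_min, min(qp_max, base + offset))
```

If the offset is signalled in the bitstream rather than fixed, the decoder can reproduce the same QP derivation, which corresponds to the selective signalling discussed for claim 12.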
Conclusion
26. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form 892.
27. Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAIHAN JIANG whose telephone number is (571)272-1399. The examiner can normally be reached on a flexible schedule.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath Perungavoor can be reached on (571)272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-270-0655.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZAIHAN JIANG/Primary Examiner, Art Unit 2488