DETAILED ACTION
This action is in response to the application filed on 1/16/2025.
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Mapping Notation
In this Office action, the following notations are used to refer to the paragraph numbers, or the column and line numbers, of portions of the cited references.
[0005] (Paragraph number [0005])
C5 (Column 5)
Pa5 (Page 5)
S5 (Section 5)
Furthermore, unless necessary to distinguish from other references in this action, “et al.” will be omitted when referring to the reference.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6, 9-10 and 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by RACAPE et al. (US 20260059128 A1).
1. A method of video decoding, comprising:
receiving a coded video bitstream comprising coded information of a video using multi-layer coding, the coded information comprising first coded information corresponding to first coded pictures of a first layer serving for a machine task, and
“[0036] In addition, like other scalable video compression methods, the proposed scheme is not limited to only one layer for machine tasks and one layer for human consumption, it can contain several layers optimized for different tasks and several layers for human consumption with different quality levels, spatial resolutions, temporal resolutions, etc.”
second coded information corresponding to second coded pictures of a second layer serving for a human consumption;
“[0036] In addition, like other scalable video compression methods, the proposed scheme is not limited to only one layer for machine tasks and one layer for human consumption, it can contain several layers optimized for different tasks and several layers for human consumption with different quality levels, spatial resolutions, temporal resolutions, etc.”
determining a consumption type from at least the machine task and the human consumption; and
“[0032] In this application, we aim to optimize the compression of a bitstream including features dedicated for machine consumption as well as data for reconstructing the image or video frames for human viewing. For machine consumption only, NN-based codecs can be used to extract and compress the features for remote analysis.”
One of ordinary skill in the art understands that determining the consumption type is inherent, since the corresponding codec must be selected in order to decode the bitstream.
reconstructing, based on the consumption type, at least a reconstructed picture according to at least one of the first coded information and/or the second coded information.
“[0041] For reconstructing video frames, a synthesis module g.sub.s, is used to transform quantized latent tensor (Ŷ), to produce an image that can serve as predictor for the enhancement layer. However, compared to existing methods that aim at optimizing complex deep-autoencoders for both feature coding and image reconstruction simultaneously, it is proposed to re-use the compressed information from the base layer to synthesize frames that can be used as predictor for encoding an enhancement layer for human viewing…”
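For illustration only, the dispatch recited in claim 1 can be sketched as follows; all names and values are hypothetical and are not drawn from the RACAPE reference:

    # Minimal, self-contained sketch (hypothetical names, not the
    # reference's API): reconstruction dispatched on the determined
    # consumption type.
    from enum import Enum, auto

    class Consumption(Enum):
        MACHINE = auto()
        HUMAN = auto()

    def decode_base(first_info):
        # Stand-in for NN-based base-layer synthesis (cf. [0041]).
        return list(first_info)

    def decode_enhancement(second_info, predictor):
        # Stand-in for enhancement-layer decoding predicted from the
        # reconstructed base-layer picture.
        return [p + r for p, r in zip(predictor, second_info)]

    def reconstruct(kind, first_info, second_info):
        base = decode_base(first_info)
        if kind is Consumption.MACHINE:
            return base                                # first coded information only
        return decode_enhancement(second_info, base)   # human-viewing path

    print(reconstruct(Consumption.HUMAN, [10, 20], [1, -2]))   # [11, 18]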
2. The method of claim 1, wherein: when the consumption type is machine task, the reconstructing comprises: reconstructing the reconstructed picture according to the first coded information; and
“[0041] For reconstructing video frames, a synthesis module g.sub.s, is used to transform quantized latent tensor (Ŷ), to produce an image that can serve as predictor for the enhancement layer. However, compared to existing methods that aim at optimizing complex deep-autoencoders for both feature coding and image reconstruction simultaneously, it is proposed to re-use the compressed information from the base layer to synthesize frames that can be used as predictor for encoding an enhancement layer for human viewing…”
when the consumption type is the human consumption, the reconstructing comprises at least one of: reconstructing the reconstructed picture according to the second coded information;
“[0041] For reconstructing video frames, a synthesis module g.sub.s, is used to transform quantized latent tensor (Ŷ), to produce an image that can serve as predictor for the enhancement layer. However, compared to existing methods that aim at optimizing complex deep-autoencoders for both feature coding and image reconstruction simultaneously, it is proposed to re-use the compressed information from the base layer to synthesize frames that can be used as predictor for encoding an enhancement layer for human viewing…”
and/or reconstructing the reconstructed picture according to the first coded information and the second coded information.
“[0041] For reconstructing video frames, a synthesis module g.sub.s, is used to transform quantized latent tensor (Ŷ), to produce an image that can serve as predictor for the enhancement layer. However, compared to existing methods that aim at optimizing complex deep-autoencoders for both feature coding and image reconstruction simultaneously, it is proposed to re-use the compressed information from the base layer to synthesize frames that can be used as predictor for encoding an enhancement layer for human viewing…”
3. The method of claim 1, wherein the first coded information is coded using a subset of coding tools of a video codec, the subset of coding tools being associated with the machine task.
“[0035] In this application, it is proposed to design a hybrid framework with multiple types of outputs to support machine tasks and human consumption. Such framework is also called a scalable framework where a base layer uses NN-based methods to compress the content for computer vision machine tasks and at least one enhancement layer uses traditional scalable video compression methods to compress the content for human viewing. We call such a framework a scalable framework as a reconstructed image from the base layer can be used as a predictor for the enhancement layer.”
4. The method of claim 3, wherein the first coded information comprises at least one of:
coded information of only contents that are labeled as region of interest (ROI) in the video;
Recited in the alternative; mapping one of the recited alternatives (addressed below) suffices.
coded information with a truncated bit depth; coded information of residual samples that are scaled by one or more scaling values;
Recited in the alternative; see the mapping of the alternative below.
and/or coded information of a different spatial and/or temporal sampling ratio from an original ratio value of the video.
“[0036] In addition, like other scalable video compression methods, the proposed scheme is not limited to only one layer for machine tasks and one layer for human consumption, it can contain several layers optimized for different tasks and several layers for human consumption with different quality levels, spatial resolutions, temporal resolutions, etc.”
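For illustration only, two of the recited alternatives (residual scaling and bit-depth truncation) can be sketched with hypothetical values as follows:

    # Illustrative sketch of two recited alternatives (hypothetical values).
    def scale_residuals(residuals, scale):
        # Residual samples scaled by a scaling value.
        return [int(r * scale) for r in residuals]

    def truncate_bit_depth(samples, from_bits=10, to_bits=8):
        # Drop the (from_bits - to_bits) least-significant bits.
        return [s >> (from_bits - to_bits) for s in samples]

    print(scale_residuals([8, -6, 3], 0.5))      # [4, -3, 1]
    print(truncate_bit_depth([1023, 512, 4]))    # [255, 128, 1]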
5. The method of claim 3, wherein the reconstructing comprises: reconstructing the reconstructed picture from the first coded information using a neural network that is pre-trained according to machine learning.
“[0035] In this application, it is proposed to design a hybrid framework with multiple types of outputs to support machine tasks and human consumption. Such framework is also called a scalable framework where a base layer uses NN-based methods to compress the content for computer vision machine tasks and at least one enhancement layer uses traditional scalable video compression methods to compress the content for human viewing. We call such a framework a scalable framework as a reconstructed image from the base layer can be used as a predictor for the enhancement layer.”
6. The method of claim 1, wherein the reconstructing comprises: reconstructing the reconstructed picture from the first coded information and the second coded information.
“[0035] In this application, it is proposed to design a hybrid framework with multiple types of outputs to support machine tasks and human consumption. Such framework is also called a scalable framework where a base layer uses NN-based methods to compress the content for computer vision machine tasks and at least one enhancement layer uses traditional scalable video compression methods to compress the content for human viewing. We call such a framework a scalable framework as a reconstructed image from the base layer can be used as a predictor for the enhancement layer.”
9. The method of claim 6, wherein the reconstructing comprises:
determining a first sub-sample value of a sample in the reconstructed picture according to the first coded information;
“[0041] For reconstructing video frames, a synthesis module g.sub.s, is used to transform quantized latent tensor (Ŷ), to produce an image that can serve as predictor for the enhancement layer. However, compared to existing methods that aim at optimizing complex deep-autoencoders for both feature coding and image reconstruction simultaneously, it is proposed to re-use the compressed information from the base layer to synthesize frames that can be used as predictor for encoding an enhancement layer for human viewing…”
determining a second sub-sample value of the sample in the reconstructed picture according to the second coded information of the second layer; and
“[0041] For reconstructing video frames, a synthesis module g.sub.s, is used to transform quantized latent tensor (Ŷ), to produce an image that can serve as predictor for the enhancement layer. However, compared to existing methods that aim at optimizing complex deep-autoencoders for both feature coding and image reconstruction simultaneously, it is proposed to re-use the compressed information from the base layer to synthesize frames that can be used as predictor for encoding an enhancement layer for human viewing…”
reconstructing the sample according to an addition of the first sub-sample value and the second sub-sample value.
“[0041] For reconstructing video frames, a synthesis module g.sub.s, is used to transform quantized latent tensor (Ŷ), to produce an image that can serve as predictor for the enhancement layer. However, compared to existing methods that aim at optimizing complex deep-autoencoders for both feature coding and image reconstruction simultaneously, it is proposed to re-use the compressed information from the base layer to synthesize frames that can be used as predictor for encoding an enhancement layer for human viewing…”
It is understood by one of ordinary skill in the art that the prediction of the enhancement layer includes the addition of sub-sample values.
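For illustration only, the asserted addition of sub-sample values can be sketched as follows (hypothetical values; clipping to the sample range is assumed):

    # Sketch of the mapped reading of claim 9: the first sub-sample value
    # (base-layer predictor) plus the second sub-sample value (enhancement
    # residual), clipped to the sample range.
    def reconstruct_sample(base_pred, residual, bit_depth=8):
        value = base_pred + residual
        return max(0, min(value, (1 << bit_depth) - 1))

    print(reconstruct_sample(200, 30))   # 230
    print(reconstruct_sample(250, 30))   # 255 (clipped)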
10. The method of claim 1, wherein the reconstructing comprises:
reconstructing a first picture corresponding to a first coded picture in the first coded pictures of the first layer according to the first coded information; and
“[0041] For reconstructing video frames, a synthesis module g.sub.s, is used to transform quantized latent tensor (Ŷ), to produce an image that can serve as predictor for the enhancement layer. However, compared to existing methods that aim at optimizing complex deep-autoencoders for both feature coding and image reconstruction simultaneously, it is proposed to re-use the compressed information from the base layer to synthesize frames that can be used as predictor for encoding an enhancement layer for human viewing…”
reconstructing, using an inter-layer prediction, one or more samples of a second coded picture in the second layer based on at least a first sample in the first picture of the first layer.
“[0041] For reconstructing video frames, a synthesis module g.sub.s, is used to transform quantized latent tensor (Ŷ), to produce an image that can serve as predictor for the enhancement layer. However, compared to existing methods that aim at optimizing complex deep-autoencoders for both feature coding and image reconstruction simultaneously, it is proposed to re-use the compressed information from the base layer to synthesize frames that can be used as predictor for encoding an enhancement layer for human viewing…”
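For illustration only, inter-layer prediction from first-layer samples can be sketched as follows; the nearest-neighbor upsampling is an assumption for illustration, not a mapping of the reference:

    # Sketch of the mapped reading of claim 10 (hypothetical): a
    # reconstructed first-layer picture supplies predictors for
    # second-layer samples (1-D, nearest-neighbor, for brevity).
    def inter_layer_predict(base_picture, scale=2):
        # Each base-layer sample predicts `scale` enhancement samples.
        return [s for s in base_picture for _ in range(scale)]

    print(inter_layer_predict([10, 20, 30]))   # [10, 10, 20, 20, 30, 30]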
18. The method of claim 10, wherein the first coded information comprises coded information with a different truncated bit depth from the second coded information, and the method comprises: performing a reverse bit depth truncation on at least the first sample in the first picture of the first layer to generate at least a restored first sample; and reconstructing, using the inter-layer prediction, the one or more samples of the second coded picture in the second layer based on at least the restored first sample in the first picture.
“[0041] For reconstructing video frames, a synthesis module g.sub.s, is used to transform quantized latent tensor (Ŷ), to produce an image that can serve as predictor for the enhancement layer. However, compared to existing methods that aim at optimizing complex deep-autoencoders for both feature coding and image reconstruction simultaneously, it is proposed to re-use the compressed information from the base layer to synthesize frames that can be used as predictor for encoding an enhancement layer for human viewing…”
One of ordinary skill in the art understands that reverse truncation is performed on the base layer in the reference.
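For illustration only, such reverse truncation can be sketched as follows (assuming the truncation dropped low-order bits, so restoration is a left shift; values are hypothetical):

    # Sketch of the mapped reading of claim 18: restore a truncated
    # base-layer sample to the enhancement layer's bit depth before
    # inter-layer prediction.
    def reverse_truncation(sample, from_bits=8, to_bits=10):
        return sample << (to_bits - from_bits)

    print(reverse_truncation(255))   # 1020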
Regarding claims 19 and 20, they recite elements that are at least included in claim 1 above, but in a different claim form and/or as an encoding/decoding counterpart that is reciprocal. Therefore, the same rationale as for the rejection of claim 1 applies.
Regarding the processor, memory, and storage medium recited in the claims, see [00].
Allowable Subject Matter
Claims 7, 8, 11-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claims 7, 8 and 11-17, applicant has uniquely claimed distinct features which are not found in the prior art, either singularly or in an obvious combination, with respect to all the limitations of the claims, the distinct features being: generating a first block of the reconstructed picture from the first coded information when the first block belongs to a region of interest (ROI); and generating a second block of the reconstructed picture from the second coded information when the second block does not belong to the ROI.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Xiu et al. (US 11343519 B2) discloses relevant art related to the subject matter of the present invention.
A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. An extension of time may be obtained under 37 CFR 1.136(a). However, in no event, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAE N. NOH whose telephone number is (571) 270-0686. The examiner can normally be reached on Mon-Fri 8:30AM-5PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached on (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAE N NOH/
Primary Examiner
Art Unit 2481