Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 and 3-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Denoual et al., WO 2024/217942 A1 ("Denoual").
Regarding claim 1, Denoual discloses: a video streaming method comprising:
reconstructing a current picture based on coded video received in a first set of network data packets (See page 31, lines 7-9, “When a reader supports the NNPFA sample group, it should perform, for the mapped samples, the insertion of the prefix SEI NAL units corresponding to NNPFA SEI message as a part of the bitstream reconstruction.”);
receiving a second set of network data packets (See Metabox 504 in figure 5, as disclosed in lines 3-5 of page 35.);
receiving a particular network data packet comprising a group identifier identifying a group comprising the second set of network data packets (See lines 14-16 of page 22, “It is proposed a sample group to convey information about neural network-based post filter characteristics (NNPFC sample group) and a sample group to convey information about neural network-based post filter activation (NNPFA sample group).”); and
processing and outputting the reconstructed current picture by using a set of configuration data transmitted by the group identified by the group identifier (See page 31, lines 7-9, “When a reader supports the NNPFA sample group, it should perform, for the mapped samples, the insertion of the prefix SEI NAL units corresponding to NNPFA SEI message as a part of the bitstream reconstruction.”).
Regarding claim 3, Denoual discloses: the video streaming method of claim 1, wherein the particular network data packet associates a processing characteristic with the group identifier (See the NNPFC SEI message: “NNPFA for neural-network post-filter activation and NNPFC for neural-network post-filter characteristics,” page 2, lines 24-25.).
Regarding claim 4, Denoual discloses: the video streaming method of claim 1, wherein the particular network data packet associates a processing persistence (See nnpfa_persistence_flag in the table on page 22.) and a processing purpose with the group identifier (See the post_filter_purpose flag in lines 8-14 of page 19.).
Regarding claim 5, Denoual discloses: the video streaming method of claim 1, wherein the particular network data packet associates a processing grouping type with the group identifier (See figure 4a and lines 37-41 on page 17. See also page 18: “class PostFilterSampleGroupEntry extends VisualSampleGroupEntry ('pfif')”.).
Regarding claim 6, Denoual discloses: the video streaming method of claim 1, wherein the particular network data packet activates or de-activates a processing function that uses the configuration data in the second set of network data packets (See NNPFA SEI message handling at the top of page 22.).
Regarding claim 7, Denoual discloses: the video streaming method of claim 6, wherein the processing function is a neural network post filter to be applied to the current picture and the generated configuration data is for configuring the neural network post filter (See page 11, lines 28-30, “As an alternative to SEI messages in the bitstream, postfilter information (like 105-2) may be provided by other means (e.g., as a separate bitstream or as configuration information)”; see also page 13, lines 28-30: “the parser may append in 304 the filter description to the reconstructed bitstream for the track. The filter configuration may be added in 304 if it defines a new configuration.”).
Regarding claim 8, Denoual discloses: the video streaming method of claim 1, wherein the group identifier is a first group identifier for a first group comprising one or more network data packets and at least a second group associated with a second group identifier, wherein the second group comprises one or more network data packets (See the SampleToGroupBox, as described in page 39, lines 8-10.).
Regarding claim 9, Denoual discloses: the video streaming method of claim 1, wherein the group is one of a plurality of groups that are associated with the current picture, the plurality of groups comprising multiple sets of network data packets for supporting respective multiple processing functions (See page 16, lines 11-16, disclosing that multiple filters with respective identifiers (e.g., a luminance filter, a chrominance filter, an indication of a target usage, etc.) may be associated with a single sample.).
Regarding claim 10, Denoual discloses: the video streaming method of claim 9, wherein the multiple processing functions comprise multiple neural network post filters to be applied to the current picture (See page 51, lines 15-16, “the bitstream provides identifier for a group of filters to be applied to a same set of samples.”).
Regarding claim 11, Denoual discloses: the video streaming method of claim 1, wherein each network data packet in the first set of data packets is a video coding layer (VCL) network abstraction layer (NAL) unit, and each network data packet in the second set of data packets is a supplemental enhancement information (SEI) message (See page 40, lines 3-6: “the item_IDs for the metadata items describing post-filter information, their value may differ from the nnpfc_id parameter conveyed in the NNPFC SEI message or from a postfilter identifier.”).
Video streaming method claim 12 is directed to a transmitting/encapsulation method corresponding to the decoding method of claim 1. Claim 12 is therefore rejected for the same reasons of anticipation as given above for claim 1.
Electronic apparatus claims 13 and 14 are directed to an electronic apparatus performing the decoding and transmitting/encapsulation steps of method claims 1 and 12, respectively. Therefore, apparatus claims 13 and 14 are rejected for the same reasons of anticipation as given above for claims 1 and 12, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Denoual in view of Wang, US 2026/0019642 A1 ("Wang").
Regarding claim 2, Denoual discloses the limitations of claim 1, from which claim 2 depends. Denoual does not disclose: the video streaming method of claim 1, wherein the particular network data packet assigns a processing order to the network data packets in the second set of network data packets.
However, Wang discloses these limitations in an analogous art directed to specifying a processing order for neural network post filters. See step 4202 in figure 4, described at [0349], which reads: “determine to signal a processing order or a preferred processing order of different post-processing filters, including zero or more neural network post-filters (NNPFs) and zero or more non-NNPF post-processing filters.”
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate a processing order indication, as disclosed in Wang, in order to indicate the preferred processing order for different post-processing filters, as the order of filter application can be non-commutative with respect to final image quality. See Wang [0221]. The combination would have merely entailed combining prior art elements according to known methods to yield predictable results. See MPEP 2143(I)(A).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE M LOTFI whose telephone number is (571)272-8762. The examiner can normally be reached 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KYLE M LOTFI/ Examiner, Art Unit 2425