DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 20 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kadono et al. (Pub. No. US 2004/0076237 A1).
Regarding claim 20, Kadono discloses One or more memory or storage devices having stored thereon a program ([0247] recording a program implementing the steps of … method to a floppy disk or other computer-readable data recording medium; [0251]; [0257] The software for … can be stored to any computer-readable data recording medium (such as a CD-ROM disc, floppy disk, or hard disk drive)).
See MPEP 2111.05(III): when determining the scope of the claims, “a bitstream of a video” is not given patentable weight because it is non-functional descriptive material. It is merely static data that imparts no function (unlike an executable computer program, which performs a function) and has no functional relationship with the intended computer system. Thus, the computer-readable data recording medium disclosed in Kadono meets claim 20.
Claims 1, 3-5, 13-14, 17-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by LI et al. (US 20220109890 A1).
Regarding claim 1. LI discloses A method for video processing ([0004] techniques that can be used by image, audio or video encoders and decoders), comprising:
performing a conversion between a current video unit of a video and a bitstream of the video ([0005] a conversion between visual media data and a bitstream of the visual media data), wherein the bitstream comprises a first indication being allowed to activate a target neural-network post-processing filter (NNPF) ([0205] indication of enabling/disabling the CNN filters; [0229] when the CNN filter is disabled, the indication of using the CNN filter is not present in the bitstream; [0033] the loop filter in image/video coding may be used as post-processing method which is out of encoding/decoding process; [0158] 2.9. Convolutional Neural Network-Based Loop Filters for Video Coding), the target NNPF being applied to a plurality of video units of the video ([0196] for a given video unit (e.g., a sequence/picture/subpicture/slice/tile/CTU/CTB/CU/PU/TU), the CNN filter may be applied; [0199] the proposed CNN-based filters may be applied to certain slice/picture types, certain temporal layers, or certain slices/picture; [0203] Whether and/or how to use CNN filters (denoted as CNN information) may be controlled at a video unit (e.g., sequence/picture/slice/tile/brick/subpicture/CTU/CTU row/one or multiple CUs or CTUs/CTBs) level).
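To illustrate the signaling mechanism mapped above, the following is a minimal decoder-side sketch in Python. The names (NnpfActivation, apply_nnpf, filter_id, persistence_flag) are hypothetical and are not drawn from the claims or from LI; the behavior is only an assumption consistent with the cited teaching that an indication in the bitstream enables a neural-network post-processing filter for a given video unit and, when persisting, for subsequent video units as well.

from dataclasses import dataclass

@dataclass
class NnpfActivation:
    filter_id: int          # identifies the target NNPF
    persistence_flag: bool  # True: applies to this and following units; False: current unit only

def apply_nnpf(decoded_units, activations_by_unit, filters):
    """Post-process decoded video units according to activation indications."""
    active = {}   # filter_id -> activation currently persisting
    output = []
    for index, unit in enumerate(decoded_units):
        for act in activations_by_unit.get(index, []):
            if act.persistence_flag:
                active[act.filter_id] = act        # persists for this and later units
            else:
                active.pop(act.filter_id, None)    # one-shot: current unit only
                unit = filters[act.filter_id](unit)
        for act in active.values():                # apply every persisting filter
            unit = filters[act.filter_id](unit)
        output.append(unit)
    return output

# Example: filter 0 is activated at unit 0 with persistence, so it also applies to unit 1.
units = ["picture0", "picture1"]
acts = {0: [NnpfActivation(filter_id=0, persistence_flag=True)]}
filters = {0: lambda u: u + "+nnpf0"}
assert apply_nnpf(units, acts, filters) == ["picture0+nnpf0", "picture1+nnpf0"]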
Regarding claim 3. LI discloses The method of claim 1, wherein the plurality of video units are in a same layer as the current video unit ([0199] the proposed CNN-based filters may be applied to certain temporal layers; [0217] the use of the CNN filter can be conditioned on temporal layer id; [0276] e. In one example, the selection of a set of CNN filters may depend on temporal layer identification (e.g., the Temporal id in the VVC specification); [0277] i. In one example, slices or pictures in different temporal layers may utilize different sets of CNN filter models), and the plurality of video units comprise one of the following:
a plurality of consecutive video units in an output order,
a plurality of consecutive video units in a decoding order, or
a plurality of video units with a same parameter in the output order ([0276] e. In one example, the selection of a set of CNN filters may depend on temporal layer identification (e.g., the Temporal id in the VVC specification); [0277] i. In one example, slices or pictures in different temporal layers may utilize different sets of CNN filter models (same temporal layer identification)).
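As an illustration of grouping video units by a shared parameter (here, the temporal layer id cited from LI [0276]-[0277]), the following Python sketch uses hypothetical names (select_filter_set, filter_sets) and is not quoted from the reference; it only reflects the cited idea that slices or pictures sharing a temporal layer id use the same set of CNN filter models.

def select_filter_set(temporal_id, filter_sets):
    """Pick the CNN filter model set for a picture/slice from its temporal layer id."""
    # filter_sets: mapping from temporal layer id to a list of filter models;
    # fall back to the closest lower defined layer if the id is not listed explicitly.
    if temporal_id in filter_sets:
        return filter_sets[temporal_id]
    return filter_sets[max(k for k in filter_sets if k <= temporal_id)]

# Example: pictures in different temporal layers use different model sets.
filter_sets = {0: ["model_intra"], 1: ["model_inter_low"], 3: ["model_inter_high"]}
assert select_filter_set(1, filter_sets) == ["model_inter_low"]
assert select_filter_set(2, filter_sets) == ["model_inter_low"]  # falls back to layer 1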
Regarding claim 4. LI discloses The method of claim 1, wherein a video unit is a picture or a slice ([0203] Whether and/or how to use CNN filters (denoted as CNN information) may be controlled at a video unit (e.g., sequence/picture/slice/tile/brick/subpicture/CTU/CTU row/one or multiple CUs or CTUs/CTBs) level).
Regarding claim 5. LI discloses The method of claim 1, wherein the first indication is comprised in a neural-network post-filter activation (NNPFA) supplemental enhancement information (SEI) message in the bitstream ([0264] The CNN information may be signaled as a SEI message).
Regarding claim 13. LI discloses The method of claim 1, wherein the first indication equal to a third value indicates that the target NNPF is used for post-processing filtering for the current video unit only ([0262] an indicator (e.g., a flag) in the slice header is signaled to indicate whether CNN filter is activated for current slice).
Regarding claim 14. LI discloses The method of claim 13, wherein the third value is 0 ([0262] an indicator (e.g., a flag) in the slice header is signaled to indicate whether CNN filter is activated for current slice).
Regarding claim 17. LI discloses The method of claim 1, wherein the conversion includes encoding the current video unit into the bitstream, or
wherein the conversion includes decoding the current video unit from the bitstream ([0004] techniques that can be used by image, audio or video encoders and decoders; [0005] a conversion between visual media data and a bitstream of the visual media data).
Regarding claim 18. The same analysis as set forth for claim 1 applies.
Regarding claim 19. The same analysis as set forth for claim 1 applies.
Regarding claim 20. The same analysis as set forth for claim 1 applies.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2, 6-10 are rejected under 35 U.S.C. 103 as being unpatentable over LI et al. (US 20220109890 A1) in view of LI’483 (US 20220191483 A1).
Regarding claim 2. LI in view of LI’483 discloses The method of claim 1, wherein the bitstream further comprises a second indication indicating an identifying number of the target NNPF (LI [0265] The number of different CNN filter models and/or sets of CNN filter models may be signaled to the decoder; LI’483 [0170] As an example, different CNN filters can be used for different layers, different components (e.g., luma, chroma, Cb, Cr, etc.), different specific video units, etc. Flags and/or indices can be signaled to indicate which CNN filter should be used for each video item; LI’483 [0185]-[0186] indicators of one or several NN filter model indices may be signaled for a video unit, the indicator is the NN filter model index).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of LI and LI’483, to signal an index to indicate which CNN filter should be used, in order to efficiently process the video.
Regarding claim 6. LI’483 discloses The method of claim 2, wherein the bitstream further comprises a third indication indicating deactivation of the target NNPF ([0019] a flag is signaled to indicate whether the NN filter model index is at least partially predicted based on a previous NN filter model index or is inherited from the previous NN filter model index).
The same motivation as stated for claim 2 applies.
Regarding claim 7. LI discloses The method of claim 6, wherein the third indication is comprised in an NNPFA SEI message in the bitstream ([0264] The CNN information may be signaled as a SEI message).
Regarding claim 8. LI in view of LI’483 discloses The method of claim 6, wherein the first indication comprises a syntax element nnpfa_persistence_flag, or the third indication comprises a syntax element nnpfa_cancel_flag (LI [0203] Whether and/or how to use CNN filters (denoted as CNN information) may be controlled at a video unit (e.g., sequence/picture/slice/tile/brick/subpicture/CTU/CTU row/one or multiple CUs or CTUs/CTBs) level; LI’483 [0019] a flag is signaled to indicate whether the NN filter model index is at least partially predicted based on a previous NN filter model index or is inherited from the previous NN filter model index).
Regarding claim 9. LI in view of LI’483 discloses The method of claim 6, wherein the third indication equal to a first value indicates that persistence of the target NNPF established by a previous NNPFA SEI message with a same second indication as a current SEI message is cancelled (LI [0264] The CNN information may be signaled as a SEI message; LI’483 [0019] a flag is signaled to indicate whether the NN filter model index is at least partially predicted based on a previous NN filter model index or is inherited from the previous NN filter model index; LI’483 [0170] The CNN filters can be signaled based on whether a neighbor video unit uses the filter; LI’483 [0195] indicators of one or several NN filter model indices in current video unit may be inherited from previously coded/neighboring video units).
Regarding claim 10. LI’483 discloses The method of claim 9, wherein the first value is 1 (LI’483 [0019] a flag is signaled to indicate whether the NN filter model index is at least partially predicted based on a previous NN filter model index or is inherited from the previous NN filter model index; LI’483 [0170] The CNN filters can be signaled based on whether a neighbor video unit uses the filter; LI’483 [0195] indicators of one or several NN filter model indices in current video unit may be inherited from previously coded/neighboring video units).
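To illustrate the cancellation semantics mapped in claims 9-10, the following Python sketch models a cancel indication equal to 1 removing the persistence established by an earlier activation message carrying the same identifying number. The names (update_active_filters, active_ids, cancel_flag, persistence_flag) are hypothetical, and the logic is only an assumption consistent with the claim language, not a disclosure of LI or LI’483.

def update_active_filters(active_ids, filter_id, cancel_flag, persistence_flag):
    """Update the set of persisting NNPF ids after parsing one activation message."""
    if cancel_flag == 1:
        active_ids.discard(filter_id)   # cancel persistence of the same-id filter
    elif persistence_flag == 1:
        active_ids.add(filter_id)       # filter persists for subsequent video units
    return active_ids

# Example: a later message with cancel_flag = 1 and the same id ends the persistence.
active = set()
update_active_filters(active, filter_id=5, cancel_flag=0, persistence_flag=1)
assert active == {5}
update_active_filters(active, filter_id=5, cancel_flag=1, persistence_flag=0)
assert active == set()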
Allowable Subject Matter
Claims 11-12, 15-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU, whose telephone number is (571) 270-7580. The examiner can normally be reached Mon. through Fri., 9 am to 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH V. PERUNGAVOOR can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAOLAN XU/ Primary Examiner, Art Unit 2488