DETAILED ACTION
The present Office action is in response to the Request for Continued Examination (RCE) filed on January 9, 2026.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1, 21, and 22 have been amended. No claims have been cancelled or added. Claims 3-18 have been withdrawn. Claims 1 and 19-22 are pending and examined herein.
Response to Arguments
Applicant's arguments filed January 9, 2026 have been fully considered, but the arguments directed to the 35 U.S.C. § 103 rejections are not persuasive.
With regard to the 35 U.S.C. § 112(b) rejection of claim 19, Applicant’s position is that the “determining” of claim 1 is a proactive decision ensuring the bitstream will contain the claimed indicator. See Remarks, p. 2. The Examiner recognizes that the indicator is in the bitstream as a result of the encoder making such a decision; however, the claim had been interpreted as the encoder determining that the bitstream already contains the indicator, rather than as determining to include the indicator in the bitstream. Because Applicant’s position is that the “determining” is proactive, i.e., a determination to include the indicator in the bitstream, the claim will be given that interpretation. The 35 U.S.C. § 112(b) rejection of claim 19 is withdrawn.
With regard to the 35 U.S.C. § 103 rejection of claim 1, Applicant states:
“Ding fails to make up for the shortcomings of Na. Indeed, the reconstructed samples in Ding are only used for post-processing operations (such as deblocking filtering or enhancement), not for deriving neural network parameters. For example, the parameters of the deblocking NN are obtained from update information in the bitstream itself, rather than being calculated based on reconstructed samples. For example, column 43, lines 1-10 of Ding’s specification states that “Deblocking can be performed on the boundary regions A-D with the deblocking NN (2130). … The one or more of the boundary regions A-D can be sent to the deblocking NN (2130) to reduce the artifacts.”” (Remarks, p. 4.)
The Examiner respectfully disagrees. Ding describes in col. 26, ll. 42-61 the process of training the NN, including the use of a quality loss function based on the “difference between the reconstructed [block] (e.g., the reconstructed block t) and an original block (e.g., the training block t).” Ding also discloses that the one or more replacement parameters are based on optimizing a rate-distortion performance. See Ding, col. 28, ll. 40-56. Therefore, because the rate-distortion optimization is performed with reconstructed samples and the replacement parameters are determined therewith, the claim limitation of “using the second value to process a video unit of the video, wherein the second value is derived from coded information which comprises a reconstructed sample” is taught by the prior art of record.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2021/0021823 A1 (hereinafter “Na”) in view of U.S. Patent No. 12,058,314 B2 (hereinafter “Ding”).
Regarding claim 1, Na discloses a method of processing video data ([0009], “method using a convolutional neural network (CNN)-based filter”), comprising:
determining, for a conversion between a video and bitstream of the video ([0007], “video encoding or decoding operation”), that the bitstream includes an indicator, wherein the indicator indicates that a first parameter set for a neural network (NN) filter model includes different filter parameters than a second parameter set for the NN filter model ([0284], “the CNN setting unit 2210 may select one set of filter coefficients from the set sets based on the similarity of pixel values between the current block and the sample images. Alternatively, the CNN setting unit 2210 may select, from the set sets, a set of filter coefficients closest to the filter coefficients calculated through one training process. Filter coefficient selection information, for example, index information, may be transmitted to the video decoding apparatus through a bitstream and used for the video decoding operation.” [0304]-[0305] additionally describe that when the preset filter coefficients for the CNN are pre-stored, they can be differentiated with index information for selection during encoding/decoding. [0142]-[0146] describe the CNN-based filter); and
performing the conversion based on the indicator ([0284], “the CNN setting unit 2210 may select one set of filter coefficients from the set sets based on the similarity of pixel values between the current block and the sample images. Alternatively, the CNN setting unit 2210 may select, from the set sets, a set of filter coefficients closest to the filter coefficients calculated through one training process. Filter coefficient selection information, for example, index information, may be transmitted to the video decoding apparatus through a bitstream and used for the video decoding operation.” [0304]-[0305] additionally describe that when the preset filter coefficients for the CNN are pre-stored, they can be differentiated with index information for selection during encoding/decoding).
Na fails to expressly disclose wherein the method further comprises:
updating a filter parameter of the first parameter set or the second parameter set from a first value to a second value; and
using the second value to process a video unit of the video, wherein the second value is derived from coded information which comprises a reconstructed sample.
However, Ding teaches wherein the method further comprises:
updating a filter parameter of the first parameter set or the second parameter set from a first value to a second value (Col. 5, ll. 50-56, “decode neural network update information in the coded video bitstream where the neural network update information corresponds to one of the blocks and indicates a replacement parameter corresponding to a pretrained parameter in a neural network in the video decoder. The processing circuitry can reconstruct the one of the blocks based on the neural network updated with the replacement parameter”); and
using the second value to process a video unit of the video, wherein the second value is derived from coded information (Col. 5, ll. 50-56, “decode neural network update information in the coded video bitstream where the neural network update information corresponds to one of the blocks and indicates a replacement parameter corresponding to a pretrained parameter in a neural network in the video decoder. The processing circuitry can reconstruct the one of the blocks based on the neural network updated with the replacement parameter”) which comprises a reconstructed sample (col. 26, ll. 42-61 discloses a quality loss function using the “difference between the reconstructed e.g., the reconstructed block t) and an original block (e.g., the training block t).” Col. 28, ll. 40-56 discloses optimizing a rate-distortion performance for determining replacement parameters. See col. 18, ll. 7-17).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have updated a neural network parameter, as taught by Ding (Col. 5), in Na’s invention. One would have been motivated to modify Na’s invention by incorporating Ding’s teaching in order to improve the neural network functionality, thereby enhancing video quality.
Regarding claim 19, Na and Ding disclose every limitation of claim 1, as outlined above. Additionally, Na discloses wherein the conversion includes encoding the video into the bitstream ([0303], “The CNN setting unit 2610 may set the filter coefficients signaled from the video encoding apparatus.” [0379], “The filter coefficients calculated by the video encoding apparatus may be signaled to the video decoding apparatus through a bitstream.” [0006], “video encoding or decoding”).
Regarding claim 20, Na and Ding disclose every limitation of claim 1, as outlined above. Additionally, Na discloses wherein the conversion includes decoding the video from the bitstream ([0379], “The filter coefficients calculated by the video encoding apparatus may be signaled to the video decoding apparatus through a bitstream.” [0006], “video encoding or decoding.” [0284], “Filter coefficient selection information, for example, index information, may be transmitted to the video decoding apparatus through a bitstream and used for the video decoding operation”).
Regarding claim 21, the limitations are the same as those in claim 1; however, they are written in machine form instead of process form. Therefore, the rationale of claim 1 applies equally to claim 21. Additionally, Na discloses a processor and non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to [compress] ([0064], “The functions of the respective elements may be implemented in software, and a microprocessor may be implemented to execute the respective functions of the software.” [0403], “a computer program, and may be recorded on a non-transitory computer-readable medium”).
Regarding claim 22, the limitations are the same as those in claim 21. Therefore, the rationale of claim 21 applies equally to claim 22.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STUART D BENNETT whose telephone number is (571) 272-0677. The examiner can normally be reached Monday through Friday, 9:00 AM to 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STUART D BENNETT/Examiner, Art Unit 2481