Prosecution Insights
Last updated: April 19, 2026
Application No. 18/851,620

MESSAGING PARAMETERS FOR NEURAL-NETWORK POST FILTERING IN IMAGE AND VIDEO CODING

Final Rejection §103
Filed: Sep 26, 2024
Examiner: BENNETT, STUART D
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Dolby Laboratories Licensing Corporation
OA Round: 2 (Final)
Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 54%

Examiner Intelligence

Career Allow Rate: 69% — above average (245 granted / 355 resolved; +11.0% vs TC avg)
Interview Lift: -15.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 5m (typical timeline; 31 currently pending)
Total Applications: 386 (across all art units)

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)
Comparisons are against an estimated Tech Center average • Based on career data from 355 resolved cases

Office Action

§103
DETAILED ACTION

The present Office action is in response to the amendments filed on 10 December 2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claim 13 has been amended. Claims 16 and 17 have been added. No claims have been canceled. Claims 1-17 are pending and herein examined.

Response to Arguments

Applicant's arguments filed 10 December 2025 have been fully considered but they are not persuasive.

With regard to claim 1, rejected under 35 U.S.C. § 103 as being unpatentable over Choi et al., “AHG9/AHG11: SEI messages for carriage of neural network information for post-filtering,” 20-28 Apr. 2021, JVET-V0091-v2 (hereinafter “Choi”) in view of U.S. Publication No. 2022/0329837 A1 (hereinafter “Li”), Applicant alleges the following:

“In contrast, Choi is absent of any mention of ‘persist’, much less the claim limitations as featured in Claim 1. Even if the first and second SEI messages in Choi could be hypothetically analogized to the first and second sets of NNPF messaging parameters respectively in Claim 1 (a point not conceded by Applicant), nothing in the cited art including Choi discloses that the first SEI message (the first set of NNPF messaging parameters under the Office Action’s analogy) persists until the end of decoding the coded video sequence, and the second SEI message (the second set of NNPF messaging parameters under the Office Action’s analogy) persists until the end of NN post-filtering of the decoded image as featured in Claim 1.” (Remarks, p. 3.)

The Examiner respectfully disagrees. The broadest reasonable interpretation for the persistence of the messages is based on the specification of the instant application, because the claim does not specify the mechanism by which persistence occurs (e.g., there is no flag or other syntax in either message for persistence).
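The scope-based reading of persistence at issue here can be pictured with a short sketch. This is an editorial illustration only, not code from the application or the cited art; the class and method names are invented, and the two scopes simply mirror the CLVS-level and picture-level persistence described in ¶ [0021]:

```python
# Illustration only: models persistence by signaling level rather than by any
# explicit persistence syntax. All names here are hypothetical, not from the record.

class NnpfState:
    """Tracks two persistence scopes for NNPF messaging parameters."""

    def __init__(self):
        self.sequence_params = None  # CLVS scope: until the end of the coded video sequence
        self.picture_params = None   # picture scope: until the current image is filtered

    def receive_sequence_message(self, params):
        self.sequence_params = params  # applies to every picture in the sequence

    def receive_picture_message(self, params):
        self.picture_params = params   # applies to the current decoded image only

    def finish_picture_filtering(self):
        self.picture_params = None     # picture-scope parameters expire here

    def finish_sequence(self):
        self.sequence_params = None    # sequence-scope parameters expire here
        self.picture_params = None


state = NnpfState()
state.receive_sequence_message({"topology": "model-A"})
state.receive_picture_message({"nn_used_id": 0})
state.finish_picture_filtering()
# The sequence-level parameters survive the picture; the picture-level ones do not.
assert state.sequence_params is not None and state.picture_params is None
```

Under this reading, expiry follows from where a message is signaled, not from any flag inside it, which is the crux of the Examiner's response above.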
Paragraph [0021] of the original specification states the following: “Two levels of NNPF-related messaging are proposed: 1) at the CLVS (Coded Layer Video Sequence) layer (where NNPF operations persist until the end of the video sequence), and 2) at the Picture layer (where NNPF operations persist only until the end of the current layer).” (Specification, ¶ [0021].)

In view of the disclosure of ¶ [0021], the broadest reasonable interpretation is that the first message is a message signaled at a higher level than the picture level and intended for a plurality of pictures, consistent with the sequence level, and that the second message is a message signaled at the picture level. Therefore, persistence is based either on signaling at the picture level, for persistence until the end of the decoded image, or on signaling higher than the picture level, for persistence across a plurality of decoded images until the end of the coded video sequence.

Choi’s disclosure describes two messages: a first message with NN topology and a second message identifying NN models to be applied to each picture or block. See Choi, p. 1, Abstract. The following summarizes the relationship between the two messages in Choi’s disclosure:

“Beyond that, most post-/inloop-filtering methods utilize multiple NN models, so that the best NN model can be selectively applied to each picture or block. Fig. 2 illustrates an example that the picture is hypothetically divided into multiple blocks, and each block is processed by a different NN inference process. To provide such a picture/block-level adaptation information of multiple NNs, another SEI message (neural_network_inference_process_info) is proposed. The proposed neural_network_inference_process_info SEI message contains the information on where a specific NN inference process is applied. The NN ID is assigned to each picture or block to identify the corresponding NN model, where a unique value of NN ID is signaled in each neural_network_topology_parameter_info SEI message.” (Choi, p. 3, Section 2.)

The second message (i.e., neural_network_inference_process_info) is described as including information at the picture level for identifying a NN model to be applied thereto. Thus, the second message being at the picture level means the information therein persists until the end of the decoded image. In contrast with the second message, the first message provides NN topology that can be applied to any picture and must exist prior to the second message, which makes reference to it. Consistent with VVC and HEVC, such a first message is transmitted at a higher level than the picture level, and the sequence level services all pictures. Because the first message is considered signaled at a level higher than the picture level, such as a sequence level, the information included in the first message will persist until the end of the coded video sequence.

The Examiner also observes that the first set of NNPF messaging parameters required by the claims is part of a first message that represents the same type of message disclosed in Choi. Paragraph [0028] of the instant application describes the CLVS-Layer NNPF SEI message as including the “network topology and model parameters.” Choi’s disclosure of the first message is the “neural network topology and parameter SEI message.” See Choi, p. 4, Section 3.1. The first message of Choi represents the same NN topology message of the instant application that is intended to function with a plurality of pictures associated with corresponding second messages.

For all the reasons stated above, the rejection is maintained.

Examiner’s Note: An interview was held after the Non-Final rejection explaining the Examiner’s position above. See Applicant-Initiated Interview Summary PTO-413 mailed on 11/28/2025.
At the time, it was indicated that the “persist” aspect appeared to be how the system would handle the messages based on whether the message was intended for a picture or for every picture (i.e., the second and first message, respectively), irrespective of any particular syntax in the messages (e.g., there is no parameter indicating persistence). Therefore, because the first message can be applied to any picture, it follows that it persists until the end of the coded video sequence, whereas the second message persists until the end of the decoded image because its purpose is to identify a NN for a picture. The interview summary further included pathways to advance prosecution, because the syntax elements of the first and second messages as described in the specification of the instant application are not all the same as those in the prior art of record.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al., “AHG9/AHG11: SEI messages for carriage of neural network information for post-filtering,” 20-28 Apr. 2021, JVET-V0091-v2 (hereinafter “Choi”) in view of U.S. Publication No. 2022/0329837 A1 (hereinafter “Li”).

Regarding claim 1, Choi discloses a method to process with neural-networks post filtering (NNPF) one or more pictures in a coded video sequence (Abstract, p. 1, “This contribution proposes an updated SEI message design for carriage of a neural network (NN) topology and parameters that are utilized for post filtering with neural network models”), the method comprising:

receiving a decoded image and NNPF metadata related to processing the decoded image with NNPF (2. Things to be specified, p. 3, “NN topologies and parameters are transmitted,” and “When decoded, all concatenated chunk data in SEI messages for representation of neural network are spliced and consumed by neural network library or decoders,” and “The NN ID is assigned to each picture or block to identify the corresponding NN model”);

parsing syntax parameters in the NNPF metadata to perform NNPF according to one or more neural-network models, associated NNPF data, and NNPF parameters (2. Things to be specified, p. 3, “NN topologies and parameters are transmitted,” and “When decoded, all concatenated chunk data in SEI messages for representation of neural network are spliced and consumed by neural network library or decoders,” and “The NN ID is assigned to each picture or block to identify the corresponding NN model”); and

performing NNPF on the decoded image according to the syntax parameters to generate an output image, wherein the syntax parameters in the NNPF metadata comprise a first set of NNPF messaging parameters that persist until the end of decoding the coded video sequence and a second set of NNPF messaging parameters that persist until the end of NN post-filtering of the decoded image (Abstract, p. 1, “two SEI messages are proposed; the first SEI specifies the internal or external carriage of a NN topology and its parameters, and the second one specifies the organization of multiple NN models, which are carried by the first SEI messages, for the post-processing. The second SEI message contains the information on which model is applied to each picture or block among multiple candidate models.” The first message is in section 3.1.1 and the second message is in section 3.2.1, where the first message includes topology information for the NN and is therefore considered persistent information for the plurality of decoded images (e.g., persistent for the sequence), whereas the second message includes picture/block information. 2. Things to be specified, p. 3, “most post-/inloop-filtering methods utilize multiple NN models, so that the best NN model can be selectively applied to each picture or block.” Therefore, the information in the second message is considered persistent information until the end of the decoded picture).

Choi fails to expressly disclose a method and steps of a method, because Choi is a technical document implying the steps/structure of the technical features disclosed therein. However, Li teaches a method and steps of a method for processing NN SEI messages (FIGS. 17 and 18 depict an encoder and decoder, respectively, with their respective SEI steps in FIGS. 19 and 20. [0190], “The bitstream 1300 also contains one or more SEI messages, such as SEI message 1322, which contain supplemental enhancement information.” [0198], “The present disclosure describes methods and techniques to allow the encoder to signal to the decoder which NN filter model to use for each video unit. The video unit may be a sequence of pictures, a picture, a slice, a tile, a brick, a subpicture, a coding tree unit (CTU), a CTU row, a coding unit (CU), etc. As an example, different NN filters can be used for different layers, different components (e.g., luma, chroma, Cb, Cr, etc.), different specific video units, etc. Flags and/or indices can be signaled via one or more rules to indicate which NN filter model should be used for each video item”).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have used well-known implementations of SEI messages, as taught by Li (FIGS. 17-20), in Choi’s disclosure. One would have been motivated to modify Choi’s disclosure, by incorporating Li’s disclosure, because it is an obvious combination of prior art elements according to known methods for implementing neural-network post processing of decoded information with the predictability of successfully implementing the technical document of Choi.

Regarding claim 2, Choi and Li disclose every limitation of claim 1, as outlined above. Additionally, Choi discloses wherein the first set of NNPF messaging parameters comprises one or more of: an NNPF model information is present flag, indicating NNPF model information is present in the NNPF metadata; an NNPF joint model flag (nnpf_joint_model_flag) indicating whether NNPF applies or not identical neural network models for both luma and chroma components; an NNPF number of picture types parameter (nnpf_num_pic_type_minus1) indicating a number of different picture types being supported by NNPF; an array of NNPF model IDs (nnpf_model_id[i]) to identify each NNPF model; first parameters related to neural networks topology and model information (3.1.1 Neural network model topology and parameter SEI message syntax, p. 4, discloses NN topology and model syntax elements, including at least nn_topology_info_external_present_flag and nn_parameter_info_external_present_flag); second parameters related to data information in the decoded image; and third parameters related to NNPF auxiliary information.
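The two-SEI-message relationship the rejection relies on (topology messages identified by a unique NN ID, and a per-picture inference-process message that selects among them for each block) can be sketched as follows. This is an editorial illustration with invented names, not code from Choi, Li, or the application:

```python
# Illustration only: invented names modeling the Choi-style linkage between a
# topology/parameter SEI (one per NN model, carrying a unique NN ID) and an
# inference-process SEI (per picture, selecting a model for each block).

class NnModelRegistry:
    def __init__(self):
        self.topologies = {}  # nn_id -> topology/parameter payload

    def add_topology_message(self, nn_id, payload):
        # Each topology/parameter SEI signals a unique NN ID for later reference.
        self.topologies[nn_id] = payload

    def model_for_block(self, nn_used_id, block_model_index):
        # The inference-process SEI lists candidate IDs (cf. nn_used_id[]) and a
        # per-block index (cf. nn_block_model_index[i]) into that list.
        return self.topologies[nn_used_id[block_model_index]]


registry = NnModelRegistry()
registry.add_topology_message(7, "denoise-topology")
registry.add_topology_message(9, "sharpen-topology")
# Picture-level message: candidate models [7, 9]; this block uses index 1.
assert registry.model_for_block([7, 9], 1) == "sharpen-topology"
```

The registry must be populated before any per-picture selection can resolve, which is the ordering point the Examiner draws on when mapping the topology message to sequence-level persistence.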
Regarding claim 3, Choi and Li disclose every limitation of claim 2, as outlined above. Additionally, Choi discloses wherein the first parameters related to neural networks topology and model information comprise one or more of: a flag indicating whether detailed information for a NN model used in NNPF is provided using an external link (3.1.1 Neural network topology and parameter SEI message syntax, p. 4, discloses NN model topology and parameter syntax elements, including at least nn_topology_info_external_present_flag and nn_parameter_info_external_present_flag for indicating NN model information from an external link. 3.1.2 Neural-network based post-filtering SEI message semantics, p. 7, defines nn_parameter_info_external_present_flag as “equal to 0 specifies that the data of neural network parameters is contrained in the SEI message. nn_parameter_info_external_present_flag equal to 1 specifies that the data of neural network parameters may be externally present and the SEI message contains the external linkage information only”); an NNPF storage and exchange data format parameter; an NNPF arithmetic precision parameter; an NNPF number of models parameter; and an NNPF latency estimate parameter.

Regarding claim 4, Choi and Li disclose every limitation of claim 2, as outlined above. Additionally, Choi discloses wherein the second parameters related to data information in the decoded image comprise one or more of: an input chroma format parameter; a packing format parameter; a chroma-dependency format parameter; an input tensor format parameter; a picture padding parameter; and a temporal picture flag indicating the presence of temporal neighbor pictures as an auxiliary input (The “second parameters related to data information” is an alternative option in claim 2, which is not selected as the basis of the rejection; the “first parameters” option is relied on instead. 3.1.1 Neural network topology and parameter SEI message syntax, p. 4, discloses NN model topology and parameter syntax elements).

Regarding claim 5, Choi and Li disclose every limitation of claim 4, as outlined above. Additionally, Choi discloses wherein the picture padding parameter comprises: 0, for zero padding; 1, for replication padding; and 2, for reflection padding (The “picture padding parameter” is a part of the “second parameters related to data information,” an alternative option in claim 2 that is not selected as the basis of the rejection; the “first parameters” option is relied on instead. 3.1.1 Neural network topology and parameter SEI message syntax, p. 4, discloses NN model topology and parameter syntax elements).

Regarding claim 6, Choi and Li disclose every limitation of claim 4, as outlined above. Additionally, Choi discloses wherein the second parameters related to data information further comprise one or more of: a flag indicating whether auxiliary input data is present in the input tensor format parameter of the NNPF metadata; and a flag indicating that a distinct combination of color primaries, transfer characteristics, and matrix coefficients for the NNPF metadata are present (The “second parameters related to data information” is an alternative option in claim 2, which is not selected as the basis of the rejection; the “first parameters” option is relied on instead. 3.1.1 Neural network topology and parameter SEI message syntax, p. 4, discloses NN model topology and parameter syntax elements).

Regarding claim 7, Choi and Li disclose every limitation of claim 2, as outlined above. Additionally, Choi discloses wherein the third parameters related to NNPF auxiliary information comprise an NNPF auxiliary input identifier which indicates availability of auxiliary inputs comprising one or more of: a QP map; a partition map; and a classification map (The “third parameters related to NNPF auxiliary information” is an alternative option in claim 2, which is not selected as the basis of the rejection; the “first parameters” option is relied on instead. 3.1.1 Neural network topology and parameter SEI message syntax, p. 4, discloses NN model topology and parameter syntax elements).

Regarding claim 8, Choi and Li disclose every limitation of claim 1, as outlined above. Additionally, Choi discloses wherein the second set of NNPF messaging parameters comprises an NNPF picture model ID specifying a NN post filter to be used for the decoded image (3.2.1 Neural network inference process SEI message syntax, p. 13, discloses “nn_used_id[i]”, defined in section 3.2.2 as “nn_used_id[i] indicates the identifier of the i-th neural network model that is used for the picture”).

Regarding claim 9, Choi and Li disclose every limitation of claim 8, as outlined above. Additionally, Choi discloses wherein the second set of NNPF messaging parameters further comprises one or more of: picture QP related metadata; picture partition related metadata (3.2.1 Neural network inference process SEI message syntax, p. 14, discloses nn_pic_width_in_luma_samples, nn_pic_height_in_luma_samples, nn_num_block_columns_minus1, nn_num_block_rows_minus1); picture classification related metadata; a dependency flag indicating whether signaled NN post-filtering is independent of or dependent on other NN post filters (3.2.1 Neural network inference process SEI message syntax, p. 14, discloses nn_block_inference_enabled_flag[i], defined in section 3.2.2 as “nn_block_inference_enabled_flag[i] equal to 0 indicates neural network inference process is not applied to the i-th block. nn_block_inference_enabled_flag[i] equal to 1 indicates the neural network inference process associated with nn_block_model_index[i] is applied to the i-th block”); and, if the dependency flag indicates dependency on other NN post filters, further comprising: a preceding number variable indicating how many NN post filters should precede in processing order a current NNPF specified by a picture-layer NNPF identity variable; and an array of NNPF identity variables of NN post-filters which should precede in processing order the current NNPF (3.2.1 Neural network inference process SEI message syntax, p. 14, discloses nn_block_model_index[i], defined in section 3.2.2 as “nn_block_model_index[i] indicates that the neural network model associated with nn_used_id_[ nn_block_model_index[i] ] is applied to the-ith block. The length of the syntax element is Ceil( Log2( num_nn_models_minus1 + 1) ) bits”).

Regarding claim 10, Choi and Li disclose every limitation of claim 9, as outlined above. Additionally, Choi discloses wherein the picture QP related metadata comprise one or more of: an NNPF QP info present flag indicating the presence of QP information; an NNPF region info flag indicating the presence of region information; an NNPF region QP present flag indicating the presence of region-based QP information; and, if the NNPF QP info present flag is set, further comprising QP information for at least one region (The “picture QP related metadata” is an alternative option in claim 9, which is not selected as the basis of the rejection; the “picture partition related metadata” and “a dependency flag” are relied on instead. 3.2.1 Neural network inference process SEI message syntax, p. 14, discloses the partition syntax and block dependency through inference).

Regarding claim 11, Choi and Li disclose every limitation of claim 9, as outlined above.
Additionally, Choi discloses wherein the picture partition related metadata comprise: an NNPF region partition present flag indicating the presence of NNPF region partition information; and if the NNPF region partition present flag is set, further comprising at least one picture partition map (The “picture partition related metadata” is an alternative option in claim 9, which is not selected as the basis of the rejection of this claim; the “dependency flag” is relied on instead. 3.2.1 Neural network inference process SEI message syntax, p. 14, discloses block dependency through inference).

Regarding claim 12, Choi and Li disclose every limitation of claim 9, as outlined above. Additionally, Choi discloses wherein the picture classification related metadata comprise one or more of: an NNPF picture classification present flag indicating the presence of picture classification information; and if the NNPF picture classification present flag is set, further comprising picture classification for at least one region (The “picture classification related metadata” is an alternative option in claim 9, which is not selected as the basis of the rejection; the “picture partition related metadata” and “a dependency flag” are relied on instead. 3.2.1 Neural network inference process SEI message syntax, p. 14, discloses the partition syntax and block dependency through inference).

Regarding claim 13, the limitations are the same as those in claim 1; however, they are written from the perspective of the encoder instead of the decoder, whose steps are well known to be the inverse. Therefore, the same rationale of claim 1 applies equally as well to claim 13.

Regarding claim 14, the limitations are the same as those in claim 1. Therefore, the same rationale of claim 1 applies equally as well to claim 14. Additionally, Li discloses a non-transitory computer-readable storage medium having stored thereon computer-executable instructions for executing with one or more processors ([0365], “a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor cause the processor”). The same motivation of claim 1 applies to claim 14.

Regarding claim 15, the limitations are the same as those in claim 1. Therefore, the same rationale of claim 1 applies equally as well to claim 15. Additionally, Li discloses an apparatus comprising a processor ([0365], “an apparatus for coding video data comprising a processor”). The same motivation of claim 1 applies to claim 15.

Regarding claim 16, the limitations are the same as those in claim 13; however, they are written in the form of a non-transitory computer-readable storage medium storing computer-executable instructions for the method of claim 13. Therefore, the same rationale of claim 13 applies equally as well to claim 16. Additionally, Li discloses a non-transitory computer-readable storage medium having stored thereon computer-executable instructions ([0023], “a non-transitory memory with instructions thereon”). The same motivation of claim 13 applies to claim 16.

Regarding claim 17, the limitations are the same as those in claim 13; however, they are written as an apparatus claim with structural components for carrying out the method of claim 13. Therefore, the same rationale of claim 13 applies equally as well to claim 17. Additionally, Li discloses an apparatus comprising a processor (FIG. 15, apparatus 1500 with processor 1502). The same motivation of claim 13 applies to claim 17.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STUART D BENNETT, whose telephone number is (571) 272-0677. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STUART D BENNETT/
Examiner, Art Unit 2481

Prosecution Timeline

Sep 26, 2024: Application Filed
Sep 30, 2025: Non-Final Rejection — §103
Nov 24, 2025: Examiner Interview Summary
Nov 24, 2025: Applicant Interview (Telephonic)
Dec 10, 2025: Response Filed
Mar 24, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574559: ENCODER, A DECODER AND CORRESPONDING METHODS FOR ADAPTIVE LOOP FILTER ADAPTATION PARAMETER SET SIGNALING
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12568300: ELECTRONIC APPARATUS, METHOD FOR CONTROLLING ELECTRONIC APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR GUI CONTROL ON A DISPLAY
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12563191: CROSS-COMPONENT SAMPLE OFFSET
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12542925: METHOD AND DEVICE FOR INTRA-PREDICTION
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12542934: ZERO-DELAY PANORAMIC VIDEO BIT RATE CONTROL METHOD CONSIDERING TEMPORAL DISTORTION PROPAGATION
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview (-15.0%): 54%
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 355 resolved cases by this examiner. Grant probability derived from career allow rate.
