Prosecution Insights
Last updated: April 19, 2026
Application No. 18/013,645

A METHOD AND AN APPARATUS FOR UPDATING A DEEP NEURAL NETWORK-BASED IMAGE OR VIDEO DECODER

Non-Final OA: §101, §103

Filed: Dec 29, 2022
Examiner: SAX, STEVEN PAUL
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: InterDigital Madison Patent Holdings SAS
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Grants 70% of resolved cases — above average.

Career Allow Rate: 70% (320 granted / 460 resolved; +14.6% vs TC avg)
Interview Lift: +44.8% (resolved cases with interview; a strong lift)
Typical Timeline: 4y 0m average prosecution; 20 applications currently pending
Career History: 480 total applications across all art units
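These headline figures are internally consistent, and a short worked check makes the arithmetic explicit. A minimal sketch in Python; the rounding convention, the 99% cap, and the relative reading of the interview lift are assumptions, not the dashboard's documented methodology:

```python
# Sanity check of the examiner statistics above. Rounding conventions and
# the interview-lift formula are assumptions, not the tool's documented math.
granted, resolved = 320, 460

allow_rate = granted / resolved                 # 0.6957, displayed as 70%
print(f"Career allow rate: {allow_rate:.1%}")   # -> 69.6%

# One plausible reading of the +44.8% interview lift: a relative multiplier
# on the base allow rate, capped just below certainty.
with_interview = min(allow_rate * (1 + 0.448), 0.99)
print(f"With interview: {with_interview:.0%}")  # -> 99%
```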

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 62.5% (+22.5% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 460 resolved cases.
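Reading each delta as examiner rate minus TC average, the Tech Center baseline can be recovered from the figures above. A minimal sketch; the subtraction convention is an assumption suggested by the "vs TC avg" label:

```python
# Recover the implied Tech Center averages from the per-statute figures.
# Assumes delta = examiner rate - TC average.
figures = {"§101": (10.4, -29.6), "§103": (62.5, +22.5),
           "§102": (6.7, -33.3), "§112": (5.5, -34.5)}

for statute, (rate, delta) in figures.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")
# All four statutes imply the same 40.0% baseline, suggesting the original
# chart's "TC average estimate" was a single flat line, not per-statute data.
```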

Office Action

Rejections: §101, §103
Detailed Action

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. The Preliminary Amendment filed 12/29/22 has been entered. Claims 1, 4, 7, 9, 11, 13-14, 16-18, 21, 28, and 38-45 are pending. Claims 1, 4, 7, 9, 11, 13-14, 16-18, 21, 28, and 38-40 have been amended. Claims 41-45 are newly added. Claims 2-3, 5-6, 8, 10, 12, 15, 19-20, 22-27, and 29-37 have been cancelled.

3. The Information Disclosure Statements (IDS) filed 12/29/22 and 7/15/25 have been entered and acknowledged.

Claim Rejections - 35 USC § 101

4. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

5. Claims 38-41 and 45 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because claim 38 recites "a computer readable medium comprising a bitstream comprising data…" There is no definition of "computer readable medium" in the Specification and no disavowal statement; therefore, the broadest reasonable interpretation of "computer readable medium" could include a signal, rather than claiming a physical medium storing the data, a process that manipulates the data in a specific way that transforms something physical, or a machine that uses the data in an inventive physical system. Therefore, claim 38 is not directed toward patent-eligible subject matter. Claim 41, dependent from claim 38, does not remedy the issue and is rejected as well. Claims 39 and 40 likewise recite "a computer readable storage medium," but there is no definition of this term in the Specification and no disavowal statement; therefore, the broadest reasonable interpretation of "computer readable storage medium" could include a signal. These claims are therefore rejected for the same reasons as above. Claim 45, dependent from claim 39, does not remedy the issue and is rejected as well.

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claims 1, 4, 7, 9, 14, 16, 17, 18, 21, and 38-40 are rejected under 35 U.S.C. 103 as being unpatentable over Owada et al. "Owada" (US 2020/0118003 A1) and Jung et al. "Jung" (FR 3096538 A1). (Please see the attached copy of Jung that numbers paragraphs in the same manner as that used in this Action.)
8. Regarding claim 1, Owada shows: obtaining a latent representative of at least one part of at least one image (para 16, 24 show obtaining a latent image); using at least one update parameter representative of a modification to apply to a deep neural network-based decoder (para 25, 37, and 45 show an updated parameter used to modify the trained decoder network; para 16, 22, and 42 show the decoder network is a deep neural network); modifying the deep neural network-based decoder based on the update parameter (para 45, 48, 73 show updating/modifying the decoder based on the update parameter; as noted, para 16, 22, and 42 show the decoder is a deep neural network); and reconstructing the at least one part of the at least one image from the latent using at least said modified deep neural network-based decoder (para 17, 24, 45, 73 show reconstructing the volume data to reconstruct the original image from the latent image, using the updated deep neural network-based decoder). Owada does not explicitly show decoding the update parameter itself such that modifying the neural network is based on the decoded update parameter. Jung, however, does show decoding the update parameter and that modifying the neural network is based on the decoded update parameter (para 35 shows decoding a control parameter from the coded data stream, and para 40 shows that control parameter corresponds to an update parameter of the neural network; para 158, 159, 168, 202 show the neural network is used to decode image data). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention for the update parameter itself to be decoded in Owada, because it would provide an efficient way to use an encoder/decoder network to receive the data. Doing so allows the image and control data to be received together, decoded, and used accordingly, as shown in Jung (para 32, 35, 36).

9. Regarding claim 4, Owada shows: a method comprising obtaining at least one update parameter for modifying a deep-neural-network-based decoder (para 25, 37, 45 show receiving the updated parameters for modifying the decoder network; para 16, 22, and 42 show the decoder network is a deep neural network) defined from a training of a deep neural network-based auto-encoder using a first training configuration (para 25, 27-28, 51 show the decoder is defined from the training of the encoder/decoder network [which per para 32 comprises the auto-encoder] using a particular training configuration), said at least one update parameter being obtained as a function of a training of said deep neural network-based auto-encoder using a second training configuration (para 25, 37, 39, 45 show the update parameter is based on the training of the encoder/decoder network using a second training configuration; para 32, 41, 42, 66 show the function of the training of the encoder/decoder network using a second training configuration); and encoding at least one part of at least one image using at least the neural network-based auto-encoder trained using the second training configuration (para 44, 66-67 then show, after training the encoder/decoder network using the second training configuration, using the updated encoder of the encoder/decoder network to encode image data to output a latent image). Owada does not explicitly show encoding said at least one update parameter. Jung does show encoding the update parameter (the two lines immediately preceding para 140, plus para 140 and 200, show encoding the updated parameters). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to encode the update parameter in Owada, because it would provide an efficient way to use an encoder/decoder network to transmit the data. Doing so allows the image and control data to be transmitted and received together, to eventually be decoded and used accordingly, as shown in Jung (para 32, 35, 36).
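To make the claim-1 flow concrete — obtain a latent, decode an update parameter, modify the DNN-based decoder, reconstruct — here is a minimal hypothetical sketch. Every class, name, and shape below is invented for illustration; none of it comes from Owada, Jung, or the application itself:

```python
# Hypothetical illustration of the claim-1 method; not code from any cited
# reference. Requires PyTorch.
import torch
import torch.nn as nn

class DNNDecoder(nn.Module):
    """A toy deep neural network-based image decoder."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.ConvTranspose2d(192, 128, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(128, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        return self.layers(latent)

def apply_update(decoder: nn.Module, update: dict) -> None:
    """Modify the decoder based on the decoded update parameter: here,
    weight deltas added in place to the named parameters."""
    with torch.no_grad():
        for name, param in decoder.named_parameters():
            if name in update:
                param.add_(update[name])

decoder = DNNDecoder()
latent = torch.randn(1, 192, 16, 16)          # latent for one part of one image
update = {"layers.0.weight":                  # decoded update parameter
          0.01 * torch.randn(192, 128, 5, 5)}
apply_update(decoder, update)                 # modify the DNN-based decoder
reconstruction = decoder(latent)              # reconstruct from the latent
print(reconstruction.shape)                   # torch.Size([1, 3, 64, 64])
```

Signaling only weight deltas, as this sketch assumes, is one common way such update parameters stay small enough to carry in a bitstream.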
10. Claims 7 and 9 show the same features as claims 1 and 4, respectively, and are rejected for the same reasons. In addition, note that Owada para 26-27 show the apparatus with processors configured to perform the method steps accordingly.

11. Regarding claim 14, Owada shows said deep neural network-based decoder comprises a hyper decoder configured for decoding side information used by a decoder configured for decoding said bitstream, and wherein modifying said deep neural network-based decoder comprises updating said hyper decoder (Owada para 25, 37, 45 show the hyper decoder with updated hyper parameters). Para 36 and 42 show the mean-square-error statistical side information that entropy decoders tend to focus on, but Owada does not explicitly mention that the information is used by an entropy decoder per se for entropy decoding the bitstream. Jung, however, does show the entropy decoder using side information to entropy decode the bitstream (para 29 shows the entropy decoder using a refinement datum, and para 77 shows the refinement data may include extracting statistical characteristics). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to use side information for an entropy decoder, as is done in Jung, with the hyper decoder of Owada, because it would provide an efficient way to use a hyper decoder to decode the data for reconstructing an image. Doing so allows further refinement data to be decoded and used in reconstructing the image.

12. Regarding claim 16, Owada shows the deep-neural-network-based decoder is configured for outputting first reconstructed data obtained with said deep-neural-network-based decoder (para 16, 24, 40 show outputting the reconstructed data obtained by the deep neural network decoder), said first reconstructed data being used for reference by said deep-neural-network-based decoder (para 48-50 show the generated/reconstructed latent image may be used as an explosion image to serve as the reference image), and wherein said deep-neural-network-based decoder is configured for outputting second reconstructed data obtained with said modified decoder, said second reconstructed data being used for display (para 50 shows the decoder outputs a second reconstructed latent image, obtained with the modifications made to the decoder, accordingly for display).
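For the hyper-decoder arrangement discussed for claim 14 above, the usual pattern in learned image compression is a hyper decoder that produces side information (e.g., per-element means and scales) which parameterizes the entropy model used to decode the main bitstream. A hypothetical sketch of that pattern, following the general hyperprior design from the compression literature rather than either cited reference:

```python
# Hypothetical hyperprior-style sketch; not taken from Owada or Jung.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperDecoder(nn.Module):
    """Decodes side information (per-element mean and scale) from the
    hyper-latent; an entropy decoder would use it to decode the bitstream."""
    def __init__(self, ch: int = 192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2 * ch, 3, padding=1),
        )

    def forward(self, hyper_latent: torch.Tensor):
        mean, raw_scale = self.net(hyper_latent).chunk(2, dim=1)
        return mean, F.softplus(raw_scale)  # scales must be positive

def entropy_decode(bitstream: bytes, mean, scale):
    """Placeholder: a real codec would run an arithmetic/range decoder
    against a Gaussian model parameterized by the side information."""
    raise NotImplementedError

hyper = HyperDecoder()
mean, scale = hyper(torch.randn(1, 192, 8, 8))  # side information
# Updating the hyper decoder's own weights (claim 14's "modifying ...
# comprises updating said hyper decoder") would use the same in-place
# parameter update shown in the claim-1 sketch above.
```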
13. Regarding claim 17, Owada shows obtaining said at least one update parameter comprises: training said deep neural network-based auto-encoder using said first training configuration (para 25, 27-28 show training the auto-encoder using a first training configuration); storing learnable parameters of a decoder of said deep neural network-based auto-encoder (para 25, 37, 45 show the updated parameters resulting from the neural network learning are stored and available); and retraining said deep-neural-network-based decoder using said second training configuration, wherein said retraining comprises modifying said deep neural network-based decoder, said at least one update parameter being representative of said modification (para 45, 47, 48 show the decoder uses the second configuration based on the updated parameters, which modify the decoder).

14. Regarding claim 18, Owada shows the retraining comprises jointly retraining an encoder part of said deep-neural-network-based auto-encoder using said second training configuration (para 43, 44, 58, 66 show the encoder is also trained using the second configuration based on the updated parameters; para 43 and Figure 5 show how the auto-encoder includes the encoder and decoder networks, which are jointly trained).

15. Regarding claim 21, the second training configuration comprises a loss function based on at least one of a subjective quality and a metric for a machine task, or the second training configuration comprises a data set with a specific video content type (note the alternative recitation). Owada para 36, 46, 47, 70 show updated training using the loss function based on a particular color content with a reference image. Para 17 shows the data may specifically be video data.

16. Claim 38 shows the same features as claim 1 and is rejected for the same reasons. Additionally, note that para 27 shows the circuitry and processor logic that contains the bitstream with data. Please also see the 101 rejection.

17. Claims 39-40 show the same features as claims 1 and 4, respectively, and are rejected for the same reasons. Additionally, note that para 27-28 show the memories storing instructions to cause processors to carry out the method steps.

18. Claims 11, 13, 28, and 41-45 are rejected under 35 U.S.C. 103 as being unpatentable over Owada and Jung and Karras et al. "Karras" (US 2019/0171936 A1).

19. Regarding claim 11, in addition to that mentioned for claim 1, Owada and Jung do not explicitly show that modifying said deep neural network-based decoder comprises at least one of adding at least one new layer to said deep neural network-based decoder and updating at least one layer of a set of layers of said deep neural network. Please note the alternative recitation. Karras, however, does show modifying the deep neural network-based decoder by adding a new layer to the deep neural network-based decoder (para 49, 52, 58 show the updated parameter adds a new layer to the neural network-based encoder/decoder system; para 70, 88 show the new layer may be added to the decoder). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have the update parameter add a new layer to the decoder in Owada, especially as modified by Jung, because it would provide an efficient way to train a decoder network to reconstruct the image. Doing so allows the neural network of the decoder to add a new layer to process the data with the new training configuration.
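The Karras-based ground (claims 11, 13, 28, and 41-45, below) turns on an update parameter that indicates a structural change — adding a new layer — rather than only adjusting existing weights. A hypothetical sketch of what such a structural update could look like; the update-message format here is invented for illustration and is not from any cited reference:

```python
# Hypothetical structural-update sketch; the message format is invented.
import torch.nn as nn

def modify_decoder(decoder: nn.Sequential, update: dict) -> nn.Sequential:
    """Apply an update parameter that either adds a new layer to the
    decoder or replaces the weights of an existing layer."""
    if update["op"] == "add_layer":
        layers = list(decoder.children())
        layers.insert(update["index"], update["layer"])  # structural change
        return nn.Sequential(*layers)
    if update["op"] == "update_layer":
        decoder[update["index"]].load_state_dict(update["state"])
        return decoder
    raise ValueError(f"unknown op: {update['op']!r}")

decoder = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
decoder = modify_decoder(decoder, {
    "op": "add_layer", "index": 2,
    "layer": nn.Conv2d(64, 64, 3, padding=1),  # the indicated new layer
})
print(decoder)  # now three modules: Conv2d, ReLU, Conv2d
```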
20. Regarding claim 13, in addition to that mentioned for claim 4, Owada and Jung do not explicitly show that modifying said deep neural network-based decoder comprises at least one of adding at least one new layer to said deep neural network-based decoder and updating at least one layer of a set of layers of said deep neural network. Please note the alternative recitation. Karras, however, does show modifying the deep neural network-based decoder by adding a new layer to the deep neural network-based decoder (para 49, 52, 58 show the updated parameter adds a new layer to the neural network-based encoder/decoder system; para 70, 88 show the new layer may be added to the decoder). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have the update parameter add a new layer to the decoder in Owada, especially as modified by Jung, because it would provide an efficient way to train a decoder network to reconstruct the image. Doing so allows the neural network of the decoder to add a new layer to process the data with the new training configuration.

21. Regarding claim 28, in addition to that mentioned for claim 1, please note the alternative recitation. Owada and Jung do not explicitly show that the update parameter comprises an indication of adding at least one new layer to said deep neural network-based decoder. Karras, however, does show an update parameter that comprises adding a new layer to the deep neural network-based decoder (para 49, 52, 58 show the updated parameter adds a new layer to the neural network-based encoder/decoder system; para 70, 88 show the new layer may be added to the decoder). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have the update parameter add a new layer to the decoder in Owada, especially as modified by Jung, because it would provide an efficient way to train a decoder network to reconstruct the image. Doing so allows the neural network of the decoder to add a new layer to process the data with the new training configuration.

22. Regarding claim 41, in addition to that mentioned for claim 38, please note the alternative recitation. Owada and Jung do not explicitly show that the update parameter comprises an indication of adding at least one new layer to said deep neural network-based decoder. Karras, however, does show an update parameter that comprises adding a new layer to the deep neural network-based decoder (para 49, 52, 58 show the updated parameter adds a new layer to the neural network-based encoder/decoder system; para 70, 88 show the new layer may be added to the decoder). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have the update parameter add a new layer to the decoder in Owada, especially as modified by Jung, because it would provide an efficient way to train a decoder network to reconstruct the image. Doing so allows the neural network of the decoder to add a new layer to process the data with the new training configuration.

23. Regarding claim 42, in addition to that mentioned for claim 4, please note the alternative recitation. Owada and Jung do not explicitly show that the update parameter comprises an indication of adding at least one new layer to said deep neural network-based decoder.
Karras, however, does show an update parameter that comprises adding a new layer to the deep neural network-based decoder (para 49, 52, 58 show the updated parameter adds a new layer to the neural network-based encoder/decoder system; para 70, 88 show the new layer may be added to the decoder). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have the update parameter add a new layer to the decoder in Owada, especially as modified by Jung, because it would provide an efficient way to train a decoder network to reconstruct the image. Doing so allows the neural network of the decoder to add a new layer to process the data with the new training configuration.

24. Regarding claim 43, in addition to that mentioned for claim 7, please note the alternative recitation. Owada and Jung do not explicitly show that the update parameter comprises an indication of adding at least one new layer to said deep neural network-based decoder. Karras, however, does show an update parameter that comprises adding a new layer to the deep neural network-based decoder (para 49, 52, 58 show the updated parameter adds a new layer to the neural network-based encoder/decoder system; para 70, 88 show the new layer may be added to the decoder). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have the update parameter add a new layer to the decoder in Owada, especially as modified by Jung, because it would provide an efficient way to train a decoder network to reconstruct the image. Doing so allows the neural network of the decoder to add a new layer to process the data with the new training configuration.

25. Regarding claim 44, in addition to that mentioned for claim 9, please note the alternative recitation. Owada and Jung do not explicitly show that the update parameter comprises an indication of adding at least one new layer to said deep neural network-based decoder. Karras, however, does show an update parameter that comprises adding a new layer to the deep neural network-based decoder (para 49, 52, 58 show the updated parameter adds a new layer to the neural network-based encoder/decoder system; para 70, 88 show the new layer may be added to the decoder). It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have the update parameter add a new layer to the decoder in Owada, especially as modified by Jung, because it would provide an efficient way to train a decoder network to reconstruct the image. Doing so allows the neural network of the decoder to add a new layer to process the data with the new training configuration.

26. Regarding claim 45, in addition to that mentioned for claim 39, please note the alternative recitation. Owada and Jung do not explicitly show that the update parameter comprises an indication of adding at least one new layer to said deep neural network-based decoder. Karras, however, does show an update parameter that comprises adding a new layer to the deep neural network-based decoder (para 49, 52, 58 show the updated parameter adds a new layer to the neural network-based encoder/decoder system; para 70, 88 show the new layer may be added to the decoder).
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have the update parameter add a new layer to the decoder in Owada, especially as modified by Jung, because it would provide an efficient way to train a decoder network to reconstruct the image. Doing so allows the neural network of the decoder to add a new layer to process the data with the new training configuration.

27. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: a) Sun (CN 110569961 A) encodes/decodes parameters used for image reconstruction; b) Denli (CA 3122686) trains deep neural networks for image decoding; c) Kim (US 20210295606) reconstructs latent image data.

28. Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN PAUL SAX, whose telephone number is (571) 272-4072. The examiner can normally be reached Monday - Friday, 9:30 - 6:00 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Usmaan Saeed, can be reached at 571-272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/STEVEN P SAX/
Primary Examiner, Art Unit 2146

Prosecution Timeline

Dec 29, 2022
Application Filed
Dec 11, 2025
Examiner Interview (Telephonic)
Dec 13, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602537
METHODS FOR SERVING INTERACTIVE CONTENT TO A USER
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12596343
GRAPHICAL ELEMENT SEARCH TECHNIQUE SELECTION, FUZZY LOGIC SELECTION OF ANCHORS AND TARGETS, AND/OR HIERARCHICAL GRAPHICAL ELEMENT IDENTIFICATION FOR ROBOTIC PROCESS AUTOMATION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12547922
BENCHMARK-DRIVEN AUTOMATION FOR TUNING QUANTUM COMPUTERS
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12541708
TRUSTED AND DECENTRALIZED AGGREGATION FOR FEDERATED LEARNING
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12524691
CENTRAL CONTROLLER FOR A QUANTUM SYSTEM
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 99% (+44.8%)
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 460 resolved cases by this examiner. Grant probability derived from career allow rate.
