Prosecution Insights
Last updated: April 19, 2026
Application No. 18/878,439

FINE-TUNING A LIMITED SET OF PARAMETERS IN A DEEP CODING SYSTEM FOR IMAGES

Status: Non-Final OA (§102, §103)
Filed: Dec 23, 2024
Examiner: JEBARI, MOHAMMED
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: InterDigital CE Patent Holdings SAS
OA Round: 1 (Non-Final)
Grant Probability: 55% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 9m
Grant Probability with Interview: 71%

Examiner Intelligence

Career Allow Rate: 55% (266 granted / 487 resolved; -3.4% vs TC avg)
Interview Lift: +16.4% (strong; allow rate in resolved cases with an interview vs. without)
Avg Prosecution: 3y 9m (typical timeline; 46 applications currently pending)
Total Applications: 533 (career history, across all art units)
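As a quick sanity check, the headline figures can be reproduced from the raw counts above. A minimal Python sketch follows; the numbers are copied from this page, and treating the interview lift as a simple percentage-point addition to the career rate is our assumption, not a documented model:

# Reproduce the page's headline figures from the raw counts shown above.
granted, resolved = 266, 487
allow_rate = granted / resolved                  # 0.546 -> displayed as 55%
interview_lift = 0.164                           # +16.4 points (assumed additive)
with_interview = allow_rate + interview_lift     # 0.710 -> displayed as 71%
print(f"career allow rate: {allow_rate:.1%}")    # 54.6%
print(f"with interview:    {with_interview:.1%}")  # 71.0%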

Statute-Specific Performance

§101: 4.4% (-35.6% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 18.2% (-21.8% vs TC avg)
§112: 17.2% (-22.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 487 resolved cases.
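The "vs TC avg" deltas also let you recover the Tech Center baseline the examiner is measured against. A small sketch (assuming each delta is a plain percentage-point difference) shows all four statutes point to the same implied baseline of 40.0%:

rates = {"§101": (4.4, -35.6), "§103": (50.3, 10.3),
         "§102": (18.2, -21.8), "§112": (17.2, -22.8)}
for statute, (examiner, delta) in rates.items():
    # implied TC average = examiner rate minus the reported delta
    print(f"{statute}: examiner {examiner:.1f}%, implied TC avg {examiner - delta:.1f}%")
# Every line prints an implied TC average of 40.0%.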

Office Action

Rejections: §102, §103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

2. The information disclosure statement (IDS) submitted on 12/23/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

3. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

4. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an encoder for encoding…; the encoder being configured to…; a decoder for decoding…; the decoder being configured to…” in claims 9 and 15.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. According to the teachings in the specification as originally filed, the encoder and decoder are described as software embodied in a structure of a processor, see page 15 line 29 – page 16 line 4. Thus, the claimed encoder and decoder are interpreted to be embodied on a processor.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

6. Claim(s) 1, 5-9, 13-16 and 18-19 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by CRICRÌ et al. (US 2023/0269387), published as WO2021255605, hereinafter “CRICRÌ”.

As per claim 1, CRICRÌ discloses a method for encoding an input image, the method comprising: determining, using a deep neural network based on a first model, an embedding representative of the input image (paragraph 0212, input the image to the encoding pipeline, obtaining a latent tensor; see also Fig. 8 or Fig. 10); selecting a subset of parameters to be updated based on the input image (paragraph 0171, a set of Overfitting Parameters (OPs) are selected; see also paragraphs 0212-0216, which teach that OPs are updated based on the input image); determining a parameter update for the selected subset of parameters (paragraphs 0212-0216 teach that OPs are updated) for fine-tuning a second neural network model based on the first model (paragraph 0152, various embodiments described herein propose to initially train the encoder and decoder NNs to be optimal for performing further learning or finetuning at test phase, for example, when the encoder-side device overfits some of the parameters to the content that needs to be encoded), wherein the fine-tuning is based on the input image (see Fig. 8) and a decoded version of the embedding as decoded using a deep neural network based on the second neural network model (paragraphs 0152-0153, in other words, during the training phase or during a fine-tuning phase, the NNs need to ‘learn to learn to compress and reconstruct’ a content that is provided at inference phase, by leveraging the meta-learning paradigm… during meta-learning, first one or more sets of parameters to be overfitted to a content being encoded are selected, for example, the latent tensor output by the encoder and the decoder NN’s bias terms); and generating encoded data comprising at least an encoded quantized embedding, information representative of the selected subset of parameters and an encoded quantized parameter update (paragraphs 0197 and 0199, in case the OPs were some of the Decoder NN’s parameters, the encoded latent tensor and the overfitted decoder’s parameters would be both part of the encoded data to be sent to the decoder-side, where the overfitted decoder’s parameters may comprise one or more of the following possible options: the overfitted decoder’s parameters, a compressed version of the overfitted decoder’s parameters, an update to the decoder’s parameters, a compressed version of an update to the decoder’s parameters, a cluster label, a compressed version of cluster label; compression of the decoder’s parameters or of an update to the decoder’s parameters or of cluster labels may comprise one or more of sparsification, quantization and entropy encoding… the decoder Overfitting Parameters was compressed, such as quantized and entropy encoded; see also FIG. 8 or FIG. 10).

As per claim 5, CRICRÌ discloses the method of claim 1, wherein the fine-tuning is based on a loss function to minimize a measure of a distortion between the input image and an image reconstructed using a deep neural network based on the second neural network model updated with one or more updated parameters (paragraph 0149, the optimization may consist of using the output of the encoder and decoder for computing a loss function similar to the loss function used during training, and then differentiating it with respect to the parameters to be optimized, for example, one or more of the three above mentioned options; this optimization may be referred to also as finetuning or overfitting, in various embodiments).
As per claim 6, CRICRÌ discloses the method of claim 1, wherein the selected subset of parameters is selected from among a set comprising a bias, a weight, one or more parameters of a non-linear function of a model, a subset of layers of the model, a specific layer of the model, a bias of a specific layer of the model, and a subset of neurons of the model (paragraphs 0171-0174; see also paragraphs 0145-0148).

As per claim 7, CRICRÌ discloses a method for decoding an image represented by encoded data, the method comprising: obtaining a decoded embedding, information representative of a selected subset of parameters of a model of a deep neural network (paragraph 0199, at decoder side, in case the OPs were some of the Decoder NN’s parameters, the decoder device would use the set or version of Overfitting Parameters which was signaled by the encoder device… if the signaling comprises a cluster label, the decoder device selects the set or version of overfitted Overfitting Parameters that corresponds to the signaled cluster label, for example by using a look-up table that maps cluster labels to sets or versions of overfitted Overfitting Parameters… if the signaling information about the decoder Overfitting Parameters was compressed, such as quantized and entropy encoded, the decoder device may need to first decompress the information, such as entropy decode and dequantize the received data) and a decoded parameters update from the encoded data (paragraph 0199, at decoder side, in case the OPs were some of the Decoder NN’s parameters, the decoder device would use the set or version of Overfitting Parameters which was signaled by the encoder device; if the signaling comprises the overfitted decoder’s parameters, then these parameters are used within the Decoder NN; if the signaling comprises an update to the decoder’s parameters, then this update is applied to the Decoder NN’s parameters… if the signaling information about the decoder Overfitting Parameters was compressed, such as quantized and entropy encoded, the decoder device may need to first decompress the information, such as entropy decode and dequantize the received data); selecting the selected subset of parameters based on the information (paragraph 0199, the decoder device selects the set or version of overfitted Overfitting Parameters that corresponds to the signaled cluster label, for example by using a look-up table that maps cluster labels to sets or versions of overfitted Overfitting Parameters); updating the selected subset based on the decoded parameters update (paragraphs 0200-0204, during development or training phase, after having selected the Overfitting Parameters, the neural networks of the codec are trained by using the proposed meta-learning procedure as part of the training procedure; the proposed training procedure may be used for training a randomly initialized neural network, or for training a neural network which has been previously trained by using any suitable training method, such as a more conventional training method that does not utilize meta-learning; in an embodiment, the training procedure may comprise the following two nested loops:… to this end, first perform a forward operation on the neural networks, compute a loss, which can be a sum of multiple losses, as already described, then compute derivatives of the loss with respect to the selected Overfitting Parameters, and then update the OPs by using the computed derivatives; as a result of the overfitting stage, a model with updated parameters, due to the overfitting of the Overfitting Parameters, is obtained); and determining, using the deep neural network with the updated parameters (paragraphs 0200-0204, during development or training phase… and then update the OPs by using the computed derivatives; as a result of the overfitting stage, a model with updated parameters, due to the overfitting of the Overfitting Parameters, is obtained), a decoded image based on the obtained decoded embedding (paragraph 0144, the phase when a codec or neural network are deployed and used is referred to as inference phase or test phase in various embodiments; normally, at inference phase (for example, once the model is trained), the input image is passed through the encoding stages, for example, an encoder NN, a quantizer, and an arithmetic encoder, to obtain a bitstream and send it to the decoder side for decoding).

As per claim 8, arguments analogous to those applied for claim 6 are applicable for claim 8.

As per claims 9 and 13-14, arguments analogous to those applied for claims 1 and 5-6 are applicable for claims 9 and 13-14; in addition, CRICRÌ discloses using an encoder for encoding an input image (see FIG. 8 or FIG. 10).

As per claims 15-16, arguments analogous to those applied for claims 7-8 are applicable for claims 15-16; in addition, CRICRÌ discloses using a decoder for decoding an image (see FIG. 8 or FIG. 10).

As per claims 18-19, arguments analogous to those applied for claims 1 and 7 are applicable for claims 18-19; in addition, CRICRÌ discloses using a non-transitory computer readable medium comprising program code instructions for implementing the encoding and decoding methods (paragraph 0337).

Claim Rejections - 35 USC § 103

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

8. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
9. Claim(s) 4 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over CRICRÌ et al. (US 2023/0269387) in view of BESENBRUCH et al. (US 2022/0272345), hereinafter “BESENBRUCH”.

As per claim 4, CRICRÌ discloses the method of claim 1, further comprising quantizing the parameters update based on a … quantization with one or more quantization parameters (paragraph 0182, a quantization operation, which is applied on the updates to the overfitted Decoder NN’s parameters… in case of quantization, a differentiable approximation of a quantizer may be used, such as a quantizer based on a Straight-Through Estimator), and wherein the encoded data further comprises information representative of the one or more quantization parameters (paragraph 0111). However, CRICRÌ does not explicitly disclose quantizing the parameters update based on a trained quantization. In the same field of endeavor, BESENBRUCH discloses quantizing the parameters update based on a trained quantization (paragraphs 0259-0260, the quantized latent is calculated using a training quantisation function). Therefore, it would have been obvious for one having skill in the art before the effective filing date of the claimed invention to modify the teachings of CRICRÌ in view of BESENBRUCH by using a trained quantization, as converging to a target rate on a training set of images reasonably guarantees that the same constraint will be satisfied on a test set of images (BESENBRUCH, paragraphs 0259-0260).

As per claim 12, arguments analogous to those applied for claim 4 are applicable for claim 12.

10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US20250045973, US20250211756, US20240296594, US20220279183, US20210281867, US20220224926, US20220394288, US20230325639, US20230154055, US20200034709, US10594338.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED JEBARI, whose telephone number is (571) 270-7945. The examiner can normally be reached Mon-Fri, 9:00am-6:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMED JEBARI/
Primary Examiner, Art Unit 2482
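For readers less familiar with learned image codecs, the technique the examiner is mapping between the claims and CRICRÌ can be made concrete. Below is a minimal PyTorch-style sketch (requires PyTorch 2.x for torch.func.functional_call). It is entirely our illustration, not the applicant's or CRICRÌ's code; TinyDecoder, ste_quantize, and all tensor shapes are assumptions. It overfits only a selected subset of decoder parameters (here, the biases, one of the subsets recited in claim 6) to a single input image, keeps the quantizer inside the optimization loop in the spirit of the "trained quantization" at issue in claim 4, and emits the quantized parameter update that claim 1's bitstream would carry.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class TinyDecoder(nn.Module):
    """Stand-in decoder NN; real codec decoders are far larger."""
    def __init__(self):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1)
        self.up2 = nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1)

    def forward(self, z):
        return self.up2(F.relu(self.up1(z)))

def ste_quantize(x, step=0.01):
    """Uniform quantizer with a straight-through estimator:
    round in the forward pass, identity gradient in the backward pass."""
    q = torch.round(x / step) * step
    return x + (q - x).detach()

decoder = TinyDecoder()
image = torch.rand(1, 3, 32, 32)     # the single input image being overfitted
embedding = torch.randn(1, 8, 8, 8)  # embedding/latent from an encoder (assumed given)

# Freeze the base (first) model and select the subset to update: biases only.
base = {name: p.detach() for name, p in decoder.named_parameters()}
deltas = {name: torch.zeros_like(p, requires_grad=True)
          for name, p in base.items() if name.endswith("bias")}
opt = torch.optim.Adam(deltas.values(), lr=1e-2)

for _ in range(200):
    # Decode with base parameters plus the STE-quantized update, so the
    # quantizer participates in the fine-tuning ("trained quantization").
    params = {name: p + ste_quantize(deltas[name]) if name in deltas else p
              for name, p in base.items()}
    recon = functional_call(decoder, params, (embedding,))
    loss = F.mse_loss(recon, image)  # distortion term of a rate-distortion loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Bitstream payload: which parameters were selected, plus their quantized update.
payload = {name: ste_quantize(d.detach()) for name, d in deltas.items()}

The decoder side (claim 7) would mirror this by adding the signaled, dequantized update to its own copy of the base parameters before decoding the embedding.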

Prosecution Timeline

Dec 23, 2024: Application Filed
Feb 11, 2026: Non-Final Rejection, §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598337: DYNAMIC AIRPLANE VIDEO-ON-DEMAND BANDWIDTH MANAGEMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593134: CYLINDRICAL PANORAMA HARDWARE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12584763: ENVIRONMENT MAP GENERATION PROGRAM AND THREE-DIMENSIONAL SENSOR CONTROL DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12574506: METHOD AND DEVICE FOR CODING IMAGE ON BASIS OF INTER PREDICTION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12568208: IMAGE AND VIDEO CODING USING MACHINE LEARNING PREDICTION CODING MODELS (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 55%
With Interview: 71% (+16.4%)
Median Time to Grant: 3y 9m
PTA Risk: Low
Based on 487 resolved cases by this examiner. Grant probability derived from career allow rate.
