Prosecution Insights
Last updated: April 19, 2026
Application No. 18/838,532

REDUCING THE AMORTIZATION GAP IN END-TO-END MACHINE LEARNING IMAGE COMPRESSION

Final Rejection — §102, §103
Filed: Aug 14, 2024
Examiner: LIMA, FABIO S
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: InterDigital CE Patent Holdings SAS
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 1m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 77% (319 granted / 415 resolved), +18.9% vs TC avg (above average)
Interview Lift: +14.8% across resolved cases with interview (moderate, roughly +15%)
Avg Prosecution: 2y 1m (fast prosecutor); 32 applications currently pending
Career History: 447 total applications across all art units

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§112: 19.7% (-20.3% vs TC avg)

Comparisons are against an estimated Tech Center average; based on career data from 415 resolved cases.

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 32-49 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 32, 38, and 44 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Toma et al. (US20100040351A1), hereinafter referred to as Toma.

Regarding claim 32, Toma discloses a device for video decoding, comprising: a processor configured to (¶ [0511], disclosing that the decoder may be implemented by a processor): obtain a first entropy model indication for decoding a first picture and a second entropy model indication for decoding a second picture (Toma, ¶¶ [0034], [0035], [0195] and FIG. 4 disclose that the entropy coding mode is determined on a picture basis); obtain a first entropy model based on the first entropy model indication and a second entropy model based on the second entropy model indication (Toma, ¶ [0195] discloses obtaining the entropy model based on the indication); and decode the first picture based on the first entropy model and the second picture based on the second entropy model (Toma, ¶ [0196] discloses decoding the video data using the determined entropy model processes).

Regarding claim 38, this claim is rejected based on the same art and evidentiary limitations applied to the device of claim 32, since it claims analogous subject matter in the form of a method for performing the same or equivalent functionality. The Examiner notes that it is well known in the art that video compression involves a complementary pair of systems: a compressor (encoder) and a decompressor (decoder). The encoder converts the source data into a compressed form occupying a reduced number of bits prior to transmission or storage, while the decoder converts the compressed form back into a representation of the original video data by performing a reciprocal process, decoding the encoded video data from the bitstream.

Regarding claim 44, this claim is rejected based on the same art and evidentiary limitations applied to the device for video decoding of claim 32, since it claims analogous subject matter in the form of a device for video encoding for performing the same or equivalent functionality.
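To make the claim 32 mapping concrete, here is a minimal sketch of the per-picture entropy-model selection this rejection reads onto Toma; every name below is hypothetical, and the stubbed decoding is illustrative rather than code from the application or the reference.

```python
from dataclasses import dataclass

@dataclass
class Picture:
    payload: bytes
    entropy_model_indication: int  # signaled per picture in the bitstream

# Hypothetical registry mapping an indication to an entropy model.
ENTROPY_MODELS = {
    0: "variable-length mode",
    1: "arithmetic-coding mode",
}

def decode_sequence(pictures):
    decoded = []
    for pic in pictures:
        # Obtain the entropy model from the per-picture indication ...
        model = ENTROPY_MODELS[pic.entropy_model_indication]
        # ... then decode the picture with that model (decoding stubbed out).
        decoded.append((model, len(pic.payload)))
    return decoded

print(decode_sequence([Picture(b"\x01\x02", 0), Picture(b"\x03", 1)]))
```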
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 35, 37, 41 and 43 are rejected under 35 U.S.C. 103 as being unpatentable over Toma, in view of Schierl et al. (US20200221105A1), hereinafter referred to as Schierl.

Regarding claim 35, Toma discloses all the limitations of claim 32, analyzed as previously discussed with respect to that claim. Toma does not explicitly disclose the device of claim 32, wherein the processor is further configured to: based on at least one of the first entropy model indication or the second entropy model indication indicating to use a prior entropy model for decoding at least one of the first picture or the second picture, obtain the prior entropy model, wherein at least one of the first picture or the second picture is decoded based on the prior entropy model.

However, Schierl, from the same or similar endeavor of data coding, discloses this limitation (Schierl, claim 1: initialize the symbol probability based on a saved symbol probability as acquired in context adaptive entropy decoding a previously decoded portion up to a previous coding block of the picture associated with the previously decoded portion; see also Abstract and ¶ [0012]). It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Toma to add the teachings of Schierl as above, in order to improve coding efficiency by propagating probability statistics across picture or partition boundaries (Schierl, ¶ [0012]).

Regarding claim 37, Toma discloses all the limitations of claim 32, analyzed as previously discussed with respect to that claim. Toma does not explicitly disclose the device of claim 32, wherein the processor is further configured to: based on at least one of the first entropy model indication or the second entropy model indication indicating to use a prior entropy model for decoding at least one of the first picture or the second picture, obtain previous entropy model parameters associated with a previous picture, wherein at least one of the first picture or the second picture is decoded based on the previous entropy model parameters associated with the previous picture.
However, Schierl, from the same or similar endeavor of data coding, discloses this limitation (Schierl, claim 1: initialize the symbol probability based on a saved symbol probability as acquired in context adaptive entropy decoding a previously decoded portion up to a previous coding block of the picture associated with the previously decoded portion; see also ¶¶ [0012] and [0125]). The motivation for combining Toma and Schierl has been discussed in connection with claim 35, above.

Regarding claim 41, this claim is rejected based on the same art and evidentiary limitations applied to the device for video decoding of claim 35, since it claims analogous subject matter in the form of a device for video encoding for performing the same or equivalent functionality.

Regarding claim 43, this claim is rejected based on the same art and evidentiary limitations applied to the device for video decoding of claim 37, since it claims analogous subject matter in the form of a device for video encoding for performing the same or equivalent functionality.
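A minimal sketch of the probability carry-over the Schierl citations describe: initializing a picture's symbol probabilities from state saved while decoding a previous picture, rather than resetting to defaults at every picture boundary. Names and the toy update rule are hypothetical.

```python
# Hypothetical context states; real codecs track many more contexts.
DEFAULT_PROBS = {"ctx0": 0.5, "ctx1": 0.5}  # fresh per-picture initialization

def init_entropy_state(indication, saved_state):
    if indication == "prior" and saved_state is not None:
        # Propagate probability statistics across the picture boundary.
        return dict(saved_state)
    return dict(DEFAULT_PROBS)

def decode_picture(probs):
    # Stub: real context-adaptive decoding adapts probs symbol by symbol.
    probs["ctx0"] = min(0.9, probs["ctx0"] + 0.1)
    return probs

saved = None
for indication in ("default", "prior", "prior"):
    state = init_entropy_state(indication, saved)
    saved = decode_picture(state)  # save adapted state for the next picture
    print(indication, saved)
```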
Claims 33, 34, 36, 39, 40, 42, and 45-49 are rejected under 35 U.S.C. 103 as being unpatentable over Toma, in view of Minnen et al. (US20200027247A1), hereinafter referred to as Minnen.

Regarding claim 33, Toma discloses all the limitations of claim 32, analyzed as previously discussed with respect to that claim. Toma does not explicitly disclose the device of claim 32, wherein the processor is further configured to: based on at least one of the first entropy model indication or the second entropy model indication indicating to use a reparametrized entropy model for decoding at least one of the first picture or the second picture, obtain the reparametrized entropy model, wherein at least one of the first picture or the second picture is decoded based on the reparametrized entropy model.

However, Minnen, from the same or similar endeavor of data coding, discloses this limitation (Minnen, ¶¶ [0005]-[0008] and [0037]: an encoder neural network generates a latent representation of the data; the latent representation is processed using a hyper-encoder neural network to generate a latent representation of an entropy model (e.g., a reparametrized entropy model), where the entropy model is defined by one or more probability distribution parameters characterizing one or more code symbol probability distributions). It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Toma to add the teachings of Minnen as above, in order to compress input data using a conditional entropy model that is determined based on the input data (using neural networks), rather than using, e.g., a static, predetermined entropy model. Determining the entropy model based on the input data enables the entropy model to be richer and more accurate, e.g., by capturing spatial dependencies in the input data, and thereby enables the input data to be compressed at a higher rate than can be achieved in some conventional systems (Minnen, ¶ [0045]).

Regarding claim 34, Toma discloses all the limitations of claim 32, analyzed as previously discussed with respect to that claim. Toma does not explicitly disclose the device of claim 32, wherein the processor is further configured to: based on at least one of the first entropy model indication or the second entropy model indication indicating to use a reparametrized entropy model for decoding at least one of the first picture or the second picture, obtain at least one updated entropy model parameter associated with the reparametrized entropy model, wherein at least one of the first picture or the second picture is decoded based on the at least one updated entropy model parameter associated with the reparametrized entropy model.

However, Minnen, from the same or similar endeavor of data compression, discloses this limitation (Minnen, ¶¶ [0008]-[0011] and [0037]: the latent representation of the entropy model is quantized, and the quantized latent representation is then processed to generate the probability distribution parameters; the Examiner treats these quantized parameters as the updated entropy model parameters). The motivation for combining Toma and Minnen has been discussed in connection with claim 33, above.
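A minimal sketch of the hyperprior idea behind these Minnen citations, with toy stand-ins for the neural networks (everything below is hypothetical): the entropy model's parameters are derived, quantized, and decoded from the latent itself, so the model is reparametrized per input rather than fixed.

```python
import math

def encoder(x):                  # toy stand-in for an encoder network
    return [v * 0.5 for v in x]  # "latent representation" y

def hyper_encoder(y):            # toy stand-in for a hyper-encoder network
    return [abs(v) + 0.1 for v in y]  # latent representation z of the model

def entropy_params(z_hat):       # derive distribution parameters from quantized z
    return [max(v, 0.1) for v in z_hat]  # e.g., per-element Laplace scales

x = [1.0, -2.0, 0.5]                              # toy input signal
y = encoder(x)                                    # latent representation of the data
z_hat = [round(v, 1) for v in hyper_encoder(y)]   # quantized hyper-latent
scales = entropy_params(z_hat)                    # reparametrized entropy model
# Rate estimate under the conditional model: -log2 p(y), toy Laplace density.
bits = sum(math.log2(2 * s) + abs(v) / (s * math.log(2))
           for v, s in zip(y, scales))
print(f"scales={scales}, approx bits={bits:.2f}")
```

The point the motivation statement makes is visible even in the toy: because the scales track the latent's own magnitudes, the model assigns shorter codes where the data is predictable, which a single static distribution could not do.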
Regarding claim 36, Toma discloses all the limitations of claim 32, analyzed as previously discussed with respect to that claim. Toma does not explicitly disclose the device of claim 32, wherein the processor is further configured to: based on at least one of the first entropy model indication or the second entropy model indication indicating to use a learned entropy model for decoding at least one of the first picture or the second picture, obtain the learned entropy model, wherein at least one of the first picture or the second picture is decoded based on the learned entropy model.

However, Minnen, from the same or similar endeavor of data compression, discloses this limitation (Minnen, ¶¶ [0005]-[0008]: the probability distribution parameters are generated by neural networks; the Examiner submits that because these parameters are generated by a neural network, the model is learned). The motivation for combining Toma and Minnen has been discussed in connection with claim 33, above.

Regarding claims 39, 40 and 42, these claims are rejected based on the same art and evidentiary limitations applied to the device of claims 33, 34 and 36, since they claim analogous subject matter in the form of a method for performing the same or equivalent functionality. The Examiner again notes the complementary encoder/decoder relationship discussed in connection with claim 38, above.

Regarding claims 45, 46 and 49, these claims are rejected based on the same art and evidentiary limitations applied to the device for video decoding of claims 33, 34 and 36, since they claim analogous subject matter in the form of a device for video encoding for performing the same or equivalent functionality.

Regarding claim 47, Toma discloses all the limitations of claim 44, analyzed as previously discussed with respect to that claim. Toma does not explicitly disclose the device of claim 44, wherein the processor is further configured to: obtain a latent representation of at least one of the first picture or the second picture; derive a reparametrized entropy model based on the latent representation; and determine to use the reparametrized entropy model for encoding at least one of the first picture or the second picture.

However, Minnen, from the same or similar endeavor of data compression, discloses this limitation (Minnen, ¶¶ [0005]-[0008]: an encoder neural network generates a latent representation of the data, and the latent representation is processed using a hyper-encoder neural network to generate a latent representation of an entropy model, e.g., a reparametrized entropy model). The motivation for combining Toma and Minnen has been discussed in connection with claim 33, above.
Regarding claim 48, Toma discloses all the limitations of claim 44, analyzed as previously discussed with respect to that claim. Toma does not explicitly disclose the device of claim 44, wherein the processor is further configured to: obtain a latent representation of the current picture; derive a reparametrized entropy model based on the latent representation; and determine to use the reparametrized entropy model for encoding the current picture.

However, Minnen, from the same or similar endeavor of data compression, discloses this limitation (Minnen, ¶ [0096] and FIG. 6: performance gains that can be achieved by using the compression/decompression systems). The motivation for combining Toma and Minnen has been discussed in connection with claim 33, above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FABIO S LIMA, whose telephone number is (571) 270-0625. The examiner can normally be reached Monday - Friday, 8 am - 4 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jamie Atala, can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FABIO S LIMA/
Primary Examiner, Art Unit 2486

Prosecution Timeline

Aug 14, 2024
Application Filed
Aug 05, 2025
Non-Final Rejection — §102, §103
Nov 06, 2025
Applicant Interview (Telephonic)
Nov 06, 2025
Examiner Interview Summary
Nov 07, 2025
Response Filed
Jan 20, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604015
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
2y 5m to grant • Granted Apr 14, 2026
Patent 12593038
TEMPORAL PREDICTION OF PARAMETERS IN NON-LINEAR ADAPTIVE LOOP FILTER
2y 5m to grant • Granted Mar 31, 2026
Patent 12593045
ENTROPY CODING-BASED FEATURE ENCODING/DECODING METHOD AND DEVICE, RECORDING MEDIUM HAVING BITSTREAM STORED THEREIN, AND METHOD FOR TRANSMITTING BITSTREAM
2y 5m to grant • Granted Mar 31, 2026
Patent 12581099
INFORMATION PROCESSING DEVICE AND METHOD
2y 5m to grant • Granted Mar 17, 2026
Patent 12581094
IMAGE SIGNAL ENCODING/DECODING METHOD AND DEVICE THEREFOR
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 92% (+14.8%)
Median Time to Grant: 2y 1m
PTA Risk: Moderate

Based on 415 resolved cases by this examiner. Grant probability is derived from the career allow rate.
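A quick check of how these figures fit together, assuming the stated methodology (grant probability equals the career allow rate, and the interview figure simply adds the observed lift):

```python
# Back-of-envelope reconstruction of the dashboard numbers.
granted, resolved = 319, 415
base = granted / resolved      # 0.769 -> displayed as 77%
with_interview = base + 0.148  # adds the +14.8% interview lift -> ~91.7%, shown as 92%
print(f"base={base:.1%}, with interview={with_interview:.1%}")
```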
