Prosecution Insights
Last updated: April 19, 2026
Application No. 18/463,051

METHOD, APPARATUS AND STORAGE MEDIUM FOR IMAGE ENCODING/DECODING

Non-Final OA: §102, §103
Filed
Sep 07, 2023
Examiner
DUONG, JOHNNY KHOI BAO
Art Unit
2667
Tech Center
2600 — Communications
Assignee
ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round
1 (Non-Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 66% (37 granted / 56 resolved; +4.1% vs TC avg, above average)
Interview Lift: +32.8% among resolved cases with interview
Typical Timeline: 3y 8m avg prosecution; 10 applications currently pending
Career History: 66 total applications across all art units

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§102: 36.3% (-3.7% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 56 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. KR10-2023-0118490, filed on 09/06/2023, as well as Application No. KR10-2022-0114099, filed on 09/08/2022.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 09/07/2023 was filed and is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 6-9, 13-16, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cui ("Asymmetric Gained Deep Image Compression With Continuous Rate Adaptation", Aug 2022).
Regarding claims 1, 8, and 15, Cui teaches a method for image encoding (Cui, pg 1, column 2, first full paragraph: the "novel image compression framework, AG-VAE" is being interpreted as involving image encoding [for claim 1] and decoding [for claims 8 and 15]), comprising:

generating a latent representation using an input image (Cui, pg 3, column 1, Section 3.1, ¶1: "We taken one image as input of the encoder to obtain its latent representation");

generating a quantized latent representation (Cui, pg 3, column 1, Section 3.1, ¶2-3: the "quantization loss of the latent representation" is being interpreted as involving "generating a quantized latent representation") by performing adaptive quantization on the latent representation (Cui, pg 3, column 1, Section 3.1, ¶2-3: "scale the latent representation flexibly" is being interpreted as involving "adaptive quantization on the latent representation");

deriving a set of selected elements of the quantized latent representation (Cui, Section 3.1, ¶1: "We can conclude that the channels' importance varies and can be scaled to control the reconstruction quality". "Importance varies" shows a set of selected elements, the importance values; combined with "scale the latent representation flexibly" from ¶2 and the "quantization loss" from ¶3, this shows the "quantized latent representation"); and

generating encoded information of the selected elements by performing entropy encoding on the set of the selected elements (Cui, pg 1, column 2, last paragraph to pg 2, lines 1-4: the "asymmetric Gaussian entropy model" is being interpreted as involving entropy encoding.
The Cui framework performs both encoding and decoding, as seen in the Figure 1 text. Further, Section 3.5 shows the entropy model generates encoded information ["estimate the distribution of the latent representation"]).

Regarding claim 2, Cui teaches the method of claim 1, wherein the quantized latent representation is generated for a specific target quality level (Cui, pg 3, Section 3.1, ¶1: "can be scaled to control the reconstruction quality" is being interpreted to involve "a specific target quality level").

Regarding claim 6, Cui teaches the method of claim 1, wherein the encoded information of the selected elements is generated using a parameter for a specific target quality level (Cui, Section 3.1, ¶1: "can be scaled to control the reconstruction quality" is being interpreted as "a specific target quality level"; the "channels' importance varies" is being interpreted as "a parameter").

Regarding claim 7, Cui teaches the method of claim 6, wherein the parameter includes a scale parameter for the specific target quality level (Cui, Section 3.1, ¶1: "can be scaled to control the reconstruction quality" is being interpreted as "a specific target quality level"; the "channels' importance varies" is being interpreted as "a parameter" that can be scaled) or a mean parameter for the specific target quality level.

Claims 9 and 16 are rejected using the same rationale as applied to claim 2 discussed above. Claims 13 and 19 are rejected using the same rationale as applied to claim 6 discussed above. Claims 14 and 20 are rejected using the same rationale as applied to claim 7 discussed above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Cui, in view of Li ("Learning Convolutional Networks for Content-weighted Image Compression", 2017).

Regarding claim 3, Cui teaches the method of claim 1, wherein the set of the selected elements (Cui, Section 3.1, ¶1: "We can conclude that the channels' importance varies and can be scaled to control the reconstruction quality"; "importance varies" shows a set of selected elements, the importance values, and combined with "scale the latent representation flexibly" from ¶2 and the "quantization loss" from ¶3, this shows the "quantized latent representation"). However, Cui does not appear to explicitly teach a 3D binary mask, though Cui does teach importance values, as the Li reference does. Pertaining to the same field of endeavor, Li teaches the set is determined using a 3D binary mask (Li, pg 4, column 2, full paragraphs 1-3: the "importance mask" with three dimensions, n x h x w, is being interpreted as involving a "3D binary mask"; Equation 5 shows the binary nature of the importance mask). Cui and Li are considered to be analogous art because they are directed to learned image compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system for image compression with importance values (as taught by Cui) to include a 3D binary mask (as taught by Li), because the combination provides an improvement to image compression (Li, abstract). Further, Cui teaches a 3D latent representation (Cui, Section 3.1, line 5).

Claims 10 and 17 are rejected using the same rationale as applied to claim 3 discussed above.

Claims 4, 5, 11, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Cui, as modified by Li, in view of Balle ("Variational image compression with a scale hyperprior", 2018).

Regarding claim 4, Cui teaches the method of claim 3, wherein the 3D binary mask is generated (Li, pg 4, column 2, full paragraphs 1-3: the "importance mask" with three dimensions, n x h x w, is being interpreted as involving a "3D binary mask"; Equation 5 shows the binary nature of the importance mask). However, Cui and Li do not appear to specifically teach a hyper-decoder. Pertaining to the same field of endeavor, Balle teaches using output of a specific layer of a hyper-decoder (Balle, pg 6, Figure 4: the "arithmetic decoder" is being interpreted as the hyper-decoder, since the right side of Figure 4 is the hyperprior model; the hyperprior model is being interpreted as outputting, note the z leading to the Q out of the specific layer from the hyperprior model at the end). Cui, Li, and Balle are considered to be analogous art because they are directed to learned image compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system for image compression with importance values, modified to have a 3D binary mask (as taught by Cui and Li), to include an output of a specific layer of a hyper-decoder (as taught by Balle), because the combination provides an improvement to image compression (Balle, abstract).

Regarding claim 5, Balle teaches the method of claim 4, wherein a hyperprior is input to the hyper-decoder (Balle, pg 6, Figure 4: the "arithmetic decoder" is being interpreted as the hyper-decoder, since the right side of Figure 4 is the hyperprior model; the hyperprior is input into the hyper-decoder on the right side). Cui, Li, and Balle are considered to be analogous art because they are directed to learned image compression. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system for image compression with importance values, modified to have a 3D binary mask (as taught by Cui and Li), to include the hyperprior as input to the hyper-decoder (as taught by Balle), because the combination provides an improvement to image compression (Balle, abstract).

Claims 11 and 18 are rejected using the same rationale as applied to claim 4 discussed above. Claim 12 is rejected using the same rationale as applied to claim 5 discussed above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY B DUONG, whose telephone number is (571) 272-1358. The examiner can normally be reached Monday-Thursday, 10 a.m.-9 p.m. (ET). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in the USA or Canada) or 571-272-1000.

/J.B.D./ Examiner, Art Unit 2667
/MATTHEW C BELLA/ Supervisory Patent Examiner, Art Unit 2667

Prosecution Timeline

Sep 07, 2023
Application Filed
Mar 18, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586187: LESION LINKING USING ADAPTIVE SEARCH AND A SYSTEM FOR IMPLEMENTING THE SAME
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12525024: ELECTRONIC DEVICE, METHOD, AND COMPUTER READABLE STORAGE MEDIUM FOR DETECTION OF VEHICLE APPEARANCE
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12518510: MACHINE LEARNING FOR VECTOR MAP GENERATION
Granted Jan 06, 2026 (2y 5m to grant)
Patent 12498556: Microscopy System and Method for Evaluating Image Processing Results
Granted Dec 16, 2025 (2y 5m to grant)
Patent 12488438: DEEP LEARNING-BASED IMAGE QUALITY ENHANCEMENT OF THREE-DIMENSIONAL ANATOMY SCAN IMAGES
Granted Dec 02, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 99% (+32.8%)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 56 resolved cases by this examiner. Grant probability is derived from the career allow rate.
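As a sanity check on the headline figures, the dashboard's numbers are consistent with simple arithmetic on the examiner's career record: the allow rate is granted over resolved, and the with-interview figure appears to be the baseline plus the interview lift. This is a minimal sketch of that reading, inferred from the displayed values rather than from the vendor's actual model:

```python
# Reproduce the dashboard's headline figures from the underlying counts.
# Assumption: the with-interview probability is baseline + lift (additive),
# which matches the displayed 66% + 32.8% -> 99%.

granted, resolved = 37, 56               # examiner's career record
allow_rate = granted / resolved          # 0.6607... -> displayed as 66%

interview_lift = 0.328                   # +32.8% interview lift
with_interview = allow_rate + interview_lift  # 0.9887... -> displayed as 99%

print(round(allow_rate * 100))           # 66
print(round(with_interview * 100))       # 99
```

If the lift were instead multiplicative or modeled on the interview subset only, the resulting figure would differ; the additive reading is simply the one that matches the rounded numbers shown above.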
