Prosecution Insights
Last updated: April 19, 2026
Application No. 18/350,897

VIDEO ENCODER AND VIDEO ENCODING METHOD WITH REAL-TIME QUALITY ESTIMATION

Status: Final Rejection (§103)
Filed: Jul 12, 2023
Examiner: CARTER, RICHARD BRUCE
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: MediaTek Inc.
OA Round: 4 (Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 3y 1m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 64% of resolved cases (290 granted / 453 resolved; +6.0% vs TC avg)
Interview Lift: +20.9% for resolved cases with interview (a strong lift of roughly +21%)
Typical Timeline: 3y 1m average prosecution; 12 applications currently pending
Career History: 465 total applications across all art units
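
As a quick consistency check, the headline figures above can be recomputed from the raw counts. The snippet below is illustrative only; the additive treatment of the interview lift is an assumption about how the dashboard combines its numbers, not a documented methodology.

```python
# Illustrative recomputation of the examiner stats shown above.
# Assumption: the interview lift is applied additively, in percentage points.

granted, resolved, total_filed = 290, 453, 465   # counts from the panel above

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")                           # ~64.0%

interview_lift = 0.209                                                          # +20.9 points
print(f"Allow rate with interview: {career_allow_rate + interview_lift:.0%}")   # ~85%

print(f"Currently pending: {total_filed - resolved}")                           # 12
```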

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 60.3% (+20.3% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 453 resolved cases.
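
Each "vs TC avg" delta implies a Tech Center baseline for that statute. The snippet below simply back-calculates those baselines from the displayed figures; what each per-statute rate actually measures (for example, the share of this examiner's rejections raising that statute) is not stated on this page, so treat this as arithmetic on the displayed numbers only.

```python
# Back-calculate the implied Tech Center baseline for each statute.
# Rates and deltas are copied from the panel above; nothing else is known
# about how they are defined.

examiner_rate = {"101": 6.1, "103": 60.3, "102": 8.2, "112": 11.0}   # percent
delta_vs_tc   = {"101": -33.9, "103": 20.3, "102": -31.8, "112": -29.0}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}% "
          f"({delta_vs_tc[statute]:+.1f} pts)")
```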

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

2. The applicant's amendment received on 09/23/2025, in which claims 1, 9, 11, and 19 were AMENDED, claims 2, 12-13, 15, and 17 CANCELED, and claims 21-22 NEWLY ADDED, has been fully considered and entered, but the arguments are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1, 3-11, 14, 16, and 18-22 are rejected under 35 U.S.C. 103 as being unpatentable over Mcallister (“Mcallister”) (US Pub. No.: 2018/0098083 A1) in view of Fu et al. (“Fu”) (US Pub. No.: 2010/0074330 A1), and further in view of Hayashi (“Hayashi”) (US Pub. No.: 2005/0069214 A1).

In regards to claims [1] and [11], Mcallister discloses a video encoder (see fig. 4) and video encoding method with real-time quality estimation (see fig. 10), comprising: a video compressor (see fig. 4), receiving source data of a video (see fig. 4, e.g., “input video data”) to generate compressed data (see fig. 4, e.g., “compressed output bit-stream”); a video reconstructor (see fig. 4 unit 450, paragraph [0046], e.g., “decoding loop 450 = video reconstructor”), coupled to the video compressor (see fig. 4 unit 400 or unit 414), to generate playback-level data (see fig. 4, paragraph [0055], e.g., “quality/playback-level data = in unit 442 utilizes Lagrangian principle to determine R&D resultant bits R and distortion D output data based on quantization parameter QP”) that is buffered for inter-prediction (see fig. 4 unit 434, e.g., “motion compensation performs this function”) by the video compressor (see fig. 4 unit 400 or unit 414), wherein the video reconstructor (see fig. 4 unit 450) generates intermediate data (see fig. 4 unit 418 and/or unit 420, e.g., “inverse quantize or inverse transform data”); a quality estimator (see fig. 4 unit 454), coupled to the video reconstructor (see fig. 4 unit 450 and/or unit 426) to receive the intermediate data (see fig. 4 unit 418 and/or unit 420, e.g., “inverse quantize or inverse transform data”), and used for performing quality estimation (see fig. 4 unit 454) based on the intermediate data (see fig. 4 unit 418 and/or unit 420, e.g., “inverse quantize or inverse transform data”); wherein the quality estimator (see fig. 4 unit 454) estimates quality distortion (see paragraphs [0061] and [0074], e.g., “per-frame parameter quality control unit 454 is capable to adjust algorithm complexity to lower/decrease/distort the quality of the image”); and an encoder top controller (see fig. 4 unit 416), adjusting at least one video compression factor (see fig. 4 unit 453, paragraphs [0022] and [0045]) in real time (see fig. 7, where the examiner notes that fig. 7 shows the change/adjustment of the video compression factor (e.g., “frame rate FPS”) over time, from peak point 702 continuously descending to a sharp drop at updated point 704) based on the quality estimation result (see paragraph [0083]) from the quality estimator (see fig. 4 unit 454).

Yet, Mcallister is not very clear in disclosing wherein the video reconstructor comprises: an inverse quantization unit, coupled to a quantization unit of the video compressor for performing inverse quantization to generate inverse-quantized data; an inverse-transform unit, performing an inverse transform on the inverse-quantized data to generate inverse-transformed data; a reconstruction unit, receiving prediction data obtained from the video compressor as well as the inverse-transformed data, to generate reconstructed data for intra-prediction by the video compressor; and an in-loop filter with at least one stage, processing the reconstructed data to generate the playback-level data, as specified in the amended claim.

However, in the same field of endeavor, Fu teaches the well-known concept wherein the video reconstructor (see fig. 5) comprises: an inverse quantization unit (see fig. 5 unit 545, e.g., “IQ = inverse quantization”), coupled to a quantization unit (see fig. 5 unit 540, e.g., “Q = quantization”) of the video compressor (see fig. 5 unit 500) for performing inverse quantization to generate inverse-quantized data (see fig. 5 unit 545, e.g., “IQ”); an inverse-transform unit (see fig. 5 unit 545, e.g., “IT = inverse transform”), performing an inverse transform (see fig. 5 unit 545, e.g., “IT = inverse transform”) on the inverse-quantized data (see fig. 5 unit 545, e.g., “IQ = inverse quantization”) to generate inverse-transformed data (see fig. 5 unit 545, e.g., “IT = inverse transform”); a reconstruction unit (see fig. 5 unit 535), receiving prediction data (see fig. 5 unit 505, e.g., “prediction”) obtained from the video compressor (see fig. 5 unit 500) as well as the inverse-transformed data (see fig. 5 unit 545, e.g., “IT = inverse transform”), to generate reconstructed data (see fig. 5 unit 535) for intra-prediction (see fig. 5 unit 505) by the video compressor (see fig. 5 unit 500); and an in-loop filter (see fig. 5 unit 530 or unit 515) with at least one stage (see fig. 5), processing (see fig. 5) the reconstructed data (see fig. 5 unit 535) to generate the playback-level data (see fig. 5 unit 525, paragraphs [0033] and [0045], where the examiner notes that the filter parameter estimator 525 comprises a rate-distortion determination unit for performing a rate-distortion criterion (e.g., “quality/playback-level data”) of the coding performance at frame level).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, who would have recognized the advantage of modifying the proposed teachings of Mcallister above by incorporating the proposed teachings of Fu above, to perform such a modification to provide a method and apparatus for video processing wherein the video reconstructor comprises: an inverse quantization unit, coupled to a quantization unit of the video compressor for performing inverse quantization to generate inverse-quantized data; an inverse-transform unit, performing an inverse transform on the inverse-quantized data to generate inverse-transformed data; a reconstruction unit, receiving prediction data obtained from the video compressor as well as the inverse-transformed data, to generate reconstructed data for intra-prediction by the video compressor; and an in-loop filter with at least one stage, processing the reconstructed data to generate the playback-level data; wherein the intermediate data is one of: the inverse-quantized data, the inverse-transformed data, the reconstructed data, and data that has not passed through the final stage of the in-loop filter with at least one stage; as well as to solve the problem of reducing the error between an original signal and a noisy signal (a signal with certain errors inherent as a result of the coding process), as taught by Fu et al. (see Fu, paragraph [0003]), thus improving compression efficiency.

Although Mcallister discloses a video encoder (see fig. 4) and video encoding method with real-time quality estimation (see fig. 10), comprising: a video compressor (see fig. 4), receiving source data of a video (see fig. 4, e.g., “input video data”) to generate compressed data (see fig. 4, e.g., “compressed output bit-stream”); a video reconstructor (see fig. 4 unit 450, paragraph [0046], e.g., “decoding loop 450 = video reconstructor”), coupled to the video compressor (see fig. 4 unit 400 or unit 414), to generate playback-level data (see fig. 4, paragraph [0055], e.g., “quality/playback-level data = in unit 442 utilizes Lagrangian principle to determine R&D resultant bits R and distortion D output data based on quantization parameter QP”) that is buffered for inter-prediction (see fig. 4 unit 434, e.g., “motion compensation performs this function”) by the video compressor (see fig. 4 unit 400 or unit 414), wherein the video reconstructor (see fig. 4 unit 450) generates intermediate data (see fig. 4 unit 418 and/or unit 420, e.g., “inverse quantize or inverse transform data”); a quality estimator (see fig. 4 unit 454), coupled to the video reconstructor (see fig. 4 unit 450 and/or unit 426) to receive the intermediate data (see fig. 4 unit 418 and/or unit 420, e.g., “inverse quantize or inverse transform data”), and used for performing quality estimation (see fig. 4 unit 454) based on the intermediate data (see fig. 4 unit 418 and/or unit 420, e.g., “inverse quantize or inverse transform data”); wherein the quality estimator (see fig. 4 unit 454) estimates quality distortion (see paragraphs [0061] and [0074], e.g., “per-frame parameter quality control unit 454 is capable to adjust algorithm complexity to lower/decrease/distort the quality of the image”); and an encoder top controller (see fig. 4 unit 416), adjusting at least one video compression factor (see fig. 4 unit 453, paragraphs [0022] and [0045]) in real time (see fig. 7) based on the quality estimation result (see paragraph [0083]) from the quality estimator (see fig. 4 unit 454), the combination of teachings of Mcallister and Fu fails to explicitly disclose wherein the quality estimator receives the intermediate data from one of the inverse quantization unit, the inverse-transform unit, or the reconstruction unit, and wherein the intermediate data is one of: the inverse-quantized data, the inverse-transformed data, or the reconstructed data, as specified in the amended claim.

However, in the same field of endeavor, Hayashi further teaches wherein the quality estimator (see fig. 1 unit 21) receives the intermediate data (see fig. 1, e.g., “quality calculation section 21 receives intermediate data 18 or 19”) from one of the inverse quantization unit (see fig. 1 unit 18), the inverse-transform unit (see fig. 1 unit 19), or the reconstruction unit (see fig. 1, e.g., “decompressed data”), and wherein the intermediate data is one of: the inverse-quantized data (see fig. 1 unit 18), the inverse-transformed data (see fig. 1 unit 19), or the reconstructed data (see fig. 1, e.g., “decompressed data”).

Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, who would have recognized the advantage of modifying the proposed combination of teachings of Mcallister and Fu above by further incorporating the proposed teachings of Hayashi above, to perform such a modification to provide a method and apparatus for image encoding wherein the quality estimator receives the intermediate data from one of the inverse quantization unit, the inverse-transform unit, or the reconstruction unit, and wherein the intermediate data is one of: the inverse-quantized data, the inverse-transformed data, or the reconstructed data; as well as to solve the problem that, where there is no concept of selecting a quantization table to create a smaller file size in spite of indistinguishable changes in the image quality, there is a problem in advancing effective utilization of the memory for image storage in this respect. Additionally, for a typical image with many low frequency components in the transform coefficients after DCT conversion, for example, an image photographed of a blue sky without clouds, the image quality remains basically unchanged even if the file size becomes larger, as taught by Fu et al. (see Fu, paragraph [0003]), thus enhancing image quality and improving compression efficiency even more.

As per claim [3], most of the limitations have been noted in the above rejection of claim 1. In addition, Mcallister discloses the video encoder with real-time quality estimation as claimed in claim 1 (see the above rejection of claim 1), wherein: the quality estimator (see fig. 4 unit 454) is coupled to an output port (see fig. 4) of the reconstruction unit (see fig. 4 unit 450 and/or unit 426), to receive (see fig. 4) the reconstructed data as the intermediate data (see fig. 4 unit 450 and/or unit 426).

As per claim [4], most of the limitations have been noted in the above rejection of claim 1. In addition, Mcallister discloses the video encoder with real-time quality estimation as claimed in claim 3 (see the above rejection of claim 3), wherein: the quality estimator (see fig. 4 unit 454) compares (see paragraphs [0022] and [0077]) the reconstructed data (see fig. 4 unit 450 and/or unit 426) with source data (see fig. 4, e.g., “input video data”) corresponding to the reconstructed data (see fig. 4 unit 450 and/or unit 426).

As per claim [5], most of the limitations have been noted in the above rejection of claim 1. In addition, Mcallister discloses the video encoder with real-time quality estimation as claimed in claim 1 (see the above rejection of claim 1), wherein: the quality estimator (see fig. 4 unit 454) is coupled to an output port of the inverse-transform unit (see fig. 4 unit 420), to receive the inverse-transformed data as the intermediate data (see fig. 4 unit 420).

As per claim [6], most of the limitations have been noted in the above rejection of claim 1. In addition, Mcallister discloses the video encoder with real-time quality estimation as claimed in claim 5 (see the above rejection of claim 5), wherein: the quality estimator (see fig. 4 unit 454) compares (see fig. 4 unit 424) the inverse-transformed data (see fig. 4 unit 420) with residual data (see fig. 4 unit 422) corresponding to the inverse-transformed data (see fig. 4 unit 420).

As per claim [7], most of the limitations have been noted in the above rejection of claim 1. In addition, Mcallister discloses the video encoder with real-time quality estimation as claimed in claim 1 (see the above rejection of claim 1), wherein: the quality estimator (see fig. 4 unit 454) is coupled to an output port (see fig. 4) of the inverse quantization unit (see fig. 4 unit 418), to receive (see fig. 4) the inverse-quantized data as the intermediate data (see fig. 4 unit 418).

As per claim [8], most of the limitations have been noted in the above rejection of claim 1. In addition, Mcallister discloses the video encoder with real-time quality estimation as claimed in claim 7 (see the above rejection of claim 7), wherein: the quality estimator (see fig. 4 unit 454) compares (see paragraphs [0022] and [0077]) the inverse-quantized data (see fig. 4 unit 418) with transformed (see fig. 4 unit 410) residual data (see fig. 4 unit 408) corresponding to the inverse-quantized data (see fig. 4 unit 418).

As per claim [9], most of the limitations have been noted in the above rejection of claim 1. In addition, Mcallister discloses the video encoder with real-time quality estimation as claimed in claim 21 (see the rejection of claim 21), wherein: the playback-level data (see fig. 4, paragraph [0055]) is output (see fig. 4) from the final stage (see paragraphs [0044] and [0056]) of the in-loop filter (see fig. 4 unit 450 and/or unit 428) with at least one stage (see paragraphs [0044] and [0056]); and the quality estimator (see fig. 4 unit 454) is coupled to an output port (see fig. 4) of any former stage (see paragraphs [0044] and [0056]) of the in-loop filter (see fig. 4 unit 450 and/or unit 428) with at least one stage (see fig. 4, paragraphs [0056] and [0109]) to receive (see fig. 4) the intermediate data (see fig. 4 unit 450 and/or unit 426).

As per claim [10], most of the limitations have been noted in the above rejection of claim 1. In addition, Mcallister discloses the video encoder with real-time quality estimation as claimed in claim 9 (see the above rejection of claim 9), wherein: the quality estimator (see fig. 4 unit 454) compares (see paragraphs [0022] and [0077]) the intermediate data (see fig. 4 unit 450 and/or unit 426) with source data (see fig. 4, e.g., “input video data”) corresponding to the intermediate data (see fig. 4 unit 450 and/or unit 426).

As per claim [14], most of the limitations have been noted in the above rejection of claim 11.
In addition, Mcallister discloses the video encoding method with real-time quality estimation as claimed in claim 11 (see the above rejection of claim 11), further comprising: comparing (see paragraphs [0022] and [0077]) the reconstructed data (see fig. 4 unit 450 and/or unit 426) with source data (see fig. 4, e.g., “input video data”) corresponding to the reconstructed data (see fig. 4 unit 450 and/or unit 426) for quality estimation (see fig. 4 unit 454) when the reconstructed data is used as the intermediate data (see fig. 4 unit 450 and/or unit 426).

As per claim [16], most of the limitations have been noted in the above rejection of claim 11. In addition, Mcallister discloses the video encoding method with real-time quality estimation as claimed in claim 11 (see the above rejection of claim 11), further comprising: comparing (see fig. 4 unit 424, paragraphs [0022] and [0077]) the inverse-transformed data (see fig. 4 unit 420) with residual data (see fig. 4 unit 422) corresponding to the inverse-transformed data (see fig. 4 unit 420) when the inverse-transformed data is used as the intermediate data (see fig. 4 unit 420).

As per claim [18], most of the limitations have been noted in the above rejection of claim 11. In addition, Mcallister discloses the video encoding method with real-time quality estimation as claimed in claim 11 (see the above rejection of claim 11), further comprising: comparing (see paragraphs [0022] and [0077]) the inverse-quantized data (see fig. 4 unit 418) with transformed (see fig. 4 unit 410) residual data (see fig. 4 unit 408) corresponding to the inverse-quantized data (see fig. 4 unit 418), when the inverse-quantized data is used as the intermediate data (see fig. 4 unit 418).

As per claim [19], the video encoding method with real-time quality estimation as claimed in claim 22 is analogous to claim 9, which is performed by claim 19.

As per claim [20], the video encoding method with real-time quality estimation as claimed in claim 19 is analogous to claim 10, which is performed by claim 20.

As per claim [21], most of the limitations have been noted in the above rejection of claim 1. In addition, Mcallister discloses the video encoder with real-time quality estimation as claimed in claim 1 (see the above rejection of claim 1), wherein the intermediate data is one of: the inverse-quantized data (see fig. 4 unit 418), the inverse-transformed data (see fig. 4 unit 420), the reconstructed data, and data that has not passed through a final stage of the in-loop filter with at least one stage.

As per claim [22], the video encoding method with real-time quality estimation as claimed in claim 11 is analogous to claim 21, which is performed by claim 22.

Conclusion

5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Carmel et al. (US Pub. No.: 2014/0177734 A1) discloses controlling a video content system. He et al. (US Pub. No.: 2024/0251089 A1) discloses a method and system of video coding with fast low-latency bitstream size control. Orton-Jay et al. (US Pub. No.: 2014/0241420 A1) discloses systems and methods of encoding multiple video streams for adaptive bitrate streaming.

6. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Richard Carter, whose telephone number is (571) 270-1220. The examiner can normally be reached M-F, 8:30 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/R.B.C/
Examiner, Art Unit 2485

/JAYANTI K PATEL/
Supervisory Patent Examiner, Art Unit 2485

October 30, 2025
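
For orientation only, the sketch below mirrors the claim-1 dataflow as the rejection characterizes it: a reconstruction loop that exposes intermediate data (inverse-quantized, inverse-transformed, or reconstructed samples), a quality estimator that compares that data against the source, and a top-level controller that adjusts a compression factor in real time. Every name and the toy arithmetic are invented for illustration; this is not MediaTek's claimed implementation nor the cited Mcallister, Fu, or Hayashi designs.

```python
# Purely illustrative sketch of the claim-1 dataflow discussed in the rejection:
# a reconstruction loop exposing "intermediate data", a quality estimator that
# compares it against the source, and a top controller adjusting a compression
# factor in real time. All names and the toy math are invented placeholders.

from dataclasses import dataclass

@dataclass
class Intermediate:
    inverse_quantized: list
    inverse_transformed: list
    reconstructed: list

def quantize(residual, qp):
    return [round(x / qp) for x in residual]

def reconstruct(residual, qp):
    """Stand-in for the IQ -> IT -> reconstruction -> in-loop filter chain."""
    iq = [x * qp for x in quantize(residual, qp)]  # inverse quantization
    it = list(iq)                                  # inverse transform (identity placeholder)
    recon = list(it)                               # prediction + residual step omitted
    return Intermediate(iq, it, recon)

def estimate_quality(source, intermediate):
    """Quality estimator: crude MSE between source and reconstructed data."""
    diffs = [(s - r) ** 2 for s, r in zip(source, intermediate.reconstructed)]
    return sum(diffs) / len(diffs)

def encode(frames, qp=8, target_mse=4.0):
    """Encoder top controller: tighten QP when estimated distortion is high."""
    for frame in frames:
        inter = reconstruct(frame, qp)
        mse = estimate_quality(frame, inter)
        qp = max(1, qp - 1) if mse > target_mse else qp + 1
        print(f"frame MSE={mse:.2f}, next QP={qp}")

if __name__ == "__main__":
    encode([[10, 22, 35, 47], [12, 25, 33, 44]])
```

Tapping an earlier point in the loop, as claims 9 and 21 recite (data that has not passed through the final in-loop filter stage), would simply mean handing a different field of Intermediate to estimate_quality.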

Prosecution Timeline

Jul 12, 2023: Application Filed
Nov 12, 2024: Non-Final Rejection — §103
Feb 13, 2025: Response Filed
Mar 06, 2025: Final Rejection — §103
Jun 06, 2025: Request for Continued Examination
Jun 12, 2025: Response after Non-Final Action
Jun 18, 2025: Non-Final Rejection — §103
Sep 23, 2025: Response Filed
Oct 29, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591126: APPARATUS AND METHODS FOR REAL-TIME IMAGE GENERATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12578567: APPARATUS AND METHODS FOR REAL-TIME IMAGE GENERATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12568224: EVC DECODING COMPLEXITY METRICS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12563233: CTU-ROW BASED GEOMETRIC TRANSFORM (granted Feb 24, 2026; 2y 5m to grant)
Patent 12563173: HEAD-MOUNTED DEVICE FOR DISPLAYING PROJECTED IMAGES (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64%
With Interview: 85% (+20.9%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 453 resolved cases by this examiner. Grant probability derived from career allow rate.
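
Read against the header, the projection implies one to two more office-action rounds beyond the current fourth (final) round, and the with-interview figure matches the career allow rate plus the interview lift (64% + 20.9 points, roughly 85%). The snippet below spells out that assumed arithmetic; it is an interpretation of the displayed numbers, not the tool's documented model.

```python
# Assumed reading of the projections above; not a documented methodology.
current_round  = 4         # "OA Round: 4 (Final)"
expected_total = (5, 6)    # "Expected OA Rounds: 5-6"
low, high = (n - current_round for n in expected_total)
print(f"Additional OA rounds expected: {low}-{high}")               # 1-2

grant_probability, interview_lift = 0.64, 0.209
print(f"With interview: {grant_probability + interview_lift:.0%}")  # ~85%
```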
