Prosecution Insights
Last updated: April 19, 2026
Application No. 18/650,638

QUALITY-BASED PROCESSING OF VIDEO

Non-Final OA: §102, §103, §112
Filed: Apr 30, 2024
Examiner: LOTFI, KYLE M
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: City University Of Hong Kong
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 64% (226 granted of 355 resolved; +5.7% vs TC avg)
Interview Lift: +7.2% for resolved cases with interview (moderate lift)
Typical Timeline: 2y 8m average prosecution
Career History: 377 total applications across all art units; 22 currently pending

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 25.8% (-14.2% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 355 resolved cases.

Office Action

Rejections under §102, §103, and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 11 is objected to because of the following informalities: Line 5 of the claim recites “Q is the quantization step size, α and β are are model parameters”. Appropriate correction is required.

Claim Rejections - 35 U.S.C. § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. This claim recites the phrase “and/or”, which is indefinite because it is not clear whether the limitation that follows is in the alternative (“or”) or is a required (“and”) limitation.

Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 13, 15, and 17-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wen, US 2016/0301931 A1.
Regarding claim 1, Wen discloses: a computer-implemented method for processing a video, comprising: (a) determining a target frame-level quality required for a frame of the video to be encoded (See [0080], disclosing “constant quality (CQ)” control, i.e. setting a target quality.), the determining of the target frame-level quality is based on, at least, a rate-quantization (R-Q) model that relates bit-rate and quantization step size and a quality-quantization model that relates quality measure and the quantization step size (See [0050], equation 5, which shows a rate-quantization relationship for rate control, and equation 8, relating distortion (quality) to rate, which is in turn related to quantization.); and (b) determining one or more coding parameters for encoding the frame based on the determined target frame-level quality (See [0052]).

Regarding claim 2, Wen discloses: the computer-implemented method of claim 1, wherein the R-Q model is defined by R = γ/Q, where R is bit-rate, Q is quantization step size, and γ is model parameter of the R-Q model (See [0050]-[0051], discussing equation 5.).

Regarding claim 13, Wen discloses: the computer-implemented method of claim 1, further comprising: (c) encoding the frame based on the one or more determined coding parameters (See [0044]).

Regarding claim 15, Wen discloses: the computer-implemented method of claim 13, further comprising: (d) determining, based on the encoding of the frame, an output bit-rate and an output quality of the frame (See [0045], “Determining output bitrate”, noting, as in [0057], that the model operates on a per-frame basis.); and (e) updating, based on the determined output bit-rate and output quality, the model parameters of the R-Q model and the quality-quantization model (See [0057] in Wen, disclosing updating the model after compressing each frame.).
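As a numerical aside (an editor's illustration only, not the applicant's or Wen's implementation), the claim-2 model R = γ/Q is a one-parameter hyperbolic fit: γ can be recovered from a single observed (rate, step-size) pair, and the model can then be inverted to pick a step size for a target bit-rate. All values below are hypothetical.

```python
# Illustrative sketch of the claim-2 rate-quantization (R-Q) model R = gamma / Q.
# Numeric values are hypothetical; this is not the applicant's or Wen's code.

def fit_gamma(observed_rate: float, observed_q: float) -> float:
    """Solve R = gamma / Q for gamma from one observed (rate, step-size) pair."""
    return observed_rate * observed_q

def q_for_target_rate(gamma: float, target_rate: float) -> float:
    """Invert the model: Q = gamma / R gives the step size for a target bit-rate."""
    return gamma / target_rate

# Hypothetical observation: 2.0 Mbps at quantization step 16 -> gamma = 32.0.
gamma = fit_gamma(observed_rate=2.0, observed_q=16.0)
# Halving the target bit-rate doubles the required quantization step.
assert q_for_target_rate(gamma, 1.0) == 32.0
```

The inverse relationship is why a single calibration frame suffices to seed this model before per-frame updates refine it.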
Regarding claim 17, Wen discloses: the computer-implemented method of claim 15, further comprising: performing or repeating steps (a) to (e) for multiple frames of the video (See [0045]).

Regarding claim 18, Wen discloses: a system for processing a video, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing or facilitating performing of the computer-implemented method of claim 1 (See [0099], “encoder 200 may be implemented by a processing unit 402 that executes instructions stored in a memory unit”).

Regarding claim 19, Wen discloses: a non-transitory computer readable medium having instructions stored thereon which, when executed by one or more processors, cause the one or more processors to execute the computer-implemented method of claim 1 (The encoder 200 may be implemented by a processing unit 402 that executes instructions stored in a memory unit, which may include random access memory (RAM) 404a as well as non-volatile memory 404b.).

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-4, 10-12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wen, in view of Ding et al., “Image Quality Assessment: Unifying Structure and Texture Similarity”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 5, May 2022.

Regarding claim 3, Wen discloses the limitations of claim 1, upon which claim 3 depends. Wen does not disclose: the computer-implemented method of claim 1, wherein the quality-quantization model comprises a DISTS-quantization (D-Q) model that relates DISTS value and the quantization step size. However, Ding discloses a Deep Image Structure and Texture Similarity (DISTS) index that can be used as a distortion measure in a distortion-quantization model, such as the distortion-quantization model disclosed in Wen Eqs. (5) and (8). See equation 7 in Ding. It would have been obvious to one having ordinary skill in the art before the applicant’s effective filing date to incorporate the DISTS index as a distortion measure into a video encoder such as disclosed in Wen, in order to improve rate control accuracy by more closely mimicking the human visual system’s perception of distortion relative to prior art distortion measures (e.g. mean-squared error (MSE), structural similarity index measure (SSIM)), by having robust tolerance to texture resampling while also having good sensitivity to structural distortions. See Ding, Introduction, 3rd para.; Section 1 (Background), 1st para.

Regarding claim 4, the combination of Wen in view of Ding discloses the limitations of claim 3, upon which claim 4 depends. This combination, specifically Wen, further discloses: the computer-implemented method of claim 3, wherein the D-Q model is defined as D = αQ^β, Q is quantization step size, and α and β are model parameters of the D-Q model (See equations 5 and 8 in Wen, noting that substitution of aQ^-1 + bQ^-2 in equation 8 yields D = C(aQ^-1 + bQ^-2)^-K, which has the same form as D = αQ^β when Q is not too large.). This combination, specifically Ding, further discloses: where D is DISTS value.
It would have been obvious to one having ordinary skill in the art before the applicant’s effective filing date to incorporate the DISTS index as a distortion measure into a video encoder such as disclosed in Wen, by substituting the DISTS value in equation 8 of Wen, in order to improve rate control accuracy by more closely mimicking the human visual system’s perception of distortion relative to prior art distortion measures (e.g. mean-squared error (MSE), structural similarity index measure (SSIM)), by having robust tolerance to texture resampling while also having good sensitivity to structural distortions. See Ding, Introduction, 3rd para.; Section 1 (Background), 1st para.

Regarding claim 10, the combination of Wen in view of Ding discloses the limitations of claim 3, upon which claim 10 depends. This combination, specifically Wen, further discloses: the computer-implemented method of claim 3, wherein the one or more coding parameters comprises a quantization parameter and a Lagrangian multiplier (See [0040]).

Regarding claim 11, the combination of Wen in view of Ding discloses the limitations of claim 10, upon which claim 11 depends. This combination, specifically Wen, further discloses: the computer-implemented method of claim 10, wherein the determining of the quantization parameter in (b) is based on Q = (D/α)^(1/β) and QP = log_X(Q) × A + B, where D is the target frame-level quality represented as a target frame-level DISTS value, Q is the quantization step size, α and β are model parameters of the D-Q model, QP is the quantization parameter, and A, B, and X are constants. See equation 8 in Wen: substituting R = a/Q from equation 5 into equation 8 and solving for Q yields the above equation for Q. Equation 6 has the same form as the QP equation above.

Regarding claim 12, the combination of Wen in view of Ding discloses the limitations of claim 11, upon which claim 12 depends.
This combination, specifically Wen, further discloses: the computer-implemented method of claim 11, wherein the determining of the Lagrangian multiplier in (b) is based on λ = C × D^(QP/E), where λ is the Lagrangian multiplier, QP is the quantization parameter, and C, D, and E are constants. See equation 3 in Wen, noting that equation 3 has the same form, where C = QPFactor/(2^(12/3.0)), D = 2, and E = 3.0.

Regarding claim 16, Wen does not disclose: the computer-implemented method of claim 15, wherein the updating in (e) is performed based on a gradient descent update method. Ding discloses this limitation. See figure 2 and its caption, disclosing recovery of an original image using gradient-descent minimization of an equation relating distortion to pixel values. It would have been obvious to one having ordinary skill in the art before the applicant’s effective filing date to incorporate a gradient descent method for finding the optimal quantization values, as disclosed in Ding, in the context of using a DISTS measure for distortion in Wen, as gradient descent methods were well known in the art for this purpose, as disclosed in Ding, Section 2.1, last paragraph.

Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Wen, in view of Ding, in view of Kim, US 2003/0031128 A1.

Regarding claim 5, the combination of Wen in view of Ding discloses the limitations of claim 3, upon which claim 5 depends. This combination, specifically Wen, further discloses: the computer-implemented method of claim 3, further comprises determining a target GOP-level quality required for a GOP of the video, the GOP comprising a plurality of frames including the frame to be encoded (See [0037]: “At a high level, rate control algorithms consist of two steps. The first step is to allocate a target bitrate budget over a group of pictures (GOPs)…”). This combination does not disclose: wherein the determining of the target frame-level quality is further based on the determined target GOP-level quality. However, determining a target frame-level coding quality based on a GOP-level bit budget or quality target is disclosed in an analogous art by Kim, which discloses GOP-level bitrate control and a further bit-budget allocation at the frame level that is constrained and informed by the GOP level. See Kim, [0135]. It would have been obvious to one having ordinary skill in the art before the Applicant’s effective filing date to perform frame-level bitrate control subject to GOP-level bitrate or quality control, as disclosed in Kim, in order to maintain an even distribution of quality across frames within a GOP, that is, to prevent abrupt changes in quality. See Kim, [0110].

Regarding claim 6, the combination of Wen, in view of Ding, in view of Kim discloses the limitations of claim 5, upon which claim 6 depends. This combination, specifically Kim, further discloses: the computer-implemented method of claim 5, wherein the determining of the target frame-level quality required for the frame of the video comprises distributing or allocating at least part of the target GOP-level quality to the plurality of frames of the GOP (See [0135]-[0140], specifically the “running bits” equation in [0137].).

Regarding claim 7, the combination of Wen, in view of Ding, in view of Kim discloses the limitations of claim 5, upon which claim 7 depends. This combination, specifically Kim, further discloses: the computer-implemented method of claim 5, wherein the determining of the target frame-level quality required for the frame of the video comprises determining the target frame-level quality while optimizing a GOP-level rate-distortion (R-D) cost function (See [0139], “target bit rate” in equation 14, which is a GOP-level target bit rate).
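As an editor's illustration of the “gradient descent update method” at issue in claim 16, a generic per-frame update of D-Q model parameters (α, β in D = αQ^β) can be sketched by minimizing squared prediction error against an observed (Q, D) pair. The learning rate and data below are hypothetical; this is not asserted to be Ding's or the applicant's procedure.

```python
import math

# Generic sketch of a gradient-descent update of D-Q model parameters
# (alpha, beta in D = alpha * Q**beta) from one observed (Q, D) pair per frame,
# minimizing squared prediction error. Learning rate and data are hypothetical.

def gd_update(alpha: float, beta: float, q_obs: float, d_obs: float,
              lr: float = 1e-3):
    """One descent step on e = (alpha * q_obs**beta - d_obs)**2."""
    pred = alpha * q_obs ** beta
    err = pred - d_obs
    grad_alpha = 2.0 * err * q_obs ** beta                      # de/d(alpha)
    grad_beta = 2.0 * err * alpha * q_obs ** beta * math.log(q_obs)  # de/d(beta)
    return alpha - lr * grad_alpha, beta - lr * grad_beta

alpha, beta = 0.02, 1.0
for _ in range(200):                  # e.g. one update per encoded frame
    alpha, beta = gd_update(alpha, beta, q_obs=8.0, d_obs=0.08)
# The fitted model now reproduces the hypothetical observation.
assert abs(alpha * 8.0 ** beta - 0.08) < 1e-3
```

With a small learning rate the update contracts the prediction error each step, which is why repeated per-frame updates of this kind converge on stationary content.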
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Wen, in view of Ding, in view of Kim, in further view of Vetro, US 2005/0175109 A1.

Regarding claim 8, the combination of Wen, in view of Ding, in view of Kim discloses the limitations of claim 7, upon which claim 8 depends. This combination does not disclose: the computer-implemented method of claim 7, wherein the GOP-level rate-distortion cost function is defined based on, at least, a GOP-level Lagrangian multiplier for the GOP. However, Vetro discloses in an analogous art an optimal bit allocation scheme which uses a GOP-level rate-distortion optimization calculation, as disclosed in [0045]-[0046]. It would have been obvious to one having ordinary skill in the art before the Applicant’s effective filing date to incorporate the GOP-level R-D optimization disclosed in Vetro into the encoder of Wen in view of Ding, in order to account for inter-frame dependencies. See Vetro, [0012], [0015]-[0016].

Regarding claim 9, the combination of Wen, in view of Ding, in view of Kim, in view of Vetro discloses the limitations of claim 8, upon which claim 9 depends. This combination, specifically Wen, further discloses: the computer-implemented method of claim 8, wherein the GOP-level Lagrangian multiplier is related to the target GOP-level quality through the R-Q model and the D-Q model (See [0040].). This combination, specifically Kim, further discloses: and/or wherein the determining of the target frame-level quality required for the frame of the video comprises determining the target frame-level quality required for the frame of the video based on the GOP-level Lagrangian multiplier (See Kim, [0137], “remaining bits”).
It would have been obvious to one having ordinary skill in the art before the Applicant’s effective filing date to incorporate the GOP-level Lagrange multiplier, as disclosed in Kim, into the encoder disclosed in Wen, in order to relate frame-level quality to GOP-level quality and thereby maintain an even distribution of quality across frames within a GOP, that is, to prevent abrupt changes in quality from one frame to another within a GOP. See Kim, [0110].

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Wen, in view of Ding, in view of Rapaka, US 2021/0400273 A1.

Regarding claim 14, Wen discloses the limitations of claim 13, upon which claim 14 depends. Wen does not disclose: the computer-implemented method of claim 13, wherein the encoding in (c) is performed based on versatile video coding (VVC) based technique. See Rapaka, [0157], which discloses rate-distortion optimization encoding in the context of VVC. It would have been obvious to one having ordinary skill in the art before the applicant’s effective filing date to perform the rate-quantization target-based encoding based on a VVC codec, as suggested by Rapaka. Doing so would have merely entailed combining the known prior art elements respectively disclosed in Wen and in Rapaka, and would have had predictable results for one of ordinary skill in the art.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE M LOTFI, whose telephone number is (571) 272-8762. The examiner can normally be reached 9:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE M LOTFI/
Examiner, Art Unit 2425
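For orientation on the claim-11 math discussed in the rejection, the two relations Q = (D/α)^(1/β) and QP = log_X(Q) × A + B can be sketched as below. All constants are editor-chosen placeholders: A = 6, B = 4, X = 2 echo the familiar HEVC-style Qstep/QP relation, and α, β are hypothetical fitted values; none are taken from the application or the cited art.

```python
import math

# Sketch of the claim-11 derivation: invert the D-Q model D = alpha * Q**beta
# to get Q for a target distortion, then map Q to a quantization parameter via
# QP = A * log_X(Q) + B. Constants are illustrative placeholders
# (A=6, B=4, X=2 echo the well-known HEVC Qstep ~ 2**((QP-4)/6) relation).

def q_from_target_distortion(d_target: float, alpha: float, beta: float) -> float:
    """Q = (D / alpha) ** (1 / beta)."""
    return (d_target / alpha) ** (1.0 / beta)

def qp_from_q(q: float, a: float = 6.0, b: float = 4.0, x: float = 2.0) -> float:
    """QP = A * log_X(Q) + B."""
    return a * math.log(q, x) + b

alpha, beta = 0.01, 2.0                            # hypothetical fitted D-Q parameters
q = q_from_target_distortion(0.64, alpha, beta)    # (0.64/0.01)**0.5 = 8.0
qp = qp_from_q(q)                                  # 6*log2(8) + 4 = 22.0
assert abs(q - 8.0) < 1e-6 and abs(qp - 22.0) < 1e-6
```

The point of the two-step mapping is that a perceptual quality target (here a distortion value) resolves deterministically to an encoder QP once the model parameters are fitted.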

Prosecution Timeline

Apr 30, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598317: HYBRID SPATIO-TEMPORAL NEURAL MODELS FOR VIDEO COMPRESSION
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12593070: SYSTEMS AND METHODS FOR SIGNALING SOURCE PICTURE TIMING INFORMATION FOR TEMPORAL SUBLAYERS IN VIDEO CODING
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12587646: NETWORK BASED IMAGE FILTERING FOR VIDEO CODING
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12581061: MATRIX BASED INTRA PREDICTION WITH MODE-GLOBAL SETTINGS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12574527: METHODS FOR ENCODING AND DECODING FEATURE DATA, AND DECODER
Granted Mar 10, 2026 (2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 71% (+7.2%)
Median Time to Grant: 2y 8m
PTA Risk: Low

Based on 355 resolved cases by this examiner. Grant probability derived from career allow rate.
