Prosecution Insights
Last updated: April 19, 2026
Application No. 18/852,889

PRE-ANALYSIS FOR VIDEO ENCODING

Final Rejection §103
Filed: Sep 30, 2024
Examiner: RAHAMAN, SHAHAN UR
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: V-NOVA INTERNATIONAL LTD
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 76% (above average; 479 granted / 633 resolved; +17.7% vs TC avg)
Interview Lift: +12.6% (moderate), among resolved cases with interview
Avg Prosecution: 2y 11m (typical timeline); 51 applications currently pending
Total Applications: 684 across all art units (career history)

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)
Comparisons use a Tech Center average estimate • Based on career data from 633 resolved cases

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following prior art references are considered pertinent to applicant's disclosure:

US 20130322524 A1 (hereinafter Jang)
US 20090086816 A1 (hereinafter Leontaris)
F. Maurer, S. Battista, L. Ciccarelli, G. Meardi, and S. Ferrara, "Overview of MPEG-5 Part 2—Low complexity enhancement video coding (LCEVC)," ITU J. ICT Discoveries, vol. 3, no. 1, pp. 1-11, Jun. 2020 (hereinafter Maurer)
US 10257542 B1 (Brailovskiy)
US 20210067785 A1 (para 29, Fig. 1)

Response to Remarks/Arguments

Applicant's arguments with respect to the prior art rejection have been fully considered, but they are not persuasive for the following reasons.

Re: Prior art rejection of independent claims. Applicant argued in substance that the prior art combination does not teach the amended limitations. Examiner respectfully disagrees and submits that Maurer, Fig. 2, teaches these limitations. For instance, Maurer teaches wherein the encoder is a Low Complexity Enhancement Video Coding (LCEVC) encoder [(Introduction, Fig. 2)], and encoding the second video frame comprises: down-sampling the second video frame [(Fig. 2, downscaler)]; encoding the down-sampled second video frame using a base codec to obtain a base encoding layer [(Fig. 2, base encoder)]; decoding the base encoding layer using the base codec to obtain a decoded reference video frame [("reconstruction" and upscaling of the output of "Base Encoder"; also the loop having inverse quantization and inverse transformation in the next layer; see description in section 5.1.2)]; calculating one or more residuals based on a difference between the second frame and the decoded reference video frame [(L-1 residuals or L-2 residuals)]; and encoding the one or more residuals to obtain an enhancement layer, wherein the encoder parameter is a parameter for calculating or encoding one or more of the residuals [(section 5.1.2 in light of Jang's teaching)]. Therefore, applicant's arguments are not persuasive.

Re: Prior art rejection of dependent claims. Applicant has presented no additional arguments other than those already presented with respect to the independent claims. Therefore, the arguments are similarly not persuasive.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103, are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 17, 20-22, 24-27, and 29 are rejected under 35 U.S.C.
103 as being unpatentable over Jang in view of Maurer.

With respect to claim 17, Jang teaches a method for determining an encoder parameter for encoding an input video comprising a sequence of video frames, the input video having a first resolution, the method comprising: obtaining a first and a second video frame of the input video, wherein the second video frame follows the first video frame in the sequence of video frames [(Fig. 17A, para 146)]; down-sampling the first and second frames to a second resolution to obtain a first and a second down-sampled video frame [(Fig. 17A, para 147)]; generating a detail perception metric based on the first and second down-sampled video frames [(statistical information/complexities are determined based on the base layer {para 86}; first layer = down-sampled {Fig. 4, unit 60}; complexity = edge detection {para 221})]; and determining, based on the detail perception metric, an encoder parameter for encoding the second video frame [(based on the statistical information, an encoding parameter is assigned for the enhancement/second layer {para 155, Fig. 4, Fig. 15}; the second layer encodes video frames including the second video frame)], wherein the detail perception metric comprises an edge detection metric based on the second down-sampled frame and a motion metric based on a difference between the first and second down-sampled frames [(complexity = edge detection {para 221} and motion {para 173})].

Jang does not explicitly show wherein the encoder is a Low Complexity Enhancement Video Coding (LCEVC) encoder, and encoding the second video frame comprises: down-sampling the second video frame; encoding the down-sampled second video frame using a base codec to obtain a base encoding layer; decoding the base encoding layer using the base codec to obtain a decoded reference video frame; calculating one or more residuals based on a difference between the second frame and the decoded reference video frame; and encoding the one or more residuals to obtain an enhancement layer, wherein the encoder parameter is a parameter for calculating or encoding one or more of the residuals.

However, in the same/related field of endeavor, Maurer teaches wherein the encoder is a Low Complexity Enhancement Video Coding (LCEVC) encoder [(Introduction, Fig. 2)], and encoding the second video frame comprises: down-sampling the second video frame [(Fig. 2, downscaler)]; encoding the down-sampled second video frame using a base codec to obtain a base encoding layer [(Fig. 2, base encoder)]; decoding the base encoding layer using the base codec to obtain a decoded reference video frame [("reconstruction" and upscaling of the output of "Base Encoder"; also the loop having inverse quantization and inverse transformation in the next layer; see description in section 5.1.2)]; calculating one or more residuals based on a difference between the second frame and the decoded reference video frame [(L-1 residuals or L-2 residuals)]; and encoding the one or more residuals to obtain an enhancement layer, wherein the encoder parameter is a parameter for calculating or encoding one or more of the residuals [(section 5.1.2 in light of Jang's teaching)]. Therefore, in light of the above discussion, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the prior art references to reduce computational complexity [(Maurer, "Introduction")].

With respect to claim 20, Jang teaches the method according to claim 17, wherein the motion metric comprises a sum of absolute differences between the first down-sampled frame and the second down-sampled frame [(temporally smallest SAD indicates motion {para 96, 139})].

With respect to claim 21, Jang teaches the method according to claim 17, comprising generating the detail perception metric and determining the encoder parameter for each of a plurality of local blocks of the second down-sampled video frame.
[(para 282, Fig. 15)]

With respect to claim 22, Jang teaches the method according to claim 17, wherein the encoder parameter comprises a priority level for encoding resources [(priority level of bit {para 281})].

Regarding claim 27: please see the analysis of claim 17, and note that Jang also teaches performing pre-analysis to determine an encoder parameter for encoding the second video frame, and instructing an encoder to encode the second video frame based on the encoder parameter, wherein the pre-analysis comprises [(unit 121D of Fig. 6 performs the analysis (i.e., statistical and ROI information) before the encoding by 122D; i.e., 121D performs pre-analysis)].

With respect to claim 29, Jang teaches a device comprising one or more processors and a non-transitory memory, the memory storing executable instructions which, when executed by the processors, cause the device to perform the following: obtain a first and a second video frame of the input video, wherein the second video frame follows the first video frame in the sequence of video frames; down-sample the first and second frames to a second resolution to obtain a first and a second down-sampled video frame; generate a detail perception metric based on the first and second down-sampled video frames; and determine, based on the detail perception metric, an encoder parameter for encoding the second video frame, wherein the detail perception metric comprises an edge detection metric based on the second down-sampled frame and a motion metric based on a difference between the first and second down-sampled frames [(see analysis of claim 17 and para 263 of Jang)].

With respect to claim 24, Jang in view of Maurer additionally teaches the method according to claim 23, wherein the encoder parameter comprises a residual mode selection for encoding a residual in an LCEVC enhancement layer when encoding the second frame [(Jang para 109, QP selection for residual processing corresponds to residual mode; also see para 110: "Thus, at least one from among quantized residual coefficients of coefficient vectors may be discarded during encoding of the enhancement layer.")].

With respect to claim 25, Jang in view of Maurer additionally teaches the method according to claim 23, wherein the encoder parameter comprises a decision of whether or not to apply temporal prediction to an LCEVC enhancement layer when encoding the second frame [(Jang para 88, inter/intra frame prediction information)].

With respect to claim 26, Jang in view of Maurer additionally teaches the method according to claim 23, wherein the encoder parameter comprises a quantization parameter for an LCEVC enhancement layer when encoding the second frame [(Jang para 187)].

Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Jang in view of Maurer, further in view of Brailovskiy.

Regarding claim 18: Jang in view of Maurer does not explicitly show wherein the edge detection metric comprises a text detection metric. However, in the same/related field of endeavor, Brailovskiy teaches wherein the edge detection metric comprises a text detection metric [(Brailovskiy, column 17, lines 13-16)]. Therefore, in light of the above discussion, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the prior art references because such combination would provide a predictable result with no change to their respective functionalities.

With respect to claim 19, Brailovskiy additionally teaches the method according to claim 17, wherein the edge detection metric is calculated by processing the second down-sampled frame using a directional decomposition to generate a set of directional components [(sampling/down-sampling in horizontal and vertical directions {column 9, lines 1-3} and equation 1)].

Conclusion

THIS ACTION IS MADE FINAL.
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Shahan Rahaman, whose telephone number is (571) 270-1438. The examiner can normally be reached 7am - 3:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nasser Goodarzi, can be reached at (571) 272-4195. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/SHAHAN UR RAHAMAN/Primary Examiner, Art Unit 2426
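The rejection's Maurer mapping recites the LCEVC-style layered structure: down-sample the frame, encode and decode it with a base codec, up-sample the reconstruction to obtain a decoded reference frame, and take residuals against the input to form the enhancement layer. The following is a minimal data-flow sketch of that structure, assuming a 2x scaler and using coarse quantization as a stand-in for the base codec; the function names and the quantization step are illustrative assumptions, not Maurer's or the applicant's actual implementation.

```python
def downscale_2x(frame):
    """2x box-average down-sampling of a grayscale frame (list of rows)."""
    return [
        [(frame[2 * y][2 * x] + frame[2 * y][2 * x + 1]
          + frame[2 * y + 1][2 * x] + frame[2 * y + 1][2 * x + 1]) // 4
         for x in range(len(frame[0]) // 2)]
        for y in range(len(frame) // 2)
    ]

def upscale_2x(frame):
    """Nearest-neighbour 2x up-sampling back to the input resolution."""
    out = []
    for row in frame:
        wide = [p for p in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

def base_encode(frame, step=16):
    """Stand-in 'base codec': coarse quantization only (NOT a real codec)."""
    return [[p // step for p in row] for row in frame]

def base_decode(layer, step=16):
    """Inverse of the stand-in base codec (dequantization)."""
    return [[q * step for q in row] for row in layer]

def lcevc_style_encode(frame):
    low = downscale_2x(frame)
    base_layer = base_encode(low)                      # base encoding layer
    reference = upscale_2x(base_decode(base_layer))    # decoded reference frame
    residuals = [[p - r for p, r in zip(f_row, r_row)]
                 for f_row, r_row in zip(frame, reference)]
    return base_layer, residuals                       # residuals -> enhancement layer

def lcevc_style_decode(base_layer, residuals):
    reference = upscale_2x(base_decode(base_layer))
    return [[r + d for r, d in zip(r_row, d_row)]
            for r_row, d_row in zip(reference, residuals)]
```

Because the residuals capture exactly the difference between the input frame and the up-sampled base reconstruction, adding them back reproduces the input; the encoder parameter at issue in the claims would govern how those residuals are quantized or selected.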
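The claim 17 mapping turns on a pre-analysis step: down-sample two consecutive frames, compute an edge-detection metric on the second and a SAD motion metric between the two, then choose an encoder parameter from those metrics. Here is a minimal sketch of such a metric; the function names, the 2x box down-sampling, the gradient-based edge proxy, and the thresholds in the parameter decision are all assumptions for illustration, not the applicant's or Jang's actual implementation.

```python
def downsample_2x(frame):
    """Down-sample a grayscale frame (list of rows) by 2x box averaging."""
    h, w = len(frame), len(frame[0])
    return [
        [(frame[2 * y][2 * x] + frame[2 * y][2 * x + 1]
          + frame[2 * y + 1][2 * x] + frame[2 * y + 1][2 * x + 1]) / 4.0
         for x in range(w // 2)]
        for y in range(h // 2)
    ]

def edge_metric(frame):
    """Mean absolute horizontal+vertical gradient: a simple edge-energy proxy."""
    h, w = len(frame), len(frame[0])
    total = 0.0
    for y in range(h - 1):
        for x in range(w - 1):
            total += abs(frame[y][x + 1] - frame[y][x])   # horizontal gradient
            total += abs(frame[y + 1][x] - frame[y][x])   # vertical gradient
    return total / ((h - 1) * (w - 1))

def motion_metric(frame_a, frame_b):
    """Sum of absolute differences (SAD) over co-located pixels (cf. claim 20)."""
    return sum(abs(a - b)
               for row_a, row_b in zip(frame_a, frame_b)
               for a, b in zip(row_a, row_b))

def choose_qp_offset(edge, motion, edge_thresh=4.0, sad_thresh=64):
    """Toy parameter decision (assumption): spend more bits on detailed or moving frames."""
    return -2 if edge > edge_thresh or motion > sad_thresh else 0
```

Claim 21's per-block variant would simply run `edge_metric`/`motion_metric` over local tiles of the down-sampled frame instead of the whole frame.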

Prosecution Timeline

Sep 30, 2024
Application Filed
Sep 09, 2025
Non-Final Rejection — §103
Jan 12, 2026
Response Filed
Jan 26, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599294: IMAGE-RECORDING DEVICE FOR IMPROVED LOW LIGHT INTENSITY IMAGING AND ASSOCIATED IMAGE-RECORDING METHOD (2y 5m to grant; granted Apr 14, 2026)
Patent 12602765: DEFECT INSPECTION SYSTEM AND DEFECT INSPECTION METHOD (2y 5m to grant; granted Apr 14, 2026)
Patent 12598328: VIDEO SIGNAL PROCESSING METHOD AND DEVICE (2y 5m to grant; granted Apr 07, 2026)
Patent 12593035: IMAGE ENCODING/DECODING METHOD AND DEVICE (2y 5m to grant; granted Mar 31, 2026)
Patent 12586224: THREE-DIMENSIONAL SCANNING SYSTEM AND METHOD FOR OPERATING SAME (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get these cases past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 88% (+12.6%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 633 resolved cases by this examiner. Grant probability derived from career allow rate.
