Prosecution Insights
Last updated: April 19, 2026
Application No. 18/439,204

VIDEO ENCODING OPTIMIZATION FOR MACHINE LEARNING CONTENT CATEGORIZATION

Status: Non-Final OA (§103), Round 3
Filed: Feb 12, 2024
Examiner: Torrente, Richard T.
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: ATI Technologies ULC
Grant Probability: 69% (Favorable); 83% with interview
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 69% (717 granted / 1,039 resolved), +11.0% vs Tech Center average — above average
Interview Lift: +14.0% (moderate), measured across resolved cases with an interview
Average Prosecution: 3y 3m; 40 applications currently pending
Career History: 1,079 total applications across all art units

Statute-Specific Performance

Allow rate by statute (vs Tech Center average estimate):

§101: 6.5% (-33.5% vs TC avg)
§103: 51.9% (+11.9% vs TC avg)
§102: 25.9% (-14.1% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Based on career data from 1,039 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/12/25 has been entered.

Claim Rejections — 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chandran et al. (US 2020/0380261) in view of Zhang et al. (US 2021/0067785).

Regarding claim 1, Chandran discloses an apparatus (see 100 in fig. 1A) comprising: machine learning (ML) engine circuitry (see 116 in fig. 1A); and a motion unit (see 108 in fig. 1A) comprising circuitry configured to generate an indication of whether the ML engine circuitry should process an input video frame (see 120 and 122 in fig. 1A), based on an analysis of motion characteristics of the input video frame (see 108 in fig. 1A); wherein the ML engine circuitry is selectively activated to process the input video frame further, based on the indication (see 120 and 116 in fig. 1A). Although Chandran discloses that, when activated, the ML engine circuitry is configured to generate encoding information for a video encoder based on the input video frame (see 126a in fig. 1A), Chandran does not disclose that the encoding information is encoding-control information. However, Zhang discloses an encoding system in which machine learning generates encoding information, wherein the encoding information is encoding-control information (e.g., see 116 in fig. 1). Given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Zhang's teachings of quantization parameter estimation into Chandran's quantization estimation, for the benefit of improving compression efficiency and/or video quality.

Regarding claims 2, 9, and 16, Chandran further discloses wherein the motion unit is further configured to identify one or more objects in the input frame based on the motion characteristics (see 108 in fig. 1A; see 213 in fig. 2B), and when activated the ML engine circuitry is configured to execute an ML model on only the one or more objects identified by the motion unit (see 116 in fig. 1A).

Regarding claims 3, 10, and 17, Chandran further discloses wherein, responsive to receiving an indication that a new object has not been detected in the input video frame and a change of scene has not been detected for the input video frame (see fig. 2E), the input video frame is encoded by the video encoder without use of an ML model (see 118 in fig. 1A).

Regarding claims 4, 11, and 18, the references further disclose wherein the encoding-control information generated by the ML engine circuitry comprises a quantization parameter (QP) map (see Zhang 116 in fig. 1).

Regarding claims 5, 12, and 19, the references further disclose a downscaling unit configured to downscale the input video frame to generate a downscaled version (e.g., see Chandran ¶ [0012]), the downscaled version being supplied to the ML engine circuitry for generating the encoding-control information when the ML engine circuitry is activated (e.g., see Chandran ¶ [0012] with 116 in fig. 1A; see Zhang 116 in fig. 1A).

Regarding claims 6, 13, and 20, Chandran further discloses wherein, when activated, the ML engine circuitry is configured to execute an ML model on only the downscaled version of the input video frame (see 116 in fig. 1A; e.g., see ¶ [0012]).

Regarding claims 7 and 14, the references further disclose wherein the indication comprises at least that a new object has been detected in the input video frame compared to a reference frame, based on motion characteristics derived from motion vectors (see Chandran "New object" in fig. 2D), the indication being used to selectively activate the ML engine circuitry to generate the encoding-control information (see Zhang 116 in fig. 1).

Regarding claims 8 and 15, these claims recite limitations analogous to claim 1 and are therefore rejected on the same premise.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new grounds of rejection.

Citation of Pertinent Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Piacentino et al. (US 2021/0160422) discloses object and scene change detection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD T TORRENTE, whose telephone number is (571) 270-3702. The examiner can normally be reached M-F, 6:45 am-3:15 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at (571) 272-2988. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/RICHARD T TORRENTE/
Primary Examiner, Art Unit 2485
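For readers less familiar with the claimed arrangement at issue, the independent claim describes a gating pipeline: a motion unit inspects motion characteristics of each frame and activates ML engine circuitry only when warranted (e.g., new object or scene change); when active, the ML engine produces encoding-control information such as a QP map, operating on a downscaled frame. The minimal Python sketch below is purely illustrative: every name, threshold, and heuristic in it is invented for illustration and nothing is taken from the cited references or the claims' actual implementation.

```python
def downscale(frame, factor=2):
    """Toy downscaler: keep every `factor`-th pixel in each dimension."""
    return [row[::factor] for row in frame[::factor]]

class MotionUnit:
    """Decides whether the ML engine should run, based on motion characteristics."""
    def __init__(self, threshold=10.0):
        self.threshold = threshold  # illustrative activation threshold

    def should_activate(self, motion_vectors):
        # Activate when aggregate motion suggests a new object or scene change.
        magnitude = sum(abs(dx) + abs(dy) for dx, dy in motion_vectors)
        return magnitude > self.threshold

class MLEngine:
    """When activated, emits encoding-control information (here, a per-block QP map)."""
    def qp_map(self, frame):
        # Toy heuristic standing in for a learned model:
        # brighter samples get a lower QP (i.e., more bits).
        return [[22 if px > 128 else 34 for px in row] for row in frame]

def encode_frame(frame, motion_vectors, motion_unit, ml_engine):
    """Selective activation: the ML engine only runs when the motion unit says so."""
    if motion_unit.should_activate(motion_vectors):
        # ML runs only on the downscaled version of the frame.
        qp = ml_engine.qp_map(downscale(frame))
        return {"ml_used": True, "qp_map": qp}
    # Otherwise the frame is handed to the encoder without any ML model.
    return {"ml_used": False, "qp_map": None}
```

The point of the sketch is the control flow, not the heuristics: the decision to spend ML compute is made per frame by cheap motion analysis, and the ML output feeds the encoder as control information rather than pixel data.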

Prosecution Timeline

Feb 12, 2024 • Application Filed
Jan 24, 2025 • Examiner Interview (Telephonic)
Mar 19, 2025 • Non-Final Rejection (§103)
May 20, 2025 • Examiner Interview Summary
May 20, 2025 • Applicant Interview (Telephonic)
Jun 18, 2025 • Response Filed
Aug 21, 2025 • Final Rejection (§103)
Nov 24, 2025 • Examiner Interview Summary
Nov 24, 2025 • Applicant Interview (Telephonic)
Dec 12, 2025 • Request for Continued Examination
Dec 21, 2025 • Response after Non-Final Action
Jan 07, 2026 • Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604032 • SYSTEMS AND METHODS FOR PERFORMING PADDING IN CODING OF A MULTI-DIMENSIONAL DATA SET • Granted Apr 14, 2026 (2y 5m to grant)
Patent 12604041 • METHODS AND DEVICES FOR GEOMETRIC PARTITIONING MODE SPLIT MODES REORDERING WITH PRE-DEFINED MODES ORDER • Granted Apr 14, 2026 (2y 5m to grant)
Patent 12604014 • METHOD AND SYSTEM OF VIDEO PROCESSING WITH LOW LATENCY BITSTREAM DISTRIBUTION • Granted Apr 14, 2026 (2y 5m to grant)
Patent 12593062 • IMAGE ENCODING AND DECODING METHOD WITH MERGE FLAG AND MOTION VECTORS • Granted Mar 31, 2026 (2y 5m to grant)
Patent 12581067 • INTRA PREDICTION METHOD AND DEVICE USING MPM LIST • Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69% (83% with interview, a +14.0% lift)
Median Time to Grant: 3y 3m
PTA Risk: High

Based on 1,039 resolved cases by this examiner; grant probability derived from career allow rate.
