Prosecution Insights
Last updated: April 19, 2026
Application No. 18/086,964

METHOD FOR MANAGING ENCODING OF MULTIMEDIA CONTENT AND APPARATUS FOR IMPLEMENTING THE SAME

Final Rejection §103
Filed: Dec 22, 2022
Examiner: ANYIKIRE, CHIKAODILI E
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Ateme
OA Round: 4 (Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 2m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 75% — above average (779 granted / 1042 resolved; +16.8% vs TC avg)
Interview Lift: +11.5% among resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 51 applications currently pending
Career History: 1093 total applications across all art units

Statute-Specific Performance

§101: 3.7% (-36.3% vs TC avg)
§102: 36.9% (-3.1% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§112: 1.5% (-38.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 1042 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8 and 10-21 are rejected under 35 U.S.C. 103 as being unpatentable over Khsib et al. (US 11,729,387, hereafter Khsib) in view of Tasinga et al. (US 2022/0180178, hereafter Tasinga).

As per claim 1, Khsib discloses a method for managing encoding of multimedia content stored in a file, comprising: determining, using a supervised learning algorithm, a prediction of processing resources required for encoding the multimedia content, based on one or more multimedia content characteristics of the multimedia content and on one or more multimedia content encoding parameters for encoding the multimedia content (column 3, lines 13-39); and determining a processing configuration for encoding the multimedia content based on the prediction of processing resources (column 3, lines 13-16).

However, Khsib does not explicitly teach determining, using a neural network implementing a supervised learning algorithm, and wherein the predicted processing resources comprise one or more resources of one or more of: type of public cloud instance, CPU instances, RAM resources, storage type, public cloud provider, and time of day. In the same field of endeavor, Tasinga teaches determining, using a neural network implementing a supervised learning algorithm (¶ 99), and wherein the predicted processing resources comprise one or more resources of one or more of: type of public cloud instance, CPU instances, RAM resources, storage type, public cloud provider, and time of day (¶¶ 100, 109, and 110). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Khsib in view of Tasinga. The advantage would be optimizing video encoding and multimedia management.
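The claim 1 workflow, as the rejection characterizes it, can be sketched in a few lines of code. This is a minimal illustration only: the function names, feature keys, linear stand-in for the claimed neural network, and the instance table are all hypothetical, not drawn from Khsib, Tasinga, or the application.

```python
# Hypothetical sketch of the claim 1 workflow: predict the processing
# resources needed to encode a file, then pick a processing configuration.

def extract_features(characteristics, encoding_params):
    """Combine content characteristics and encoding parameters into one vector."""
    return [
        characteristics["duration_s"],
        characteristics["height"],              # vertical resolution
        encoding_params["target_bitrate_kbps"],
        encoding_params["num_output_streams"],
    ]

def predict_resources(features, weights, bias):
    """Stand-in for the supervised model: a linear predictor of CPU-hours.
    In the claimed method this role is played by a trained neural network."""
    return bias + sum(w * x for w, x in zip(weights, features))

def choose_configuration(predicted_cpu_hours):
    """Map the resource prediction to a processing configuration
    (here: a hypothetical public-cloud instance type)."""
    if predicted_cpu_hours < 1.0:
        return {"instance": "small", "vcpus": 2, "ram_gb": 4}
    if predicted_cpu_hours < 4.0:
        return {"instance": "medium", "vcpus": 8, "ram_gb": 16}
    return {"instance": "large", "vcpus": 32, "ram_gb": 64}

# Example: a 10-minute 1080p file encoded to 3 output streams.
features = extract_features(
    {"duration_s": 600, "height": 1080},
    {"target_bitrate_kbps": 4500, "num_output_streams": 3},
)
prediction = predict_resources(features, weights=[1e-3, 1e-4, 1e-5, 0.05], bias=0.1)
config = choose_configuration(prediction)
```

The two-step structure (predict resources, then derive a configuration) mirrors the two "determining" limitations of claim 1; the dependent claims add which resources are predicted (cloud instance type, CPU, RAM, storage, provider, time of day).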
As per claim 2, Khsib discloses the method according to claim 1, further comprising: encoding the multimedia content by a video encoder configured with the processing configuration (Figure 1, element 116; column 3, lines 15-19: an encoder 116 to encode a media file (e.g., “input video”) according to the one or more encoding settings and send the encoded media file to a viewer device 122 according to some embodiments).

As per claim 3, Khsib discloses the method according to claim 1, wherein the processing configuration comprises a configuration of a cloud instance, and wherein the encoding the multimedia content is performed by the cloud instance configured with the configuration of the cloud instance (column 10, lines 21-35: A provider network 500 (or, “cloud” provider network) provides users with the ability to utilize one or more of a variety of types of computing-related resources).

As per claim 4, Khsib discloses the method according to claim 1, further comprising a training phase for training a neural network implementing the supervised learning algorithm performed on a plurality of training multimedia content files, the training phase comprising, for a training multimedia content file of the plurality of training multimedia content files: determining, based on the training multimedia content file, a reference prediction of processing resources required for encoding a training multimedia content contained in the training multimedia content file, and performing training of the neural network based on input data comprising one or more multimedia content characteristics of the training multimedia content and on one or more multimedia content encoding parameters for encoding the training multimedia content, and based on the reference prediction of processing resources, to generate a prediction model for predicting a prediction of processing resources required for encoding multimedia content (column 7, line 62 - column 8, line 43).
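The training phase described for claim 4 amounts to supervised regression: for each training file, derive a reference resource figure, then fit a model mapping content features to that reference. A toy sketch, with a one-weight linear fit via gradient descent standing in for the neural network and entirely invented data:

```python
# Illustrative training phase (claim 4): fit features -> reference resources.
# The data values and the linear model are invented for illustration; the
# claimed method uses a neural network and richer features.

training_set = [
    # (feature: duration in seconds, reference: measured CPU-hours to encode)
    (300.0, 0.31),
    (600.0, 0.59),
    (1200.0, 1.22),
]

w, b = 0.0, 0.0
lr = 1e-7                          # small step size; the feature scale is large
for _ in range(20000):
    for x, y in training_set:
        err = (w * x + b) - y      # prediction error on this training file
        w -= lr * err * x          # gradient of squared error w.r.t. w
        b -= lr * err              # gradient of squared error w.r.t. b

predicted = w * 600.0 + b          # model's CPU-hour estimate for a 600 s file
```

Claims 5 and 6 extend this: the reference results come from actually running several encodings per training file with different parameter combinations and recording performance metrics (encode time, CPU and memory usage, output quality).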
As per claim 5, Khsib discloses the method according to claim 4, wherein the training phase further comprises, for the training multimedia content file: performing a plurality of encodings of the training multimedia content file using respective combinations of the one or more multimedia content encoding parameters; and determining, for each of the plurality of encodings, a respective result (column 8, lines 44-67).

As per claim 6, Khsib discloses the method according to claim 5, wherein one or more of the respective results comprise a respective combination of one or more performance metrics (column 8, lines 44-67).

As per claim 7, Khsib discloses the method according to claim 1, wherein the one or more multimedia content characteristics are one or more of: a type of the multimedia content, a duration of the multimedia content, a resolution of the multimedia content, one or more video characteristics of the multimedia content, and one or more audio characteristics of the multimedia content (column 8, lines 44-67).

As per claim 8, Khsib discloses the method according to claim 1, wherein the one or more multimedia content encoding parameters are one or more of: a video compression standard, a number of output streams and their corresponding resolution, bitrate and/or quality setting, pre-processing requirements, an audio compression standard, and a required turnaround time (column 3, lines 40-67).

As per claim 10, Khsib discloses the method according to claim 1, wherein the prediction of processing resources comprises a performance level associated with processing resources, and corresponding to one or more performance metrics, and wherein the processing configuration is determined based on the performance level of the associated processing resources (column 10, lines 21-39).
As per claim 11, Khsib discloses the method according to claim 6, wherein one or more of the one or more performance metrics are one or more of: time to encode, encoding speed versus real time, average CPU usage, peak CPU usage, average memory usage, peak memory usage, amount of storage usage, type of storage usage, visual quality of output stream, bit-rate of output stream (column 8, lines 27-39 and column 14, line 65 - column 15, line 10).

As per claim 12, Khsib discloses the method according to claim 1, further comprising: determining the one or more multimedia content characteristics based on the multimedia content, wherein the one or more multimedia content characteristics are of respective predetermined types of characteristic (column 11, line 65 - column 12, line 3).

As per claim 13, Khsib discloses the method according to claim 1, further comprising: obtaining one or more multimedia content classes, and selecting a multimedia content class among the one or more multimedia content classes based on the one or more multimedia content characteristics, wherein the prediction of processing resources is determined based on the selected multimedia content class (column 3, lines 33-39: Embodiments herein utilize one or more machine learning models 108 to adapt the (video) encoder settings 114 to content, e.g., especially the dynamic channels. For example, live broadcasts of sports channels may contain segments of film trailers or computer-generated videos, and thus selecting a single tuning (e.g., single set of encoder settings) for “sports” may lead to non-optimal quality (e.g., for the film trailers or computer-generated videos) in these embodiments. Khsib’s disclosure shows the different content classes as the types of videos being analyzed, e.g., sports as a class).

Regarding claim 14, arguments analogous to those presented for claim 1 are applicable for claim 14.
Regarding claim 15, arguments analogous to those presented for claim 1 are applicable for claim 15. Regarding claim 16, arguments analogous to those presented for claim 2 are applicable for claim 16. Regarding claim 17, arguments analogous to those presented for claim 3 are applicable for claim 17. Regarding claim 18, arguments analogous to those presented for claim 4 are applicable for claim 18. Regarding claim 19, arguments analogous to those presented for claim 5 are applicable for claim 19. Regarding claim 20, arguments analogous to those presented for claim 6 are applicable for claim 20.

As per claim 21, Khsib teaches the apparatus according to claim 14. However, Khsib does not explicitly teach wherein the prediction of processing resources comprises a performance level associated with processing resources, and corresponding to one or more performance metrics, and wherein the processing configuration is determined based on the performance level of the associated processing resources. In the same field of endeavor, Tasinga teaches wherein the prediction of processing resources comprises a performance level associated with processing resources, and corresponding to one or more performance metrics, and wherein the processing configuration is determined based on the performance level of the associated processing resources (¶ 110). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Khsib in view of Tasinga. The advantage would be optimizing video encoding and multimedia management.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHIKAODILI E ANYIKIRE whose telephone number is (571) 270-1445. The examiner can normally be reached 8 am - 4:30 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj, can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHIKAODILI E ANYIKIRE/
Primary Examiner, Art Unit 2487

Prosecution Timeline

Dec 22, 2022: Application Filed
Oct 04, 2024: Non-Final Rejection — §103
Feb 10, 2025: Response Filed
Apr 10, 2025: Final Rejection — §103
Jul 15, 2025: Request for Continued Examination
Jul 18, 2025: Response after Non-Final Action
Nov 10, 2025: Non-Final Rejection — §103
Feb 10, 2026: Response Filed
Mar 05, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598307: CONSTRAINED OPTIMIZATION TECHNIQUES FOR GENERATING ENCODING LADDERS FOR VIDEO STREAMING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598290: SYSTEMS AND METHODS FOR INTER PREDICTION COMPENSATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597507: SYSTEM AND METHOD FOR COMPRESSING AND/OR RECONSTRUCTING MEDICAL IMAGE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12587676: COMBINED INTRA-PREDICTION MODE FOR BITSTREAM DECODER (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585999: METHOD AND SYSTEM FOR CALIBRATING MACHINE LEARNING MODELS IN FULLY HOMOMORPHIC ENCRYPTION APPLICATIONS (granted Mar 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 75%
With Interview: 86% (+11.5%)
Median Time to Grant: 3y 2m
PTA Risk: High

Based on 1042 resolved cases by this examiner. Grant probability derived from career allow rate.
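The headline figures follow directly from the stated inputs. A quick check of the arithmetic (assuming simple rounding and an additive interview lift, which is a guess at the tool's exact method):

```python
# Reproduce the dashboard's grant-probability figures from its stated inputs.
granted, resolved = 779, 1042
career_allow_rate = granted / resolved               # about 0.748, shown as 75%
interview_lift = 0.115                               # +11.5 percentage points
with_interview = career_allow_rate + interview_lift  # about 0.863, shown as 86%
```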
