Prosecution Insights
Last updated: April 18, 2026
Application No. 18/949,551

MACHINE LEARNING OF ENCODING PARAMETERS FOR A NETWORK USING A VIDEO ENCODER

Final Rejection — §103, §DP
Filed: Nov 15, 2024
Examiner: FEREJA, SAMUEL D
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Nvidia Corporation
OA Round: 2 (Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 75% — above average (458 granted / 614 resolved; +16.6% vs TC avg)
Interview Lift: +11.8% (moderate), based on resolved cases with interview
Typical Timeline: 2y 8m avg prosecution; 66 applications currently pending
Career History: 680 total applications across all art units

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 64.1% (+24.1% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)
Tech Center average estimates shown for comparison • Based on career data from 614 resolved cases

Office Action

Rejections: §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Currently, claims 1-20 are pending in the application. Claims 1, 2, 4-10, 12-14, 16, 17, and 19 are amended.

Response to Arguments / Amendments

Applicant's arguments have been fully considered but are rendered moot in view of the new ground of rejection necessitated by amendments initiated by the applicant.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

US Patent No. 12149708

Claims 1-20 of the instant application are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of US Patent No. 12149708 (Application Number: 17/402953).

Regarding Claim 1: Although the conflicting claims are not identical, they are not patentably distinct from each other because claim 1 of the instant application is generic to all that is recited in claim 1 of US Patent No. 12149708. That is, claim 1 of the instant application is anticipated by claim 1 of US Patent No. 12149708.

Regarding Claims 2-8: Although the conflicting claims are not identical, they are not patentably distinct from each other because all limitations of claims 2-8 of the instant application are recited in claims 2-8 of US Patent No. 12149708.

Regarding Claims 9-15: System claims 9-15 recite use of the corresponding method claimed in claims 1-8, and the rejections thereof are incorporated herein for the same reasons as used above.

Regarding Claims 16-20: Processor claims 16-20 recite use of the corresponding method claimed in claims 1-8, and the rejections thereof are incorporated herein for the same reasons as used above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-7, 9-11, 13-14 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Codenie et al. (US 20210092493, hereinafter Codenie) in view of Peng et al. (US 20200344472, hereinafter Peng) and BAO (US 20240015288, hereinafter BAO).

Regarding Claim 1, Codenie discloses a method comprising: predicting, using a Machine Learning Model (MLM), at least one encoding parameter representing a target bitrate for encoded video data ([0026], broadcasting system uses a feedback control loop for dynamically adjusting parameters of the encoding to place the metric within the range of the target metric values and then updates the parameters of the encoding method; [0045], adjusting target video quality (VQ) and the target cap value and submitting the adjusted values to encoder 303 at step 359 using any suitable feedback control technique such as a proportional-integral-derivative controller, a machine-learning model (e.g., a neural network, a recurrent neural network, a convolutional neural network, a generative adversarial network, a decision tree, and models based on ensemble methods, such as random forests)); transmitting data to cause the encoder to use the at least one predicted encoding parameter to encode video data into the encoded video data using the target bitrate data ([0026], broadcasting system transmits the first broadcast, encoded using the encoding method, to a playback device after encoding the first broadcast, wherein the system computes a metric related to quality and a bitrate of the encoded first broadcast; [0044], FIG. 3, video encoder 303 may, at step 352A, encode video content 301 using target video quality and target cap, resulting in encoded signal 302); and streaming the encoded video data to a remote client ([0026], transmitting or streaming the second broadcast to the playback device; [0050], FIG. 5, information about programs 501 is first stored in database 111 and then transmitted to a program classification system 505).

Codenie does not explicitly disclose the at least one encoding parameter being predicted based at least on reducing a difference between the target bitrate and an actual bitrate of the encoded video data that would be produced if the at least one encoding parameter were to be applied to an encoder.

Peng teaches this limitation ([0080], Table II, the model of this embodiment matches the target bit rates quite closely and the fluctuation in GOP-level bit rate is much less significant than the baseline, especially at low bit rates; FIG. 11A˜FIG. 11D, visualizing the GOP bit rate as a function of GOP index; [0082], to minimize the bit rate error at the sequence level, the baseline adopts a windowing mechanism: when a current GOP has an actual bit rate higher than (respectively, lower than) the target, the following GOPs will compensate for the difference by decreasing (respectively, increasing) their target bit rates).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of reducing a mismatch between the target bitrate and an actual bitrate as taught by Peng ([0080]) into the encoding/decoding system of Codenie in order to provide optimal bit allocation, addressing a classic problem in video encoder control due to the inter-dependencies between video frames (Peng, [0005]), resulting in the predictable result of improving the encoding quality of the video data in a reliable manner.

Codenie & Peng do not explicitly disclose reducing the difference between the target bitrate and the actual bitrate of the encoded video data that would be produced if the at least one encoding parameter were to be applied to the encoder.

BAO teaches reducing the difference between the target bitrate and the actual bitrate of the encoded video data that would be produced if the at least one encoding parameter were to be applied to the encoder ([0032], FIG. 1B, a bit rate difference between the precoding bit rate and the pre-allocated bit rate needs to be analyzed, and the actual coding quantization parameter (QP) of the to-be-coded current macroblock subset in the current video frame in the previous video frame is correspondingly adjusted to acquire a target coding quantization parameter of the current macroblock subset in the current video frame, without calculating the coding cost of each coded block at each recursion depth by analyzing the image complexity).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of reducing the difference between the target bitrate and the actual bitrate of the encoded video data that would be produced if the at least one encoding parameter were to be applied to the encoder as taught by BAO ([0032]) into the encoding/decoding system of Codenie & Peng in order to enable reducing coding overhead and coding complexity in the video coding process, improving the efficiency of video coding and accommodating network bandwidth fluctuation on the basis of improving the stability of video coding quality (BAO, [0026]).

Regarding Claim 2, Codenie in view of Peng & BAO discloses the method of claim 1, and Codenie discloses wherein the prediction is based at least on one or more of: one or more categories of content depicted in the video data; a level of motion depicted in the video data; or one or more network conditions associated with the video data ([0042], a bitrate of video content 301 may exhibit high peaks for challenging content (e.g., a video content that has a lot of motion, rapid scene changes), addressed by employing segments of video content 301, a value of the target VQ, and a value of the cap (also referred to as a bitrate cap)).

Regarding Claim 3, Codenie in view of Peng & BAO discloses the method of claim 1, and Codenie discloses wherein the at least one encoding parameter further represents one or more of: an error correction mode for the encoded video data, a video resolution for the encoded video data, an intra-refresh interval for the encoded video data, or a packet pacing interval for the encoded video data ([0041], FIG. 3, objective measurement which quantifies the distortions due to the encoding by expressing it as a noise such as signal to noise ratio (SNR), spectral distortion (SD), root-mean-squared error (RMSE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and segmental signal-to-noise ratio (SEGSNR)).

Regarding Claim 5, Codenie in view of Peng & BAO discloses the method of claim 1, and Peng discloses wherein the predicting is based at least on a magnitude of the difference ([0080], Table II, the model of this embodiment matches the target bit rates quite closely and the fluctuation in GOP-level bit rate is much less significant than the baseline, especially at low bit rates; FIG. 11A˜FIG. 11D, visualizing the GOP bit rate as a function of GOP index; [0082], to minimize the bit rate error at the sequence level, the baseline adopts a windowing mechanism: when a current GOP has an actual bit rate higher than (respectively, lower than) the target, the following GOPs will compensate for the difference by decreasing (respectively, increasing) their target bit rates). The same rationale of obviousness applies as used above in claim 1.

Regarding Claim 6, Codenie in view of Peng & BAO discloses the method of claim 1, and Peng discloses the predicting is to minimize the difference based at least on a loss function that includes a representation of a difference between a target bitrate variable and an actual encoded bitrate output ([0080], Table II, the model of this embodiment matches the target bit rates quite closely and the fluctuation in GOP-level bit rate is much less significant than the baseline, especially at low bit rates; FIG. 11A˜FIG. 11D, visualizing the GOP bit rate as a function of GOP index; [0082], to minimize the bit rate error at the sequence level, the baseline adopts a windowing mechanism: when a current GOP has an actual bit rate higher than (respectively, lower than) the target, the following GOPs will compensate for the difference by decreasing (respectively, increasing) their target bit rates). The same rationale of obviousness applies as used above in claim 1.

Regarding Claim 7, Codenie in view of Peng & BAO discloses the method of claim 1, and BAO discloses wherein the prediction penalizes the difference ([0032], FIG. 1B, a bit rate difference between the precoding bit rate and the pre-allocated bit rate needs to be analyzed, and the actual coding quantization parameter (QP) of the to-be-coded current macroblock subset in the current video frame in the previous video frame is correspondingly adjusted to acquire a target coding quantization parameter of the current macroblock subset in the current video frame). The same rationale of obviousness applies as used above in claim 1.

Regarding Claims 9-11 and 13-14: System claims 9-11 and 13-14 recite use of the corresponding method claimed in claims 1-3 and 5-7, and the rejections thereof are incorporated herein for the same reasons as used above.

Regarding Claims 16-18: Processor claims 16-18 recite use of the corresponding method claimed in claims 1-3 and 5-7, and the rejections thereof are incorporated herein for the same reasons as used above.

Claims 4, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Codenie et al. (US 20210092493, hereinafter Codenie) in view of Peng et al. (US 20200344472, hereinafter Peng), BAO, and Khsib (US 11729387, hereinafter Khsib).

Regarding Claim 4, Codenie in view of Peng & BAO discloses the method of claim 1, but does not explicitly disclose further comprising: annotating the video data with one or more labels identifying one or more categories of content depicted in the video data; and applying the one or more labels to the MLM to generate output data indicating the target bitrate.

Khsib teaches annotating the video data with one or more labels identifying one or more categories of content depicted in the video data, and applying the one or more labels to the MLM to generate output data indicating the target bitrate (Col. 12, ll. 42-45, a user 530 may provide or otherwise identify a training dataset 526 with labels 528 (e.g., a media (e.g., video) file and its corresponding optimal encoder setting labels) for use in creating a model).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of annotating the video data with one or more labels identifying one or more categories of content depicted in the video data as taught by Khsib (Col. 11, ll. 21-36) into the encoding/decoding system of Codenie & Peng in order to use machine learning to achieve the dual objective of higher bandwidth savings with improved visual quality, and/or to provide fast inference for real-time applications (e.g., in contrast to machine learning models that are too computationally prohibitive for real-time usage) (Khsib, Col. 2, ll. 55-60).

Regarding Claim 12: System claim 12 recites use of the corresponding method claimed in claim 4, and the rejections thereof are incorporated herein for the same reasons of obviousness as used above.
Regarding Claim 19: Processor claim 19 recites use of the corresponding method claimed in claim 4, and the rejections thereof are incorporated herein for the same reasons of obviousness as used above.

Claims 8, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Codenie et al. (US 20210092493, hereinafter Codenie) in view of Peng et al. (US 20200344472, hereinafter Peng), BAO, and Fang et al. (US 20210012227, hereinafter Fang).

Regarding Claim 8, Codenie in view of Peng & BAO discloses the method of claim 1, but does not explicitly disclose wherein the predicting is based at least on a simulation of a network processing a stream associated with the video data.

Fang teaches wherein the predicting is based at least on a simulation of a network processing a stream associated with the video data ([0035], FIGS. 3-5, the agent 206 (code or an application that resides on one or both of the sending computing devices) is trained using one or more of a simulated environment 300, an emulated environment 400, and a real network environment 500; [0036], network simulation tools, such as ns-2 or ns-3 (discrete-event network simulators), are used in the simulated environment 300 for rapid data collection and training).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of simulating the network processing as taught by Fang ([0036]) into the encoding/decoding system of Codenie in order to optimize user-perceived quality in real-time audio and video communications (Fang, [0003]), resulting in the predictable result of improving the quality of the image in a reliable manner.

Regarding Claim 15: System claim 15 recites use of the corresponding method claimed in claim 8, and the rejections thereof are incorporated herein for the same reasons of obviousness as used above.
Regarding Claim 20: Processor claim 20 recites use of the corresponding method claimed in claim 8, and the rejections thereof are incorporated herein for the same reasons of obviousness as used above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samuel D Fereja, whose telephone number is (469) 295-9243. The examiner can normally be reached 8 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, DAVID CZEKAJ, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAMUEL D FEREJA/
Primary Examiner, Art Unit 2487
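For orientation, the limitation at the heart of the §103 dispute (claims 1 and 6) is choosing an encoding parameter so that the encoder's actual output bitrate lands as close as possible to a target bitrate. A minimal sketch of that idea follows; the toy encoder model, function names, and numbers are invented for illustration and come from neither the application nor the cited references, and the grid search merely stands in for whatever model performs the prediction.

```python
def simulated_actual_bitrate(quant_step: float) -> float:
    """Toy stand-in for an encoder: coarser quantization -> lower bitrate.
    The 8000/q relation is purely illustrative, in kbps."""
    return 8000.0 / quant_step

def bitrate_loss(quant_step: float, target_kbps: float) -> float:
    """Claim-6-style loss: penalize the difference between the target
    bitrate and the actual bitrate the parameter would produce."""
    return (simulated_actual_bitrate(quant_step) - target_kbps) ** 2

def predict_quant_step(target_kbps: float, steps: int = 200) -> float:
    """Pick the parameter minimizing the loss over a coarse grid,
    standing in for the machine-learning model's prediction."""
    candidates = [1.0 + 0.5 * i for i in range(steps)]
    return min(candidates, key=lambda q: bitrate_loss(q, target_kbps))

# The predicted parameter drives the actual bitrate toward the target.
q = predict_quant_step(target_kbps=2000.0)
```

In a trained system the loss above would be minimized over model weights during training rather than by a per-request search, but the objective, reducing the target-versus-actual bitrate gap, is the same disputed feature.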

Prosecution Timeline

Nov 15, 2024
Application Filed
Oct 10, 2025
Non-Final Rejection — §103, §DP
Jan 14, 2026
Response Filed
Mar 31, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597264
Method for Calibrating an Assistance System of a Civil Motor Vehicle
2y 5m to grant • Granted Apr 07, 2026
Patent 12598318
METHOD AND SYSTEM-ON-CHIP FOR PERFORMING MEMORY ACCESS CONTROL WITH LIMITED SEARCH RANGE SIZE DURING VIDEO ENCODING
2y 5m to grant • Granted Apr 07, 2026
Patent 12593018
SYSTEM AND METHOD FOR CONTROLLING PERCEPTUAL THREE-DIMENSIONAL ELEMENTS FOR DISPLAY
2y 5m to grant • Granted Mar 31, 2026
Patent 12593036
METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL
2y 5m to grant • Granted Mar 31, 2026
Patent 12591123
METHOD FOR DETERMINING SLOPE OF SLIDE IN SLIDE SCANNING DEVICE, METHOD FOR CONTROLLING SLIDE SCANNING DEVICE AND SLIDE SCANNING DEVICE USING THE SAME
2y 5m to grant • Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 86% (+11.8%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 614 resolved cases by this examiner. Grant probability derived from career allow rate.
