Prosecution Insights
Last updated: April 19, 2026
Application No. 18/617,162

MACHINE LEARNING TECHNIQUES FOR VIDEO DOWNSAMPLING

Non-Final Office Action: §102 and Nonstatutory Double Patenting

Filed: Mar 26, 2024
Examiner: LEMIEUX, IAN L
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Netflix Inc.
OA Round: 1 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 87%, above average (496 granted / 569 resolved; +25.2% vs TC avg)
Interview Lift: +9.6% (moderate; measured on resolved cases with an interview)
Typical Timeline: 2y 4m average prosecution; 34 applications currently pending
Career History: 603 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 39.6% (-0.4% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 19.4% (-20.6% vs TC avg)

Tech Center averages are estimates; based on the examiner's career data from 569 resolved cases.

Office Action

Grounds: §102, Nonstatutory Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are currently pending in U.S. Patent Application No. 18/617,162 and an Office action on the merits follows.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.
A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the grounds of nonstatutory double patenting as being unpatentable and/or obvious over one or more claims of U.S. Patent No. 11,948,271 B2 (issued from parent application 17/133,206), in further view of Sun et al., "Learned Image Downscaling for Upscaling using Content Adaptive Resampler" (attached PTO-892, NPL Citation No. V).
Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of reference at least render obvious the independent claim(s) of the instant application, in further view of the following reasons/considerations:

• Instant claims and claims of reference recite common subject matter, and recite the open-ended transitional phrase "comprising", which does not preclude any additional elements recited by the claims of reference; see the limitation mappings/table presented below.

• Language/terminology of the instant claim(s) constituting minor/slight variations from the claims of reference, if/where present, requires interpretation under the Broadest Reasonable Interpretation and/or plain-meaning definitions (MPEP 2173 and 2111) equivalent to/met by language of the reference claims in view of the corresponding/shared Specification. While the disclosure of reference may not be used as prior art (Double Patenting concerns the claims of reference), portions of the specification which provide support for the reference claims may also be examined and considered when addressing the scope of the claim(s) of reference and the issue of whether an instant claim defines an obvious variation or falls within the scope of an invention claimed in the claim(s) of reference. See MPEP 804 with reference to In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970).

• Element(s) of the instant claim(s) otherwise not present explicitly in the corresponding reference claim(s) (see * in the table below, i.e., limitations describing the residual blocks only in a manner wherein, e.g., a skip connection/alternate function path is added/summed with the output of a convolutional layer stack; see Fig. 4 mappings to residuals 448 and 378 and the summation operations that follow) correspond to known residual block characteristics and/or disclosure as identified in the prior art of record, including, e.g., Sun et al.,
"Learned Image Downscaling for Upscaling using Content Adaptive Resampler". See also the well-known literature He et al., "Deep Residual Learning for Image Recognition" (2016).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the claims of reference so as to further describe one or more residual blocks as recited and evidenced by Sun, the motivation being, as similarly taught/suggested therein and/or readily recognized by a POSITA, that the characteristics of a residual block allow for faster and more stable convergence during training, since the network is forced to learn only the residual and vanishing gradients in back-propagation are avoided (particularly for deeper networks), and, similar to the use in the ResamplerNet of Sun, may allow for modeling the context of the input image.

Claim mapping: Instant Claims vs. Claims of Reference (US 11,948,271 B2)

Instant Claim 1: A computer-implemented method for training a neural network to downsample images in a video encoding pipeline, the method comprising:
Reference Claim 1: A computer-implemented method for training a neural network to downsample images in a video encoding pipeline, the method comprising:

Instant: executing a first convolutional neural network on a first source image having a first resolution to generate a first downsampled image, wherein the first convolutional neural network includes at least two residual blocks and is associated with a first downsampling factor;
Reference: executing a first convolutional neural network on a first source image having a first resolution to generate a first downsampled image, wherein the first convolutional neural network includes at least two residual blocks;

Instant: executing an upsampling algorithm on the first downsampled image to generate a first reconstructed image having the first resolution;
Reference: executing an upsampling algorithm on the first downsampled image to generate a first reconstructed image having the first resolution;

Instant: computing a first reconstruction error based on the first reconstructed image and the first source image; and
Reference: computing a first reconstruction error based on the first reconstructed image and the first source image; and

Instant: updating at least one parameter of the first convolutional neural network based on the first reconstruction error to generate a trained convolutional neural network; wherein
Reference: updating at least one parameter of the first convolutional neural network based on the first reconstruction error to generate a trained convolutional neural network.

Instant: a residual block comprises a portion of the first convolutional neural network that maps the input of the residual block to a residual and then adds the residual to a function of the input of the residual block to generate the output of the residual block; and wherein
Reference: *see bullet 3 above re. obviousness-type DP in view of characteristics common to residual blocks and as further evidenced in, e.g., Sun et al., "Learned Image Downscaling for Upscaling using Content Adaptive Resampler", Fig. 1, light blue ResBlock illustrating the summation operation following the output from the conv/LeakyReLU/scale stack.

Instant: each downsampled image has a resolution that is lower than a resolution of a corresponding source image by the first downsampling factor.
Reference: *see bullet 2 above; a downsampling factor is arguably inherent, if not at least implicit, in view of the recited "to generate a downsampled image" (it is necessarily downsampled by some measure/factor), and the limitation in question serves only to establish a name for said factor.

Instant Claim 9: A computer-implemented method for downsampling images, the method comprising:
Reference Claims 21-22: A computer-implemented method for downsampling images, the method comprising:

Instant: executing a first trained convolutional neural network on a first source image having a first resolution to generate a first downsampled image having a second resolution that is lower than the first resolution;
Reference: executing a first trained convolutional neural network on a first source image having a first resolution to generate a first downsampled image having a second resolution that is lower than the first resolution, wherein the first trained convolutional neural network includes at least two residual blocks and is associated with a first downsampling factor;

Instant: wherein the first trained convolutional neural network includes at least two residual blocks and is associated with a first downsampling factor.
Reference: —

Instant: wherein a residual block comprises a portion of the first trained convolutional neural network that maps the input of the residual block to a residual and then adds the residual to a function of the input of the residual block to generate the output of the residual block, and
Reference: See * above re. the corresponding limitation in Claim 1.

Instant: wherein the at least two residual blocks include an upsampling residual block that is associated with a numerator of a resampling fraction and a downsampling residual block that is associated with a denominator of the resampling fraction.
Reference: Claim 22 (further limiting 21): wherein the at least two residual blocks include an upsampling residual block that is associated with a numerator of a resampling fraction and a downsampling residual block that is associated with a denominator of the resampling fraction.

Dependent claims correspond as identified below in a more concise mapping. For the case of CRM claim(s) 15-20, these claims are unpatentable over the claims of reference in view of obviousness-type Double Patenting procedures as they relate to system/CRM and method claims of congruent scope. It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the congruent method claim(s) (22) of reference so as to be implemented by one or more generic non-transitory computer readable media, the motivation being, as readily recognized by a POSITA, that such medium embodiments may serve to facilitate product/software distribution in a manner characterized by a reasonable expectation of success.

Instant application → Claims of Reference:
Claim(s) 2-8 → Claim(s) 2-8, respectively
Claim(s) 10/16 → Claim 23
Claim(s) 11/17 → Claim 24
Claim(s) 12/18 → Claim 25
Claim(s) 13/19 → Claim 26
Claim(s) 14/20 → Claim 27
Claim 15 (CRM congruent to instant claim 9) → Claim 22 (see the corresponding method mapping above for instant claim 9)

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

1. Claim 1 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sun et al., "Learned Image Downscaling for Upscaling using Content Adaptive Resampler".

As to claim 1, Sun discloses a computer-implemented method for training a neural network to downsample images in a video encoding pipeline, the method comprising:

executing a first convolutional neural network (ResamplerNet, Fig. 1) on a first source image having a first resolution (Fig. 1 input HR image) to generate a first downsampled image (Fig. 1 intermediate LR output between ResamplerNet and SRNet), wherein the first convolutional neural network includes at least two residual blocks (Fig. 1 light blue ResBlocks of ResamplerNet, 5 in total; page 6, section 4.1.2 Implementation details: "5 residual blocks with each having features of 128 channels are used to model the context") and is associated with a first downsampling factor (ratio between LR and HR; page 3, Section 3.1: "s is the downscaling factor");

executing an upsampling algorithm on the first downsampled image to generate a first reconstructed image having the first resolution (SRNet to produce an SR image that is understood to have the same scale/size/dimension/resolution as that of the HR image input, in further view of Applicant's disclosure wherein resolution and scale appear synonymous, e.g., [0119] "having a resolution of 1280x720");

computing a first reconstruction error based on the first reconstructed image and the first source image (page 5, section 3.4: "One of the main contributions of our work is that we propose a model to learn image downscaling without any supervision signifying that no constraint is applied to the downscaled image. The only objective guiding the generation of the downscaled image is the SR restoration error"; "To do fair comparisons with the EDSR, we only adopt the L1 norm loss as the restoration metric as suggested by [33]"; Equation 8; etc. While not required as a basis for rejection in view of Sun as applied, see also Li et al., "Learning a Convolutional Neural Network for Image Compact-Resolution" (cited by Applicant), page 1094, 1) Reconstruction Loss); and

updating at least one parameter of the first convolutional neural network based on the first reconstruction error to generate a trained convolutional neural network (page 4, section 3.2 Backward pass: "The ResamplerNet is trained using the gradient descent technique and we need to back-propagate gradients from the SRNet through the resampling operation");

wherein a residual block comprises a portion of the first convolutional neural network that maps the input of the residual block to a residual and then adds the residual to a function of the input of the residual block to generate the output of the residual block (Fig. 1, lower portion, illustrating the architecture of each light blue ResBlock, wherein the mapped residual is the output of the scale layer prior to the summation with the residual block input following the top skip connection/path); and

wherein each downsampled image has a resolution that is lower than a resolution of a corresponding source image by the first downsampling factor (LR is HR downscaled by factor s; page 3, Section 3.1: "s is the downscaling factor"; page 4, Fig. 2; scale of equation 1; etc.).
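The claim 1 method that the rejection maps onto Sun (downsample with a CNN containing residual blocks, upsample back to the first resolution, compute an L1 reconstruction error, update a parameter by gradient descent) can be sketched end to end. This is a toy illustration under loud assumptions, not Sun's ResamplerNet or the applicant's implementation: the "network" is a single fixed residual block over an average-pool with one learnable gain `a`, and the upsampling algorithm is nearest-neighbour.

```python
import numpy as np

def res_block(x, w=0.5):
    # A residual block as recited: map the input to a residual F(x), then
    # add the residual back to the input via the skip path: y = x + F(x).
    # A scalar gain over a LeakyReLU stands in for the conv/LeakyReLU stack.
    return x + w * np.where(x > 0, x, 0.2 * x)

def downsample(x, a, s=2):
    # Stand-in "first convolutional neural network" with downsampling factor s:
    # average-pool by s, pass through one residual block, scale by parameter a.
    h, w = x.shape
    pooled = x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
    return a * res_block(pooled)

def upsample(d, s=2):
    # Fixed upsampling algorithm (nearest neighbour) back to the first resolution.
    return np.kron(d, np.ones((s, s)))

rng = np.random.default_rng(0)
src = rng.random((8, 8))          # "first source image" at the first resolution
a, losses = 0.0, []               # the single trainable parameter
for _ in range(200):
    recon = upsample(downsample(src, a))   # reconstructed image, same 8x8 resolution
    err = recon - src
    losses.append(np.abs(err).mean())      # L1 reconstruction error (Sun's restoration metric)
    # recon is linear in a here, so the L1 gradient has this closed form:
    grad = (np.sign(err) * upsample(downsample(src, 1.0))).mean()
    a -= 0.1 * grad                        # gradient-descent update of the parameter
```

Because the toy reconstruction is linear in `a`, the gradient is written in closed form; a real implementation would instead back-propagate through all convolutional parameters, as Sun's section 3.2 describes.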
Allowable Subject Matter

Claim(s) 9-20 would be allowable if rewritten or amended (see above regarding Terminal Disclaimer) to overcome the Double Patenting rejections set forth in this Office action. Dependent claim(s) 2-8 would similarly be allowable if rewritten to overcome (or otherwise overcome; see above re. Terminal Disclaimer) the Double Patenting rejections set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. The references of record fail to serve in any obvious combination teaching/suggesting each and every limitation as required by the instant claims, and reasons for allowance are additionally apparent from the record(s) associated with parent application 17/133,206. See MPEP § 1302.14.

Additional References

Prior art made of record and not relied upon that is considered pertinent to applicant's disclosure: additionally cited references (see attached PTO-892) otherwise not relied upon above have been made of record in view of the manner in which they evidence the general state of the art.

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IAN L LEMIEUX, whose telephone number is (571) 270-5796. The examiner can normally be reached Mon - Fri, 9:00 - 6:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached on 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IAN L LEMIEUX/
Primary Examiner, Art Unit 2669

Prosecution Timeline

Mar 26, 2024
Application Filed
Feb 10, 2026
Non-Final Rejection — §102, Double Patenting (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602825
Human body positioning method based on multi-perspectives and lighting system
2y 5m to grant Granted Apr 14, 2026
Patent 12592086
POSE DETERMINING METHOD AND RELATED DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12586397
METHOD AND APPARATUS EMPLOYING FONT SIZE DETERMINATION FOR RESOLUTION-INDEPENDENT RENDERED TEXT FOR ELECTRONIC DOCUMENTS
2y 5m to grant Granted Mar 24, 2026
Patent 12579840
BEHAVIOR ESTIMATION DEVICE, BEHAVIOR ESTIMATION METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12573086
CONTROL METHOD, RECORDING MEDIUM, METHOD FOR MANUFACTURING PRODUCT, AND SYSTEM
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 97% (+9.6%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
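The note above says the grant probability is derived from the career allow rate; a minimal arithmetic sketch of how the displayed figures round out (my reconstruction, not the tool's documented formula):

```python
granted, resolved = 496, 569                   # examiner career record shown above
allow_rate = granted / resolved                # 0.8717... -> displayed as 87%
interview_lift = 0.096                         # +9.6% lift on cases with an interview
with_interview = allow_rate + interview_lift   # 0.9677... -> displayed as 97%
print(round(allow_rate * 100), round(with_interview * 100))  # 87 97
```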
