Prosecution Insights
Last updated: April 19, 2026
Application No. 17/514,448

IMAGE UPSAMPLING USING ONE OR MORE NEURAL NETWORKS

Current Office Action: Non-Final (§103), Round 5
Filed: Oct 29, 2021
Examiner: SHEDRICK, CHARLES TERRELL
Art Unit: 2646
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation

Grant Probability: 77% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 8m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 77%, above average (768 granted / 993 resolved; +15.3% vs TC avg)
Interview Lift: +9.5% (moderate, roughly +10%; measured on resolved cases with interview)
Typical Timeline: 2y 8m avg prosecution; 40 currently pending
Career History: 1,033 total applications across all art units

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 30.6% (-9.4% vs TC avg)
§112: 2.3% (-37.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 993 resolved cases.
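As a sanity check, the headline probabilities on this page follow directly from the career counts shown above. A quick sketch (the additive treatment of the interview lift is an assumption about how the dashboard combines the two figures):

```python
granted, resolved = 768, 993          # career counts reported above
allow_rate = granted / resolved       # career allow rate
interview_lift = 0.095                # +9.5% interview lift

print(f"Career allow rate: {allow_rate:.0%}")                    # 77%
print(f"With interview:    {allow_rate + interview_lift:.0%}")   # 87%
```

Both rounded values match the dashboard's 77% and 87% figures.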

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/29/26 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-30 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4, 7-10, 13-16, 19-22, and 25-28 are rejected under 35 U.S.C. 103 as being unpatentable over Hsiao, P.-H., Chang, P.-L. (2017), "Video Enhancement via Super-Resolution Using Deep Quality Transfer Network," in Lai, S.-H., Lepetit, V., Nishino, K., Sato, Y. (eds.), Computer Vision – ACCV 2016 (hereinafter "Hsiao"), in view of Kwon et al., US Patent Pub. No. 2010/0150473 A1 (hereinafter "Kwon"), and further in view of Ho et al., US Patent Pub. No. 2023/0111546 A1 (hereinafter "Ho").
Consider Claims 1, 7, 13, 19, and 25: Hsiao teaches one or more processors, comprising: circuitry (e.g., see the computing device noted on page 193, lines 16-22) to: obtain, from one or more storage locations (e.g., see the computing device noted on page 193, lines 16-22), a pixel of a prior upsampled frame of a video and a corresponding pixel of a current input frame of the video; and generate an upsampled output frame of the video (e.g., see at least page 187, paragraph 2, page 189, paragraph 2, and the abstract) (i.e., as noted in the conclusion, Hsiao's objective is achieved because "the proposed CNN model consists of modules including generation and selection of HR pixel candidates, fusion with LR input, residual learning and bidirectional architecture").

However, Hsiao does not specifically teach to calculate, for the current input frame of the video, an exposure adjustment value; provide the exposure adjustment value for the current input frame of the video and the prior upsampled frame of the video as input to one or more neural networks that infer one or more blending weights to blend at least one pixel of the current input frame of the video and at least one corresponding pixel of the prior upsampled frame of the video; and generate an upsampled output frame of the video by applying the one or more inferred blending weights to the at least one pixel of the current input frame of the video and the at least one corresponding pixel of the prior upsampled frame of the video.
In analogous art, Kwon teaches a technology of blending a high dynamic range (HDR) image or a plurality of images captured with different exposure settings to create multiple images (e.g., see at least 0003: "The generating of the at least one multi-exposure image may include generating the at least one multi-exposure image by generating a weight to perform image blending corresponding to each of the at least one area of interest and processing the HDR image or the plurality of images captured with different exposure settings according to the weight"; see also 0020). In analogous art, Ho further teaches using AI to learn to blend the data and to generate the blending weight (see 0040, and see also the abstract for a broader overview).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date, given the teachings of Kwon and Ho, to modify Hsiao to try to calculate, for the current input frame of the video, an exposure adjustment value; provide the exposure adjustment value for the current input frame of the video and the prior upsampled frame of the video as input to one or more neural networks that infer one or more blending weights to blend at least one pixel of the current input frame of the video and at least one corresponding pixel of the prior upsampled frame of the video; and generate an upsampled output frame of the video by applying the one or more inferred blending weights to the at least one pixel of the current input frame of the video and the at least one corresponding pixel of the prior upsampled frame of the video, for the purpose of improving image processing (e.g., image resolution).

Consider Claims 2, 8, 14, 20, and 26: Hsiao teaches the claimed invention except the one or more processors of claim 1, wherein the exposure adjustment value comprises an exposure value calculated for at least the current input frame of the video.
In analogous art, Kwon teaches in 0044: "A HDR image or a plurality of images with different exposures are received (in 410). Where the plurality of images with different exposures are acquired, the exposure times may be adjusted so that the dynamic ranges of the images acquired respectively at the exposure times overlap each other." Therefore, it would have been obvious to a person of ordinary skill in the art to try wherein the exposure adjustment value comprises an exposure value calculated for at least the current input frame of the video, for the purpose of blending the images.

Consider Claims 3, 9, 15, 21, and 27: Hsiao teaches the claimed invention except wherein the exposure value is used to adjust brightness values of the current input frame of the video and the prior upsampled frame of the video. In analogous art, Kwon teaches in 0006: "Various methods of generating HDR images exist, and one such method is to expand the dynamic range of images by blending a plurality of images with different exposure settings." A dynamic range (DR) of a digital image is defined as a ratio of the brightness of the darkest pixel of the digital image with respect to the brightness of the brightest pixel of the digital image. Therefore, it would have been obvious to a person of ordinary skill in the art to try wherein the exposure value is used to adjust brightness values of the current input frame of the video and the prior upsampled frame of the video, for the purpose of adjusting the dynamic range.

Consider Claims 4, 10, 16, 22, and 28: Hsiao teaches the claimed invention except wherein the circuitry is further to use one or more neural networks to infer blending weights for corresponding pixels of at least the current input frame of the video and the prior upsampled frame of the video, based, at least in part, on the adjusted brightness values. In analogous art, Kwon teaches generating a weight with respect to the exposure values (e.g., see at least 0003 and 0020).
However, Kwon does not specifically teach wherein the blending weights are inferred. In analogous art, Ho teaches using AI to learn to blend the data and to generate the blending weight (see 0040, and see also the abstract for a broader overview). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify Hsiao, as modified by Kwon and further modified by Ho, to achieve the result wherein the circuitry is further to use one or more neural networks to infer blending weights for corresponding pixels of at least the current input frame of the video and the prior upsampled frame of the video, based, at least in part, on the adjusted brightness values, for the purpose of improving image processing.

Claims 5-6, 11-12, 17-18, 23-24, and 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Hsiao in view of Kwon, further in view of Ho, and further in view of Kalantari et al., US Patent Pub. No. 2019/0096046 (hereinafter "Kalantari").

Consider Claims 5, 11, 17, 23, and 29: Hsiao, as modified by Kwon and further modified by Ho, teaches the claimed invention except wherein the circuitry is further to increase a color range of one or more output frames generated based at least in part upon the blending weights for the current input frame of the video and the prior upsampled frame of the video.
In analogous art, Kalantari teaches to increase a color range of one or more output images generated based at least in part upon the blending weights for the current input image and the prior upsampled image (e.g., see estimating blending weights with respect to color, noted in at least 0058, 0105, and 0123-0124). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify Hsiao, as modified by Kwon, Ho, and Kalantari, to achieve the result wherein the circuitry is further to increase a color range of one or more output frames generated based at least in part upon the blending weights for the current input frame of the video and the prior upsampled frame of the video, for the purpose of improving the image.

Consider Claims 6, 12, 18, 24, and 30: Hsiao, as modified by Kwon and further modified by Ho, teaches the claimed invention except wherein the blending weights are applied to color values from the current input frame of the video and the prior upsampled frame of the video, and wherein the color values are determined in part using an accumulation of values determined using a rendering application-provided exposure value. In analogous art, Kalantari teaches wherein the blending weights are applied to color values from the current input image and the prior upsampled image (e.g., see estimating blending weights with respect to color, noted in at least 0058, 0105, and 0123-0124), and wherein the color values are determined in part using an accumulation of values determined using a rendering application-provided exposure value (e.g., see the reconstruction application in 0088 and 0093).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify Hsiao, as modified by Kwon, Ho, and Kalantari, to achieve the result wherein the blending weights are applied to color values from the current input frame of the video and the prior upsampled frame of the video, and wherein the color values are determined in part using an accumulation of values determined using a rendering application-provided exposure value, for the purpose of improving the image.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES TERRELL SHEDRICK, whose telephone number is (571) 272-8621. The examiner can normally be reached 8A-5P.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew D. Anderson, can be reached at (571) 272-4177. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CHARLES T SHEDRICK/Primary Examiner, Art Unit 2646
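The claim-1 pipeline at issue in the §103 rejection (exposure-adjust the current input frame and the prior upsampled frame, infer per-pixel blending weights, apply them to form the output frame) can be sketched as follows. This is a minimal illustration, not the application's actual implementation; the function names and the constant-weight stand-in for the neural network are hypothetical.

```python
import numpy as np

def blend_upsampled_frames(current_frame, prior_upsampled, exposure_value, infer_weights):
    """Hypothetical sketch of the claimed steps: exposure-adjust both frames,
    infer per-pixel blending weights, then blend into the output frame."""
    cur = current_frame * exposure_value      # exposure-adjusted current input
    prev = prior_upsampled * exposure_value   # exposure-adjusted prior output
    weights = infer_weights(cur, prev)        # stand-in for the neural network
    return weights * cur + (1.0 - weights) * prev  # weighted per-pixel blend

# Toy stand-in "network": a fixed 50/50 blend for every pixel.
constant_half = lambda cur, prev: np.full_like(cur, 0.5)

out = blend_upsampled_frames(
    current_frame=np.ones((4, 4), dtype=np.float32),
    prior_upsampled=np.zeros((4, 4), dtype=np.float32),
    exposure_value=1.0,
    infer_weights=constant_half,
)
print(out[0, 0])  # prints 0.5, the midpoint of the two frames
```

In the claimed arrangement the `infer_weights` step would be one or more trained neural networks taking the exposure adjustment value as an input, which is precisely the limitation the examiner maps to Kwon's exposure-based weights and Ho's learned blending.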

Prosecution Timeline

Oct 29, 2021: Application Filed
Sep 29, 2023: Non-Final Rejection — §103
Jan 17, 2024: Interview Requested
Jan 26, 2024: Applicant Interview (Telephonic)
Jan 26, 2024: Examiner Interview Summary
Feb 29, 2024: Response Filed
Jun 01, 2024: Final Rejection — §103
Oct 02, 2024: Interview Requested
Oct 08, 2024: Examiner Interview Summary
Oct 08, 2024: Applicant Interview (Telephonic)
Oct 08, 2024: Response after Non-Final Action
Dec 06, 2024: Notice of Allowance
Feb 06, 2025: Request for Continued Examination
Feb 07, 2025: Response after Non-Final Action
Mar 08, 2025: Non-Final Rejection — §103
Apr 19, 2025: Applicant Interview (Telephonic)
Apr 19, 2025: Examiner Interview Summary
Jun 13, 2025: Response Filed
Sep 24, 2025: Final Rejection — §103
Nov 14, 2025: Examiner Interview Summary
Nov 14, 2025: Applicant Interview (Telephonic)
Jan 29, 2026: Request for Continued Examination
Feb 02, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604209: BASE STATION ALLOCATION SUPPORT APPARATUS, BASE STATION ALLOCATION SUPPORT METHOD AND PROGRAM
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12597171: SYSTEMS AND METHODS FOR 3D POINT CLOUD DENSIFICATION
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12591948: METHOD AND APPARATUS WITH IMAGE DISPLAY
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12581457: PAGING MESSAGE MONITORING METHOD, PAGING MESSAGE MONITORING APPARATUS, AND STORAGE MEDIUM
Granted Mar 17, 2026 • 2y 5m to grant

Patent 12581291: AUTHENTICATION METHOD AND RELATED APPARATUS
Granted Mar 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 77%
With Interview: 87% (+9.5%)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 993 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month