Prosecution Insights
Last updated: April 19, 2026
Application No. 18/518,614

METHOD OF LOCAL IMPLICIT NORMALIZING FLOW FOR ARBITRARY-SCALE IMAGE SUPER-RESOLUTION, AND ASSOCIATED APPARATUS

Status: Final Rejection (§103)
Filed: Nov 24, 2023
Examiner: SUMMERS, GEOFFREY E
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: MediaTek Inc.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72%, above average (249 granted / 348 resolved; +9.6% vs TC avg)
Interview Lift: +35.4%, strong (allow rate among resolved cases with an interview vs. without)
Typical Timeline: 2y 5m average prosecution; 27 applications currently pending
Career History: 375 total applications across all art units
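The headline allow rate follows directly from the counts shown above; a quick check in Python, using only the values on this page:

```python
# Career allow rate from the granted/resolved counts shown above.
granted, resolved = 249, 348
print(f"{granted / resolved:.1%}")  # 71.6%, displayed as 72%
```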

Statute-Specific Performance

Statute   Rate    vs TC avg
§101       9.6%   -30.4%
§103      41.0%    +1.0%
§102      16.3%   -23.7%
§112      28.6%   -11.4%

TC average estimate shown as the baseline (the black line in the original chart). Based on career data from 348 resolved cases.
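Each "vs TC avg" delta implies a Tech Center baseline of the examiner's rate minus the delta. Notably, the implied baseline works out to a flat 40.0% for every statute, consistent with the footnote calling the baseline an estimate. A quick check with the table's values:

```python
# Implied Tech Center baseline per statute: examiner_rate - delta
# (both in percentage points, values from the table above).
rows = {"101": (9.6, -30.4), "103": (41.0, 1.0),
        "102": (16.3, -23.7), "112": (28.6, -11.4)}
for statute, (rate, delta) in rows.items():
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% each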

Office Action

§103
DETAILED ACTION

Response to Amendment

Claims 1-13 were previously pending. Applicant’s amendment filed February 12, 2026, has been entered in full. Claims 1, 5, 9 and 10 are amended. No claims are added or cancelled. Accordingly, claims 1-13 are now pending.

Response to Arguments

Applicant argues that the amendments have overcome the previous objection to claim 10 (Remarks filed February 12, 2026, hereinafter Remarks: Page 6). Examiner agrees. The previous objection to claim 10 is withdrawn.

Applicant argues that the amendments to claim 1 have overcome the previous rejection under 35 U.S.C. 112(a) (Remarks: Pages 6-8). Examiner agrees. The previous rejection under 35 U.S.C. 112(a) is withdrawn.

Applicant argues that the amendments to the claims have overcome the previous rejections under 35 U.S.C. 112(b) (Remarks: Pages 8-9). Examiner agrees. The previous rejections under 35 U.S.C. 112(b) are withdrawn.

Applicant argues that the amendments to the claims have overcome the previous rejections under 35 U.S.C. 103 (Remarks: Pages 9-11). Examiner respectfully disagrees.

Regarding claim 1, Applicant argues that Ma “fails to teach patch generation with non-overlapping patches in an HR image” (Remarks: Page 10). Examiner respectfully disagrees. First, the claim recites “non-overlapping patches in the input image” rather than the output image. Second, the claim requires obtaining “multiple super-resolution predictions for different locations … wherein the different locations correspond to non-overlapping patches in the input image.” The scope of the requirement that the locations “correspond” to non-overlapping patches is quite broad and does not require actually dividing the input image into patches, processing individual patches, etc. Indeed, any locations in an input image “correspond to non-overlapping patches in the input image” because an input image can be arbitrarily divided into non-overlapping patches and a given location in the input image will necessarily correspond to one such non-overlapping patch. Third, Ma does divide an input image into non-overlapping patches between features of the input image, which are used to produce an ensemble prediction for a given local coordinate within a patch – i.e., each patch defined by corners t as described in Sec. III.A and illustrated in Fig. 4. Fourth, the random patch sampling described by Ma is used to generate ground-truth low-resolution and high-resolution image pairs for training (e.g., page 3675, 1st par.) and is not employed during inference.

Further regarding claim 1, Applicant asserts that “Ma teaches that the co-ordinate information is removed from the input, and vectors are aggregated before being fed into the implicit function” and argues that Ma does not teach using local co-ordinates (Remarks: Page 10). Examiner respectfully disagrees. In Ma, “[t]he reconstruction process is achieved by” υ_{x,y} = f(m'*, x + Δx, y + Δy, c_h, c_w) (Sec. III.B, eqn. 5 and text above). The local coordinates are (x, y) (see, e.g., definition at Sec. III.A, 1st par.) and the reconstruction is plainly based on these local coordinates. The teachings of Ma to which Applicant is apparently referring (without citation) are at page 3673 and concern calculation of offsets Δx, Δy at equation 8, rather than the reconstruction as a whole.

Regarding claims 10 and 11, Applicant presents the following Remarks at page 11:

[Image: excerpt of Applicant’s Remarks, page 11]

The basis for these statements is respectfully unclear. For example, claims 10 and 11 make no mention of “pixel information”, of “pixel values”, or of any “pixels” at all. Furthermore, as discussed above, Ma does consider coordinate information (x, y) in addition to offsets.
Admitted Prior Art

In the Office Action dated November 18, 2025, Examiner took Official Notice of facts in the following instance(s):

At Page 11: “However, Examiner takes Official Notice that it is old and well-known in the art of image analysis to implement an image processing method as a processing circuit within an electronic device that is used to perform the method. For example, a processor (such as a CPU and/or GPU) within a computer. Such computer implementation advantageously allows an image processing method to be performed quickly and efficiently.”

Regarding Official Notice, MPEP 2144.03(C) includes the following instructions:

“To adequately traverse such a finding, an applicant must specifically point out the supposed errors in the examiner’s action, which would include stating why the noticed fact is not considered to be common knowledge or well-known in the art.”

“A general allegation that the claims define a patentable invention without any reference to the examiner’s assertion of official notice would be inadequate.”

“If applicant does not traverse the examiner’s assertion of official notice or applicant’s traverse is not adequate, the examiner should clearly indicate in the next Office action that the common knowledge or well-known in the art statement is taken to be admitted prior art because applicant either failed to traverse the examiner’s assertion of official notice or that the traverse was inadequate. If the traverse was inadequate, the examiner should include an explanation as to why it was inadequate.”

In the reply filed February 12, 2026, Applicant generally alleges that the claims define a patentable invention without any reference to Examiner’s assertion of Official Notice, which is an inadequate traverse. Therefore, as required by the MPEP, Examiner clearly indicates that the Official Notice statement(s) noted above is/are taken to be admitted prior art because Applicant either failed to traverse it/them or inadequately traversed it/them.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-9 and 12-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over ‘Ma’ (“Recovering Realistic Details for Magnification-Arbitrary Image Super-Resolution,” 17 May 2022).
Regarding claim 1, Ma teaches a method of local implicit normalizing flow for arbitrary-scale image super-resolution (e.g., Figure 3; see further mapping below), the method being applied to a processing circuit within an electronic device (see Note Regarding Hardware below), the method comprising:

utilizing the processing circuit to (see Note Regarding Hardware below) run a trained model of the local implicit normalizing flow framework (e.g., Fig. 3 shows running of various trained models of a local implicit normalizing flow framework, such as those labeled E and F; the framework is within the scope of a “local implicit normalizing flow” framework at least because it calculates implicit pixel flows (IPF) that are used to normalize/correct a local implicit image function (LIIF) in order to obtain sharper super-resolution outputs; see, e.g., Section III for details) to start performing arbitrary-scale image super-resolution (e.g., Sec. I, second paragraph, the LIIF technique introduced by Chen et al. achieves “image representation of arbitrary resolutions taking advantage of the flexibility of sampling continuous coordinates” – i.e., the super-resolution is “arbitrary-scale” because any resolution/scale of pixels can be sampled from the continuous coordinates; also see, e.g., Sec. I, last par., last bullet, “magnification-arbitrary image super-resolution”) according to at least one input image (e.g., Fig. 3, left, input image), for generating at least one output image (e.g., Fig. 3, top-right, output image, which is a super-resolution version of the input image), wherein a selected scale of the at least one output image with respect to the at least one input image is an arbitrary-scale (see discussion above); and

during performing the arbitrary-scale image super-resolution with the trained model (i.e., during the processing illustrated in Fig. 3), performing prediction processing to obtain multiple super-resolution predictions for different locations of a space of the input image (e.g., Sec. III, eqn. 5, prediction processing is performed to predict RGB values υ at each of multiple locations (x, y) of the input image space) according to the input image (e.g., Sec. III, eqn. 5 is a function of m'*, which are features in the input image feature map M extracted by encoder E, so the predictions at eqn. 5 are according to the input image), a local co-ordinate (e.g., Sec. III, eqn. 5, (x, y)) and a cell size (e.g., Sec. III, eqn. 5, cell size defined by height c_h and width c_w), wherein the different locations correspond to non-overlapping patches in the input image (any input image can be arbitrarily divided into non-overlapping patches, so the different locations (x, y) within the input image fall within the scope of “correspond[ing] to non-overlapping patches in the input image”; i.e., the claim does not require, for example, creating or processing non-overlapping patches in the input image; furthermore, non-overlapping patches are formed between sets of features in the input image and each location (x, y) falls within one of those patches – see, e.g., corner features at t in Sec. III.A and patches between corners illustrated in Fig. 4), and a same non-super-resolution input image among the at least one input image is given (e.g., Fig. 3, same input image is given for the whole framework, including components E and f that are part of LIIF), in order to generate the at least one output image (i.e., I_{x,y} in equation 3; υ_{x,y} in eqn. 5 is the RGB values of a pixel in the output image).
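Illustrative aside, not part of the Office Action: the per-location query pattern of eqn. 5 (a local feature vector plus a local coordinate and cell size fed to an implicit function) can be sketched as follows. The decoder and every name here are hypothetical stand-ins, not Ma's code:

```python
import numpy as np

def query_rgb(feature, local_xy, cell_hw, decoder):
    """Schematic of eqn. 5: v_{x,y} = f(m'*, x + dx, y + dy, c_h, c_w).
    `decoder` stands in for the trained implicit function f."""
    x, y = local_xy
    c_h, c_w = cell_hw
    z = np.concatenate([feature, [x, y, c_h, c_w]])
    return decoder(z)  # predicted RGB at this location

def toy_decoder(z):
    return np.tanh(z[:3])  # placeholder for a trained MLP decoder

# One prediction per queried location; nearby locations condition on
# the local feature vector m'* of the patch they fall in.
rgb = query_rgb(np.zeros(64), (0.25, -0.1), (1 / 128, 1 / 128), toy_decoder)
```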
Note Regarding Hardware. While Ma’s teachings imply the use of some sort of processing circuit, Ma focuses on describing its image processing method, rather than the hardware used to implement that method, and Ma does not explicitly teach the use of a processing circuit within an electronic device (e.g., a processor in a computer). However, it has been taken as admitted prior art that it is old and well-known in the art of image analysis to implement an image processing method as a processing circuit within an electronic device that is used to perform the method. For example, a processor (such as a CPU and/or GPU) within a computer. Such computer implementation advantageously allows an image processing method to be performed quickly and efficiently. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to implement the image processing method of Ma as a processing circuit (e.g., a CPU and/or GPU) within an electronic device (e.g., a computer) in order to improve the method with the reasonable expectation that this would result in a method that could advantageously be performed quickly and efficiently. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Ma to obtain the invention as specified in claim 1.

Regarding claim 2, Ma teaches the method of claim 1, further comprising: changing a controllable super-resolution preference coefficient of the local implicit normalizing flow framework to perform the arbitrary-scale image super-resolution with the trained model according to the at least one input image to generate at least one other output image, wherein said at least one output image and said at least one other output image are super-resolution results of different preferences produced with a signal model which is the trained model (Example A: e.g., Fig. 5, upscaling factor is changed to different values 8, 16, and 24, which are used to generate different output images using the IPF implicit normalizing flow framework; e.g., Sec. III.B.2, upscaling factor is a controllable super-resolution preference coefficient k used in the local implicit normalizing flow framework; Example B: Sec. IV.B.6, super-resolution output images are generated for a variety of different values of controllable preference coefficients γ, where the values of parameters γ control preferences for PSNR, SSIM, and LPIPS performances).

Regarding claim 3, Ma teaches the method of claim 2, wherein the controllable super-resolution preference coefficient represents a temperature coefficient τ of the trained model (Example A: a “temperature” can refer to a parameter that controls the strength or magnitude of an effect produced by a machine learning model; the upscaling factor k controls the strength or magnitude of the super-resolution effect provided by the super-resolution framework of Ma – i.e., higher k results in a higher degree of magnification; Example B: a “temperature” can refer to a parameter that balances the influences of different components in a machine learning model; the parameters γ in Ma balance the influences of different types of loss functions – e.g., Sec. III, eqn. 10 – and balance performance according to different types of error metrics – e.g., Sec. IV.B.6; note that the specific Greek letter used as the variable [i.e., τ] is arbitrary and non-limiting).
Regarding claim 4, Ma teaches the method of claim 1, wherein the local implicit normalizing flow framework is arranged to reconstruct at least one high-resolution (HR) image (e.g., Fig. 3, right, high-resolution output image) from at least one low-resolution (LR) counterpart (e.g., Fig. 3, left, low-resolution input image) by recovering missing high-frequency information (e.g., Fig. 1 and associated text, the local implicit normalizing flow framework taught by Ma is used to recover missing high-frequency information, thereby transforming a blurry representation into a sharp representation), wherein the at least one output image belongs to the at least one HR image, and the at least one input image belongs to the at least one LR counterpart (see above and Fig. 3).

Regarding claim 5, Ma teaches the method of claim 1, and Ma further teaches that the local implicit normalizing flow framework is arranged to perform the arbitrary-scale image super-resolution with the trained model, wherein after any upsampling scale is determined, further adjusting output resolutions is allowed (as discussed above with respect to claim 1, the upsampling scale is arbitrary and can be adjusted to any value; for example, as shown in Table II, even when the upsampling scale of x8 is determined and used, it is allowed to be further adjusted to x12 or x16 in the described IPF technique).

Regarding claim 6, Ma teaches the method of claim 1, wherein the local implicit normalizing flow framework is arranged to perform training of the trained model in a training phase (e.g., Sec. III.C describes training phase), and perform the arbitrary-scale image super-resolution with the trained model in an inference phase (e.g., Figs. 3 and 5).

Regarding claim 7, Ma teaches the method of claim 6, wherein in the training phase, the local implicit normalizing flow framework is arranged to formulate super-resolution as a problem of learning a distribution of a local texture patch (e.g., Sec. III.A, last par., discusses distribution produced by trained model f; e.g., Sec. IV.A.2 discusses training on cropped local texture patches).

Regarding claim 8, Ma teaches the method of claim 7, wherein in the inference phase, with the learned distribution, the local implicit normalizing flow framework is arranged to perform the arbitrary-scale image super-resolution with the trained model by generating at least one local texture separately for each non-overlapping patch in any output image among the at least one output image (e.g., page 3675, 1st par., 60x60-pixel patches are cropped for performing super-resolution; also see, e.g., Fig. 5).

Regarding claim 9, Ma teaches the method of claim 6, wherein the local implicit normalizing flow framework is arranged to perform the training of the trained model to complete learning at least one distribution of at least one local texture patch in the training phase (e.g., Sec. IV.A.2, local texture patches are cropped and used for training; also see, e.g., Sec. III.A, last par., discussing distribution produced by trained model f), for performing the arbitrary-scale image super-resolution with the trained model to obtain the multiple super-resolution predictions for said different locations of the space in the inference phase, in order to generate the at least one output image (the purpose and/or intended use of training the model is so that the model can perform inference, which (as explained in the rejection of claim 1) includes obtaining the multiple predictions for different locations in order to generate the output image).
Regarding claim 12, Ma teaches the method of claim 6, wherein the local implicit normalizing flow framework is arranged to perform patch-based distribution learning during performing the training of the trained model in the training phase (e.g., Sec. IV.A.2, input patch size is set to L = 60 – i.e., 60x60-pixel patches are input to the model; this hyperparameter is used in training and inference), and perform patch-based inference during performing the arbitrary-scale image super-resolution with the trained model in the inference phase (see, e.g., above and Fig. 5, which shows inference results for different patches).

Regarding claim 13, Examiner notes that the claim recites an apparatus including the processing circuit within the electronic device of claim 1. As explained in the rejection of claim 1, Ma has been modified with the processing circuit within the electronic device. Therefore, Ma as applied to claim 1 further teaches the apparatus of claim 13.

Claim(s) 10 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ma as applied above, and further in view of ‘Lee’ (“Local Texture Estimator for Implicit Representation Function,” 21 November 2021).

Regarding claim 10, Ma teaches the method of claim 6. Ma further teaches that the local implicit normalizing flow framework comprises multiple modules (e.g., Fig. 3, E, F, g, f, etc., and combinations thereof) corresponding to different types of models (e.g., Fig. 3, encoders, decoders, etc.), and the multiple modules corresponding to said different types of models comprise a local implicit module (e.g., Fig. 3, combination of E and f, which corresponds to the LIIF taught by Chen et al.) and a coordinate conditional normalizing flow (e.g., Fig. 3, F, g, and/or c), wherein the local implicit module comprises multiple sub-modules (e.g., Fig. 3, encoder E and decoder f). Ma’s framework uses the local implicit image function (LIIF) described by Chen et al., but adds an implicit pixel flow (IPF) to sharpen its outputs (e.g., Sec. III.A, especially “In Fig. 3, the E and f modules illustrate the pipeline of LIIF while the other components show the data flow of the proposed IPF method”).

Ma does not teach the multiple sub-modules of the local implicit module (i.e., the LIIF) comprising a set of first sub-modules for performing frequency estimation, and at least one second sub-module for performing Fourier analysis; and the local implicit normalizing flow framework is arranged to utilize the set of first sub-modules and the at least one second sub-module to perform the frequency estimation and the Fourier analysis, respectively, in order to retain more image details during learning at least one distribution of at least one local texture patch in the training phase, for being used in the inference phase.

However, Lee does teach a technique for improving the LIIF of Chen et al. (Ref [4] in Lee) by adding sub-modules (e.g., Fig. 2, local texture estimator (LTE), which is shown as a pink-shaded region in a color version of the reference) comprising a set of first sub-modules for performing frequency estimation (e.g., Fig. 2, lower Conv in LTE, which produces frequency estimates), and at least one second sub-module for performing Fourier analysis (e.g., Fig. 2, FC layer that produces phase information; e.g., Fig. 2, cos and sin calculations); and the local implicit normalizing flow framework (i.e., the overall super-resolution process illustrated in Fig. 2) is arranged to utilize the set of first sub-modules and the at least one second sub-module to perform the frequency estimation and the Fourier analysis (e.g., Fig. 2, the estimation of amplitude, frequency, and phase information), respectively, in order to retain more image details during learning at least one distribution of at least one local texture patch in the training phase, for being used in the inference phase (e.g., Sec. 3, 1st par.; Sec. 5.2; the proposed network with LTE retains more details than LIIF).

As discussed above, Ma uses the encoder and decoder of Chen’s LIIF (i.e., E and f in Fig. 3 of Ma). Lee also uses the encoder and decoder of Chen’s LIIF (i.e., E_φ and f_θ), but adds extra sub-modules in order to improve its performance (e.g., Fig. 2, extra LTE sub-modules; e.g., Sec. 5.2 and Fig. 4 show improved performance of Lee’s LTE-based model relative to Chen’s LIIF). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method of Ma as applied above with the additional local texture estimator (LTE) sub-modules of Lee in order to improve the method with the reasonable expectation that this would result in a method that could obtain super-resolution images with higher performance, such as by better preserving details. This technique for improving the method of Ma was within the ordinary ability of one of ordinary skill in the art based on the teachings of Lee. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Ma and Lee to obtain the invention as specified in claim 10.
Regarding claim 11, Ma in view of Lee teaches the method of claim 10, and Lee further teaches that the set of first sub-modules comprise at least one encoder module (e.g., Fig. 2, encoder E_φ), multiple convolutional layers modules (e.g., Fig. 2, Conv), at least one multiplier module (e.g., Fig. 2, ⨂) and at least one linear module (e.g., Fig. 2, ⨁), and the at least one second sub-module comprises a Fourier feature formation (e.g., Sec. 3, equation 5 describes formation of Fourier features) and ensemble module (e.g., Sec. 3, ensembling with weights w in equations 1 and 4).
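Illustrative aside, not part of the Office Action: the amplitude/frequency/phase decomposition the rejection cites from Lee can be sketched as a sinusoidal feature map, with amplitudes modulating cos/sin of a frequency-coordinate product plus phase. Shapes and names below are illustrative assumptions, not Lee's actual code:

```python
import numpy as np

def fourier_features(amp, freq, phase, coord):
    """Amplitudes modulating sin/cos of (frequency . coord + phase)."""
    arg = 2 * np.pi * (freq @ coord + phase)  # shape (K,)
    return np.concatenate([amp * np.cos(arg), amp * np.sin(arg)])

K = 8  # number of estimated frequencies (hypothetical)
feat = fourier_features(np.ones(K), np.random.rand(K, 2),
                        np.zeros(K), np.array([0.3, -0.2]))  # shape (2K,)
```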
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEOFFREY E SUMMERS whose telephone number is (571) 272-9915. The examiner can normally be reached Monday-Friday, 7:00 AM to 3:30 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEOFFREY E SUMMERS/
Examiner, Art Unit 2669

Prosecution Timeline

Nov 24, 2023
Application Filed
Nov 12, 2025
Non-Final Rejection — §103
Feb 12, 2026
Response Filed
Mar 23, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586379
SYSTEM FOR DETECTING OCCURRENCE PERIOD OF CYCLICAL EVENT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12561755
System and Method for Image Super-Resolution
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12555205
METHOD AND APPARATUS WITH IMAGE DEBLURRING
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12541838
INSPECTION APPARATUS AND REFERENCE IMAGE GENERATION METHOD
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12536682
METHOD AND SYSTEM FOR GENERATING A DEPTH MAP
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+35.4%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate

Based on 348 resolved cases by this examiner. Grant probability derived from career allow rate.
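The 99% with-interview figure and the +35.4% lift are mutually consistent if the lift is read as an additive difference in allow rate between resolved cases with and without an interview. That reading is an inference from the numbers on this page, not a stated formula:

```python
# Inferred relationship between the interview figures shown above.
rate_with = 99.0   # % allow rate, resolved cases with interview
lift = 35.4        # percentage-point interview lift
rate_without = rate_with - lift
print(rate_without)  # 63.6; the blended career rate shown is 72%
```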
